# Study of two-body doubly charmful baryonic \(B\) decays with \(SU(3)\) flavor symmetry

Yu-Kuo Hsiao | 2023-09-29 | http://arxiv.org/abs/2309.16919v1
###### Abstract
Within the framework of \(SU(3)\) flavor symmetry, we investigate two-body doubly charmful baryonic \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) decays, where \({\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) represents the anti-triplet charmed dibaryon. We determine the \(SU(3)_{f}\) amplitudes and calculate \({\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{-})=(3.4^{+1.0}_{-0.9})\times 10^{-5}\) and \({\cal B}(\bar{B}_{s}^{0}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{-})=(3.9^{+1.2}_{-1.0})\times 10^{-5}\) induced by the single \(W\)-emission configuration. We find that the \(W\)-exchange amplitude, neglected in previous studies, needs to be taken into account. It can cause destructive interference with the \(W\)-emission amplitude, alleviating the significant discrepancy between the theoretical estimation and experimental data for \({\cal B}(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})\). To test other interfering decay channels, we calculate \({\cal B}(\bar{B}_{s}^{0}\to\Xi_{c}^{0(+)}\bar{\Xi}_{c}^{0(+)})=(3.0^{+1.4}_{-1.1})\times 10^{-4}\) and \({\cal B}(\bar{B}^{0}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{0})=(1.5^{+0.7}_{-0.6})\times 10^{-5}\). We estimate non-zero branching fractions for the pure \(W\)-exchange decay channels, specifically \({\cal B}(\bar{B}_{s}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})=(8.1^{+1.7}_{-1.5})\times 10^{-5}\) and \({\cal B}(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Xi}_{c}^{-})=(3.0\pm 0.6)\times 10^{-6}\). Additionally, we predict \({\cal B}(B_{c}^{+}\to\Xi_{c}^{+}\bar{\Xi}_{c}^{0})=(2.8^{+0.9}_{-0.7})\times 10^{-4}\) and \({\cal B}(B_{c}^{+}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{0})=(1.6^{+0.5}_{-0.4})\times 10^{-5}\), which are accessible to experimental facilities such as LHCb.
## I Introduction
The tree-level dominated two-body charmless baryonic \(B\) meson decays, \(B\to{\bf B}\bar{\bf B}^{\prime}\), can proceed through the \(W\)-boson exchange (\(W_{\rm ex}\)), \(W\)-boson annihilation (\(W_{\rm an}\)), and \(W\)-boson emission (\(W_{\rm em}\)) decay configurations. In analogy with leptonic \(B\) decay, where the \(W_{\rm an}\) amplitude \({\cal M}_{\rm wan}(B\to\ell\bar{\nu})\propto m_{\ell}\bar{u}_{\ell}(1+\gamma_{5})v_{\bar{\nu}}\) involves a tiny lepton mass \(m_{\ell}\) corresponding to helicity suppression [1; 2], \({\cal M}_{\rm wex(wan)}(B\to{\bf B}\bar{\bf B}^{\prime})\propto m_{-}\langle{\bf B}\bar{\bf B}^{\prime}|\bar{q}q^{\prime}|0\rangle+m_{+}\langle{\bf B}\bar{\bf B}^{\prime}|\bar{q}\gamma_{5}q^{\prime}|0\rangle\) with \(m_{\mp}=m_{q}\mp m_{q^{\prime}}\) is considered to be more suppressed than \({\cal M}_{\rm wem}(B\to{\bf B}\bar{\bf B}^{\prime})\)[3]. Hence, this raises the question of whether one can really neglect the \(W_{\rm ex(an)}\) contribution to the branching fractions [3; 4; 5; 6; 7; 8; 9; 10].
In the study of singly charmful baryonic \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}\) decays, the \(W_{\rm ex(an)}\) amplitude was also neglected [11; 12]. Nonetheless, it has been found that \({\cal M}_{\rm wex(wan)}(B\to{\bf B}_{c}\bar{\bf B}^{\prime})\propto m_{c}\langle{\bf B}_{c}\bar{\bf B}^{\prime}|\bar{c}(1+\gamma_{5})q|0\rangle\) with \(m_{c}\gg m_{q}\) can alleviate the helicity suppression [13]. This results in \({\cal B}_{\rm wex}(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{p})\) and \({\cal B}_{\rm wex}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Sigma}^{-})\) being predicted to be of order \(10^{-5}\), much more accessible than \({\cal B}(B\to{\bf B}\bar{\bf B}^{\prime})\sim 10^{-8}-10^{-7}\) for the test of a non-negligible \(W_{\rm ex(an)}\) contribution. However, these decays have not been observed to date.
It is worth noting that two-body doubly charmful baryonic \(B\) decays, \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\), have provided a possible experimental indication of a non-negligible contribution from the \(W_{\rm ex}\) term. The measured branching fractions for \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) are reported as follows:
\[\begin{split}{\cal B}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Lambda}^{-}_{c})&=(1.2\pm 0.8)\times 10^{-3}\ [14]\,,\\ {\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Lambda}^{-}_{c})&=(9.5\pm 2.3)\times 10^{-4}\ [14]\,,\\ {\cal B}(\bar{B}^{0}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})&<1.6\times 10^{-5}\ [14,\,15]\\ &=(2.2^{+2.2}_{-1.6}\pm 1.3)\times 10^{-5}\ [16]\,,\\ {\cal B}(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})&<9.9\times 10^{-5}\ [14,\,15]\,.\end{split}\tag{1}\]
Initially, it was considered that \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) receives a single contribution from the \(W_{\rm em}\) topology [16; 17]. In Eq. (1), \({\cal B}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Lambda}^{-}_{c})\simeq{\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Lambda}^{-}_{c})\) seemingly supports this assumption. Nonetheless, it also leads to an estimation of \({\cal B}(\bar{B}^{0}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})\simeq(V_{cd}/V_{cs})^{2}(\tau_{\bar{B}^{0}}/\tau_{B^{-}}){\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Lambda}^{-}_{c})=(4.7\pm 1.1)\times 10^{-5}\) by utilizing the \({\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Lambda}^{-}_{c})\) value from Eq. (1). This deviates from the experimental upper limit of \(1.6\times 10^{-5}\) by around 3 standard deviations. Therefore, it is reasonable to infer that the \(W_{\rm ex}\) topology, overlooked in \(\bar{B}^{0}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c}\), should be taken into account. It can cause a destructive interference with the \(W_{\rm em}\) amplitude, thus reducing the overestimated branching fraction. Additionally, the \(W_{\rm ex}\) topology can induce a non-zero \(\mathcal{B}(\bar{B}^{0}_{s}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})\), warranting further examination.
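This estimate is simple enough to check directly. The following sketch reproduces the \(4.7\times 10^{-5}\) figure; the PDG lifetimes \(\tau_{\bar{B}^{0}}=1.519\) ps and \(\tau_{B^{-}}=1.638\) ps and the Wolfenstein value \(\lambda=0.225\) quoted later in Eq. (7) are assumed inputs:

```python
# A quick numerical check of the estimate above (a sketch; the PDG
# lifetimes tau_B0 = 1.519 ps and tau_B- = 1.638 ps are assumed inputs).
lam = 0.225                          # Wolfenstein parameter
Vcd_over_Vcs = lam / (1.0 - lam**2 / 2.0)
tau_ratio = 1.519 / 1.638            # tau(B0bar) / tau(B-)
B_ref = 9.5e-4                       # B(B- -> Xi_c^0 Lambdabar_c^-), Eq. (1)
estimate = Vcd_over_Vcs**2 * tau_ratio * B_ref
print(f"{estimate:.2e}")             # ~4.7e-05, exceeding the 1.6e-05 bound
```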
For clarification, a careful study of \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) is necessary. The \(SU(3)\) flavor symmetry can be a useful theoretical tool [3; 8; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], allowing us to parameterize the amplitudes without involving the complexity of model calculations. Hence, we propose using the \(SU(3)_{f}\) approach to specifically explore the \(W_{\rm ex}\) and \(W_{\rm em}\) contributions to \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\). The first observation of \(B_{c}^{+}\to J/\psi p\bar{p}\pi^{+}\) by LHCb [32] indicates a potential test-bed for the baryonic phenomena in \(B_{c}^{+}\) decays, such as the branching fraction [33; 34; 35; 36; 37; 38; 39; 40; 41], direct \(CP\) asymmetry [42; 43], triple product asymmetry [44], angular distribution [45], and exotic states [46; 47; 48] as studied in baryonic \(B\) decays. Therefore, we will estimate \(\mathcal{B}(B_{c}^{+}\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c})\) to initiate a theoretical investigation.
## II Formalism
To study the two-body doubly charmful baryonic \(B_{(c)}\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) decays with \(B_{c}\) denoting \(B_{c}^{+}(b\bar{c})\), the quark-level effective Hamiltonians for the \(b\to c\bar{q}q^{\prime}\) weak transitions are required, given by [49; 50]
\[\mathcal{H}^{b\to c\bar{q}q^{\prime}}_{eff} = \frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}^{*}\Big{[}c_{1}(\bar{ q}^{\prime}q)(\bar{c}b)+c_{2}(\bar{q}^{\prime}_{\beta}q_{\alpha})(\bar{c}_{ \alpha}b_{\beta})\Big{]}\,, \tag{2}\]
where \(G_{F}\) is the Fermi constant, and \(V_{cb}\) and \(V_{qq^{\prime}}\) with \(q=(u,c)\) and \(q^{\prime}=(s,d)\) are the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements. In Eq. (2), we define \((\bar{q}_{1}q_{2})=\bar{q}_{1}\gamma_{\mu}(1-\gamma_{5})q_{2}\)
and the subscripts \((\alpha,\beta)\) denote the color indices; moreover, \(c_{1,2}\) are the scale \((\mu)\)-dependent Wilson coefficients with \(\mu=m_{b}\) for the \(b\) decays. In the \(SU(3)_{f}\) representation, \({\cal H}^{b\to c\bar{c}q^{\prime}}_{eff}\) and \({\cal H}^{b\to c\bar{u}q^{\prime}}_{eff}\), with the Lorentz structure omitted, reduce to \(H^{i}\) and \(H^{i}_{j}\), respectively, where \(i\) and \(j\) run from 1 to 3 to represent the flavor indices. Explicitly, the nonzero entries are given by [27]
\[H_{1}^{2}=\lambda_{ud}\,,\;H_{1}^{3}=\lambda_{us}\,,\;H^{2}=\lambda_{cd}\,,\;H ^{3}=\lambda_{cs}\,, \tag{3}\]
with \(\lambda_{qq^{\prime}}\equiv V_{cb}V_{qq^{\prime}}^{*}\). Accordingly, we present the \(B\) meson and \({\bf B}_{c}\) baryon in the \(SU(3)_{f}\) forms:
\[B(B_{i}) = (B^{-},\bar{B}^{0},\bar{B}_{s}^{0})\,,\] \[{\bf B}_{c}({\bf B}_{c}^{ij}) = \left(\begin{array}{ccc}0&\Lambda_{c}^{+}&\Xi_{c}^{+}\\ -\Lambda_{c}^{+}&0&\Xi_{c}^{0}\\ -\Xi_{c}^{+}&-\Xi_{c}^{0}&0\end{array}\right)\,, \tag{4}\]
whereas \(B_{c}\) is a singlet. By connecting the flavor indices of the initial state to those of the effective Hamiltonian and final states, the \(SU(3)_{f}\) approach yields the amplitudes to be
\[{\cal M}(B\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}) = eB_{i}H^{i}{\bf B}_{c\,jk}\bar{\bf B}_{c}^{\prime\,jk}+c^{\prime }B_{i}H^{j}{\bf B}_{c\,jk}\bar{\bf B}_{c}^{\prime\,ik}\;,\] \[{\cal M}(B_{c}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}) = \bar{c}^{\prime}H_{j}^{i}{\bf B}_{c\,ik}\bar{\bf B}_{c}^{\prime \,jk}\;, \tag{5}\]
where the parameters \(e\) and \(c^{\prime}\) (\(\bar{c}^{\prime}\)) correspond to the \(W_{\rm ex}\) and \(W_{\rm em}\) configurations in Fig. 1a and Fig. 1b(c), respectively. For a later numerical analysis, we use the equation [14]:
\[{\cal B}(B_{(c)}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime})=\frac{G_{F}^{2}|\vec{p}_{\rm cm}|\tau_{B_{(c)}}}{16\pi m_{B_{(c)}}^{2}}|{\cal M}(B_{(c)}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime})|^{2}\,,\qquad|\vec{p}_{\rm cm}|=\frac{\sqrt{(m_{B_{(c)}}^{2}-M_{+}^{2})(m_{B_{(c)}}^{2}-M_{-}^{2})}}{2m_{B_{(c)}}}\,, \tag{6}\]

to compute the branching fractions, where \(M_{\pm}\equiv m_{{\bf B}_{c}}\pm m_{\bar{\bf B}_{c}^{\prime}}\), \(\vec{p}_{\rm cm}\) is the three-momentum of the \({\bf B}_{c}\) baryon in the \(B_{(c)}\) meson rest frame, and \(\tau_{B_{(c)}}\) stands for the \(B_{(c)}\) lifetime. The amplitude \({\cal M}(B_{(c)}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime})\) can be found in Table 1.
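The contractions in Eq. (5) are mechanical, and entries like those of Table 1 can be generated symbolically. Below is a small sympy sketch, assuming the antisymmetric tensor pattern of Eq. (4) for both \({\bf B}_{c}\) and \(\bar{\bf B}^{\prime}_{c}\); overall signs depend on phase conventions (Table 1 lists, e.g., \(-\lambda_{cd}(2e+c^{\prime})\) for \(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}\)):

```python
import sympy as sp

e, cp, lcd, lcs = sp.symbols("e c' lambda_cd lambda_cs")

def antitriplet(name):
    # antisymmetric 3x3 pattern of Eq. (4); rows/cols are flavor indices
    pats = {"Lc+": (0, 1), "Xc+": (0, 2), "Xc0": (1, 2)}
    r, c = pats[name]
    T = sp.zeros(3, 3)
    T[r, c], T[c, r] = 1, -1
    return T

H = [0, lcd, lcs]  # H^2 = lambda_cd, H^3 = lambda_cs (Eq. (3)), zero-based

def amplitude(i0, bc, bcbar):
    # Eq. (5): e * B_i H^i Bc_{jk} Bc'^{jk} + c' * B_i H^j Bc_{jk} Bc'^{ik}
    T, Tb = antitriplet(bc), antitriplet(bcbar)
    term1 = e * H[i0] * sum(T[j, k] * Tb[j, k]
                            for j in range(3) for k in range(3))
    term2 = cp * sum(H[j] * T[j, k] * Tb[i0, k]
                     for j in range(3) for k in range(3))
    return sp.simplify(term1 + term2)

# initial states: 0 = B-, 1 = B0bar, 2 = Bsbar, per Eq. (4)
print(amplitude(1, "Lc+", "Lc+"))   # lambda_cd*(2*e + c'): W_ex + W_em
print(amplitude(0, "Xc0", "Lc+"))   # -c'*lambda_cs       : pure W_em
print(amplitude(2, "Lc+", "Lc+"))   # 2*e*lambda_cs       : pure W_ex
```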
## III Numerical results
In the numerical analysis, the CKM matrix elements are adopted from PDG [14]:
\[(V_{cb},V_{cs},V_{ud},V_{us},V_{cd})=(A\lambda^{2},1-\lambda^{2}/2,1-\lambda^ {2}/2,\lambda,-\lambda)\,, \tag{7}\]
where \(A=0.826\) and \(\lambda=0.225\) in the Wolfenstein parameterization. In Eq. (5), the parameters \(e\) and \(c^{\prime}\) are complex numbers, which we present as
\[(c^{\prime},e)=(|c^{\prime}|,|e|e^{i\delta_{e}})\,, \tag{8}\]
with \(\delta_{e}\) a relative phase. By using the experimental data in Table 2, we solve the parameters as
\[|c^{\prime}|=(1.29\pm 0.18)~{}{\rm GeV}^{3}\,,~{}|e|=(0.19\pm 0.03)~{}{\rm GeV }^{3}\,,~{}\delta_{e}=180^{\circ}\,. \tag{9}\]
Moreover, we assume \(\bar{c}^{\prime}=c^{\prime}\) due to the similarity of the Feynman diagrams in Figs. 1b and 1c. Subsequently, we calculate the branching fractions as provided in Table 2 using the determination in Eq. (9).
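As a transparency check, the central values in Table 2 can be reproduced from Eqs. (6), (7) and (9) with a few lines of code. The sketch below assumes PDG masses and lifetimes (\(m_{B^{-}}=5.279\), \(m_{\Xi_{c}^{0}}=2.470\), \(m_{\Lambda_{c}^{+}}=2.286\) GeV; \(\tau_{B^{-}}=1.638\) ps, \(\tau_{\bar{B}^{0}}=1.519\) ps):

```python
import math

GF   = 1.1664e-5          # Fermi constant, GeV^-2
hbar = 6.582e-25          # GeV*s
A, lam = 0.826, 0.225     # Wolfenstein parameters, Eq. (7)
lcs = A * lam**2 * (1 - lam**2 / 2)   # lambda_cs = Vcb*Vcs
lcd = -A * lam**3                     # lambda_cd = Vcb*Vcd

def branching(mB, m1, m2, tau_ps, amp):
    """Eq. (6); masses in GeV, lifetime in ps, |amp| in GeV^3."""
    tau = tau_ps * 1e-12 / hbar       # lifetime converted to GeV^-1
    p = math.sqrt((mB**2 - (m1 + m2)**2) * (mB**2 - (m1 - m2)**2)) / (2 * mB)
    return GF**2 * p * tau / (16 * math.pi * mB**2) * abs(amp)**2

cp_mag, e_mag = 1.29, 0.19            # fitted values of Eq. (9), delta_e = 180 deg

# pure W_em: B- -> Xi_c^0 Lambdabar_c^-, amplitude lambda_cs * c'
print(branching(5.279, 2.470, 2.286, 1.638, lcs * cp_mag))
# ~7.6e-4, cf. (7.8 +2.3 -2.0)e-4 in Table 2 (small input-value differences)

# interfering: B0bar -> Lambda_c^+ Lambdabar_c^-, amplitude lambda_cd*(2e + c')
print(branching(5.280, 2.286, 2.286, 1.519, lcd * (cp_mag - 2 * e_mag)))
# ~2.2e-5, cf. 2.1e-5 in Table 2
```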
## IV Discussion and Conclusions
The \(SU(3)_{f}\) approach enables us to explore all possible \(B\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) decays, as summarized in Table 1. Furthermore, it helps in deriving \(SU(3)_{f}\) relations, facilitating the decomposition of amplitudes into \(e\) and \(c^{\prime}\) terms. These terms parameterize the \(W_{\rm ex}\) and \(W_{\rm em}\) topologies depicted in Fig. 1a and Fig. 1b(c), respectively.
In \(b\to c\bar{c}s\) induced decays, the \(SU(3)_{f}\) symmetry unequivocally establishes that both \(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Lambda}_{c}^{-}\) and \(B^{-}\to\Xi_{c}^{0}\bar{\Lambda}_{c}^{-}\) solely proceed through the \(W_{\rm em}\) topology, supporting earlier considerations [16; 17]. The nearly identical branching fractions, as shown in Eq. (1), also provide consistent evidence. In our latest findings, we unveil additional insights. For
\(\bar{B}^{0}_{s}\to\Xi^{0(+)}_{c}\bar{\Xi}^{0(-)}_{c}\), the interference of the \(W_{\rm ex}\) amplitude with the \(W_{\rm em}\) amplitude adds a new contribution to the decay process. Furthermore, \(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c}\) represents a pure \(W_{\rm ex}\) decay, offering a clear and distinct case for experiments to clarify if the \(W_{\rm ex}\) contribution can be neglected.
The \(b\to c\bar{c}d\) induced decays with \(|V_{cd}/V_{cs}|\simeq 0.05\) are more suppressed. Unlike \({\cal M}(\bar{B}^{0}_{s}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})={\cal M}(\bar{B}^{0}_ {s}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c})\), \(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c}\) as a pure \(W_{\rm ex}\) decay does not share an isospin relation with \(\bar{B}^{0}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c}\), which is against our naive expectation. Instead, the equality relation arises from \({\cal M}(\bar{B}^{0}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})={\cal M}(\bar{B} ^{0}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})\). Moreover, an interesting triangle relation exists:
\[{\cal M}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c})+{\cal M}(B^{-}\to\Xi^{0} _{c}\bar{\Xi}^{-}_{c})={\cal M}(\bar{B}^{0}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})\,. \tag{10}\]
If the \(W_{\rm ex}\) contribution is negligible, the relation is simplified to \({\cal M}(B^{-}\to\Xi^{0}_{c}\bar{\Xi}^{-}_{c})\simeq{\cal M}(\bar{B}^{0}\to \Xi^{0}_{c}\bar{\Xi}^{0}_{c})\), resulting in nearly equal branching fractions.
It can be challenging to compute the \(W_{\rm ex}\) and \(W_{\rm em}\) amplitudes. For example, the factorization approach derives the \(W_{\rm ex}\) amplitude as \({\cal M}_{\rm wex}\propto f_{B}q^{\mu}\langle{\bf B}_{c}\bar{\bf B}^{\prime}_{c} |\bar{c}\gamma_{\mu}(1-\gamma_{5})c|0\rangle\)1, where
Footnote 1: Please consult the similar derivation for \({\cal M}(B\to{\bf B}_{c}\bar{\bf B})\) in Ref. [13].
\(f_{B}\) is the \(B\) meson decay constant, \(q^{\mu}\) the momentum transfer, and the matrix element represents the vacuum (\(0\)) to \({\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) production. As information on the \(0\to{\bf B}_{c}\bar{\bf B}^{\prime}_{c}\) production is lacking, a model calculation is currently unavailable. For a calculation of \({\cal M}_{\rm wem}\), one proposes a meson propagator to provide an additional quark pair, resulting in branching fractions of a few times \(10^{-3}\) for \(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Lambda}_{c}^{-}\) and \(B^{-}\to\Xi_{c}^{0}\bar{\Lambda}_{c}^{-}\)[17]. Additionally, a theoretical attempt incorporating final state interactions yields \({\cal B}\simeq{\cal O}(10^{-3})\)[51]. It appears that the above approaches may overestimate the \(W_{\rm em}\) contribution.

Table 2: Branching fractions calculated in this work, compared with experimental data.

\begin{tabular}{l c c} \hline decay channel & this work & experimental data \\ \hline \hline \(10^{4}{\cal B}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Lambda}^{-}_{c})\) & \(7.2^{+2.1}_{-1.9}\) & \(12\pm 8\)[14] \\ \(10^{4}{\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Lambda}^{-}_{c})\) & \(7.8^{+2.3}_{-2.0}\) & \(9.5\pm 2.3\)[14] \\ \(10^{4}{\cal B}(\bar{B}^{0}_{s}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})\) & \(3.0^{+1.4}_{-1.1}\) & \\ \(10^{4}{\cal B}(\bar{B}^{0}_{s}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c})\) & \(3.0^{+1.4}_{-1.1}\) & \\ \(10^{5}{\cal B}(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})\) & \(8.1^{+1.7}_{-1.5}\) & \(<9.9\)[14, 15] \\ \(10^{4}{\cal B}(B^{+}_{c}\to\Xi^{+}_{c}\bar{\Xi}^{0}_{c})\) & \(2.8^{+0.9}_{-0.7}\) & \\ \hline \(10^{5}{\cal B}(\bar{B}^{0}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})\) & \(1.5^{+0.7}_{-0.6}\) & \\ \(10^{6}{\cal B}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c})\) & \(3.0\pm 0.6\) & \\ \(10^{5}{\cal B}(\bar{B}^{0}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})\) & \(2.1^{+1.0}_{-0.8}\) & \(<1.6\)[14, 15] \((2.2^{+2.6}_{-2.1}\)[16]) \\ \(10^{5}{\cal B}(B^{-}\to\Xi^{0}_{c}\bar{\Xi}^{-}_{c})\) & \(3.4^{+1.0}_{-0.9}\) & \\ \(10^{5}{\cal B}(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{\Xi}^{-}_{c})\) & \(3.9^{+1.2}_{-1.0}\) & \\ \(10^{5}{\cal B}(B^{+}_{c}\to\Lambda^{+}_{c}\bar{\Xi}^{0}_{c})\) & \(1.6^{+0.5}_{-0.4}\) & \\ \hline \end{tabular}
Without invoking the model calculations, we determine \(e\) and \(c^{\prime}\) from the experimental data based on the \(SU(3)_{f}\) symmetry. Explicitly, we use the experimental results for \({\cal B}(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Lambda}_{c}^{-})\) and \({\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Lambda}_{c}^{-})\) to fit \(|c^{\prime}|\). On the other hand, the experimental data are not yet sufficient and accurate enough to simultaneously determine \(|e|\) and the relative phase \(\delta_{e}\), as indicated in Table 2. For a practical determination, we fix \(\delta_{e}=180^{\circ}\) to cause a maximum destructive interference, where the fitted \(|c^{\prime}|\) value has been used. As a consequence, the experimental upper bounds of \({\cal B}(\bar{B}^{0}_{s}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})\) and \({\cal B}(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})\) bracket an allowed range for \(|e|\), as given in Eq. (9).
With \(|c^{\prime}|=(1.29\pm 0.18)\) GeV\({}^{3}\), we obtain \({\cal B}(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Lambda}_{c}^{-})=(7.2^{+2.1}_{-1.9})\times 10^{-4}\) and \({\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Lambda}_{c}^{-})=(7.8^{+2.3}_{-2.0})\times 10^{-4}\), in agreement with the experimental inputs. This demonstrates that \(c^{\prime}\) can estimate the \(W_{\rm em}\) contribution. Thus, we predict
\[{\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{-}) = (3.4^{+1.0}_{-0.9})\times 10^{-5}\,,\] \[{\cal B}(\bar{B}^{0}_{s}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{-}) = (3.9^{+1.2}_{-1.0})\times 10^{-5}\,, \tag{11}\]
which are also contributed by the single \(W_{\rm em}\) amplitude, promising to be measured by experimental facilities such as LHCb.
Previous studies have assumed that \(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}\) receives the single \(W_{\rm em}\) contribution [16; 17]. Consequently, the estimated branching fraction \({\cal B}(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})\simeq 5\times 10^{-5}\) mentioned in the introduction significantly exceeds the experimental upper bound. In Table 1, since \({\cal M}(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})=-\lambda_{cd}(2e+c^{\prime})\) is found to include the \(SU(3)_{f}\) parameter \(e\), it suggests a non-negligible \(W_{\rm ex}\) amplitude. By incorporating \(e\), a destructive interference with \(c^{\prime}\) can occur, effectively reducing the branching fraction. In fact, we estimate \(|e|\simeq 0.2\) GeV\({}^{3}\), and obtain \({\cal B}(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})=(2.1^{+1.0}_{-0.8})\times 10^{-5}\), thus alleviating the discrepancy.
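The size of this interference is transparent from the central values in Eq. (9) (an illustrative arithmetic, taking \(\delta_{e}=180^{\circ}\)):

\[|2e+c^{\prime}|=|c^{\prime}|-2|e|=1.29-0.38=0.91\ {\rm GeV}^{3}\,,\qquad\left(\frac{0.91}{1.29}\right)^{2}\approx 0.50\,,\]

so the \(W_{\rm ex}\) term roughly halves the rate that the \(W_{\rm em}\) amplitude alone would give, consistent with the reduction from the \(\simeq 5\times 10^{-5}\) estimate to \((2.1^{+1.0}_{-0.8})\times 10^{-5}\).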
To carefully test the \(W_{\rm ex}\) contribution, we predict the branching fractions of the other interfering decay channels:
\[{\cal B}(\bar{B}_{s}^{0}\to\Xi_{c}^{0(+)}\bar{\Xi}_{c}^{0(+)})=(3.0^{+1.4}_{-1.1})\times 10^{-4}\,,\] \[{\cal B}(\bar{B}^{0}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{0})=(1.5^{+0.7}_{-0.6})\times 10^{-5}\,. \tag{12}\]
When \(|e|=0\), \({\cal B}(\bar{B}_{s}^{0}\to\Xi_{c}^{0(+)}\bar{\Xi}_{c}^{0(+)})\) would be enhanced to \((6.3^{+1.9}_{-1.6})\times 10^{-4}\); moreover, \({\cal B}(\bar{B}^{0}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{0})\) would be enhanced to \((3.1^{+0.9}_{-0.8})\times 10^{-5}\), making it close to \({\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{-})\), in accordance with the description for the triangle relation in Eq. (10). We also anticipate non-zero branching fractions of the pure \(W_{\rm ex}\) decays, given by
\[{\cal B}(\bar{B}_{s}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-})= (8.1^{+1.7}_{-1.5})\times 10^{-5}\,,\] \[{\cal B}(\bar{B}^{0}\to\Xi_{c}^{+}\bar{\Xi}_{c}^{-})=(3.0\pm 0.6) \times 10^{-6}\,, \tag{13}\]
which serve to test the \(W\)-exchange mechanism in the \(B\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) decays.
To initiate a theoretical investigation of baryonic \(B_{c}^{+}\) decays, we derive the amplitudes for \(B_{c}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) using the \(SU(3)_{f}\) symmetry. This results in two possible decay channels: \(B_{c}^{+}\to\Xi_{c}^{+}\bar{\Xi}_{c}^{0}\) and \(B_{c}^{+}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{0}\), with \(\bar{c}^{\prime}\) representing the sole contribution from the \(W_{\rm em}\) term, as given in Table 1. Since both \(B_{c}\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) and \(B\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) involve \(c\bar{c}\) in the \({\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) formation, it is reasonable to assume that QCD effects cannot distinguish between the topology in Fig. 1b and the one in Fig. 1c in the hadronization process. Hence, we assume \(\bar{c}^{\prime}=c^{\prime}\), and predict the following branching fractions:
\[{\cal B}(B_{c}^{+}\to\Xi_{c}^{+}\bar{\Xi}_{c}^{0}) = (2.8^{+0.9}_{-0.7})\times 10^{-4}\,,\] \[{\cal B}(B_{c}^{+}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{0}) = (1.6^{+0.5}_{-0.4})\times 10^{-5}\,, \tag{14}\]
which can be probed by the LHCb experiment.
In summary, we have explored the two-body doubly charmful baryonic \(B\to{\bf B}_{c}\bar{\bf B}_{c}^{\prime}\) decays. Here, the \(W_{\rm ex}\) and \(W_{\rm em}\) amplitudes have been parametrized as \(e\) and \(c^{\prime}\), respectively, using the \(SU(3)_{f}\) approach. With the determination of the \(SU(3)_{f}\) parameters, we have calculated the branching fractions \({\cal B}(B^{-}\to\Xi_{c}^{0}\bar{\Xi}_{c}^{-})=(3.4^{+1.0}_{-0.9})\times 10^{-5}\) and \({\cal B}(\bar{B}_{s}^{0}\to\Lambda_{c}^{+}\bar{\Xi}_{c}^{-})=(3.9^{+1.2}_{-1.0} )\times 10^{-5}\). Considering that the single \(W_{\rm em}\) contribution to \(\bar{B}^{0}\to\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}\) has caused the branching fraction to significantly exceed the experimental upper bound, we have added the \(W_{\rm ex}\) amplitude (or the \(SU(3)_{f}\) parameter \(e\)), overlooked in previous studies, to account for a destructive interfering effect. Subsequently, we have
alleviated the discrepancy. To further test the interfering decay channels, we have predicted \(\mathcal{B}(\bar{B}^{0}_{s}\to\Xi^{0(+)}_{c}\bar{\Xi}^{0(+)}_{c})=(3.0^{+1.4}_{-1.1})\times 10^{-4}\) and \(\mathcal{B}(\bar{B}^{0}\to\Xi^{0}_{c}\bar{\Xi}^{0}_{c})=(1.5^{+0.7}_{-0.6})\times 10^{-5}\). For the pure \(W_{\rm ex}\) decay channels, we have expected non-zero branching fractions, such as \(\mathcal{B}(\bar{B}^{0}_{s}\to\Lambda^{+}_{c}\bar{\Lambda}^{-}_{c})=(8.1^{+1.7}_{-1.5})\times 10^{-5}\) and \(\mathcal{B}(\bar{B}^{0}\to\Xi^{+}_{c}\bar{\Xi}^{-}_{c})=(3.0\pm 0.6)\times 10^{-6}\), promising to be observed in near-future measurements. Additionally, we have predicted \(\mathcal{B}(B^{+}_{c}\to\Xi^{+}_{c}\bar{\Xi}^{0}_{c})=(2.8^{+0.9}_{-0.7})\times 10^{-4}\) and \(\mathcal{B}(B^{+}_{c}\to\Lambda^{+}_{c}\bar{\Xi}^{0}_{c})=(1.6^{+0.5}_{-0.4})\times 10^{-5}\), which are accessible to the LHCb experiment.
###### Acknowledgements.
This work was supported by NSFC (Grants No. 11675030 and No. 12175128).
---

# A correction function-based kernel-free boundary integral method for elliptic PDEs with implicitly defined interfaces

Han Zhou, Wenjun Ying | 2023-09-12 | http://arxiv.org/abs/2309.05965v1
###### Abstract
This work addresses a novel version of the kernel-free boundary integral (KFBI) method for solving elliptic PDEs with implicitly defined irregular boundaries and interfaces. We focus on boundary value problems and interface problems, which are reformulated into boundary integral equations and solved with the matrix-free GMRES method. In the KFBI method, evaluating boundary and volume integrals only requires solving equivalent but much simpler interface problems in a bounding box, for which fast solvers such as FFTs and geometric multigrid methods are applicable. For the simple interface problem, a correction function is introduced for both the evaluation of right-hand side correction terms and the interpolation of a non-smooth potential function. A mesh-free collocation method is proposed to compute the correction function near the interface. The new method avoids complicated derivation for derivative jumps of the solution and is easy to implement, especially for the fourth-order method in three space dimensions. Various numerical examples are presented, including challenging cases such as high-contrast coefficients, arbitrarily close interfaces and heterogeneous interface problems. The reported numerical results verify that the proposed method is both accurate and efficient.
keywords: Elliptic PDEs; Interface problems; Jump conditions; Cartesian grid-based method; Compact finite difference method
## 1 Introduction
Boundary value problems and interface problems of elliptic partial differential equations (PDEs) draw much attention due to their wide scientific and industrial applications, such as viscous incompressible flow [1; 2; 3; 4], heat transfer [5; 6], biomolecular electrostatics [7; 8], electromagnetics [9; 10], and many others. In practical situations, domain boundaries and material interfaces are complex and even move with time, making it challenging to design accurate and efficient numerical methods for these problems.
Body-fitted discretization approaches, such as finite element methods [11; 12; 13; 14], approximate the computational domain with an unstructured mesh, which conforms with
the geometry of boundaries and interfaces to achieve high-order accuracy. However, it is always difficult and time-consuming to generate high-quality body-fitted meshes for complex geometries, especially when the boundary or interface has a substantial motion with time evolution. In addition, the linear systems generated from the discretization of the PDE on body-fitted meshes are less structured than those from a Cartesian grid, and fast solvers such as FFTs and geometric multigrid methods cannot be applied.
Immersed methods have been prevalent in recent decades, in which the complex boundary or interface is immersed into a fixed grid. The pioneering work of immersed methods is the immersed boundary method (IBM) [15; 16; 17] that was initially proposed by C. S. Peskin for simulations of cardiac mechanics and blood flows. In IBM, Peskin uses Lagrangian marker points on the boundary and regularized Dirac delta functions to approximate the singular force and spread it into the Eulerian grid. The IBM is quite robust but is restricted to first-order accuracy due to the non-smoothness of the solution in the vicinity of the boundary. Motivated by IBM, a number of immersed-type approaches have also been developed to improve the performance of conventional IBM. Among them are the immersed interface method (IIM) [18; 19; 20; 21; 22; 23], the ghost-fluid method (GFM) [24; 25; 26; 27; 28], the matched interface and boundary (MIB) method [29; 30; 31; 32], the correction function method (CFM) [33; 34; 35], the Immersed Boundary Smooth Extension (IBSE) method [36; 37]. The methods mentioned above are mainly based on finite difference discretizations. Since the finite element method may provide more rigorous convergence analysis, similar ideas have also been used to develop finite element-based immersed methods, such as the extended finite element method (XFEM) [38] and the immersed finite element method (IFEM) [39; 40; 41; 42].
The kernel-free boundary integral (KFBI) method is a potential theory-based Cartesian grid method, which was initially proposed by W. Ying and C. S. Henriquez [43] as an extension of Mayo's method [44; 45; 46]. The KFBI method is also an immersed approach. Unlike traditional boundary integral methods (BIMs)/boundary element methods (BEMs) [47; 48; 49; 50; 51; 52; 53; 54; 55; 56], layer and volume potentials are computed by solving equivalent but much simpler interface problems on a Cartesian grid, and the linear system can be efficiently solved with FFTs or geometric multigrid methods. Therefore, the KFBI method has several attractive advantages: (a) no analytical expression of Green's function is needed for solving the boundary integral equation; (b) singular and nearly singular integrals are avoided; (c) it can be applied to variable coefficient problems. In the KFBI method, solving the constant coefficient interface problem is a fundamental building block. In previous works [57; 58; 59; 43], the simple interface problem is discretized with standard finite difference methods with a modified right-hand side. The correction terms for the right-hand side are linear combinations of derivative jumps \([u],[u_{x}],[u_{y}],[u_{z}],[u_{xx}],\cdots\), which are computed by repeatedly taking tangential derivatives of the jump values and applying the local coordinate transformation. The coordinate-transformation method for derivative jumps is accurate yet complicated when many derivative terms are needed, as for high-order schemes in three space dimensions, for example, in [57].
In this work, we present a novel KFBI method that is both simple and accurate for two and three space dimensional BVPs and interface problems. Motivated by the correction function method (CFM) [33; 34; 35], we introduce a correction function in the vicinity of the interface to derive correction terms of the right-hand side for the constant coefficient interface problem. In order to solve the local Cauchy problem for
the correction function, we propose a mesh-free collocation method based on an overlapping surface decomposition of the interface. Unlike the original CFM [33; 34; 35], no surface quadrature is required since the collocation method works with the strong form of the Cauchy problem. The overlapping surface decomposition representation of the interface also provides a good choice of collocation points such that the resulting collocation problem is accurate and stable. Another property of the collocation method is that the discrete system of the collocation problem is a square one and can be solved accurately, in contrast to the original CFM, where the linear system is overdetermined and must be solved in the least-squares sense. The new approach for the constant coefficient interface problem is built into the KFBI framework to accommodate elliptic BVPs and more general interface problems. The resulting method is named the correction function-based KFBI method.
The paper is organized as follows. The governing equations and their boundary integral equations are described in section 2 and 3. In section 4, the main idea of the KFBI method is described. The details of the numerical method for the constant coefficient interface problem are described in section 5. The algorithm is summarized in section 6. In section 7, numerical results demonstrating the method with examples are presented. Finally, we discuss the improvement and advantages of the proposed method in section 8.
## 2 Governing equations
### Boundary value problem
Let \(\Omega\subset\mathbb{R}^{d},d=2,3\) be a complex domain with smooth boundary \(\Gamma=\partial\Omega\), as illustrated in Figure 1(a). The BVP of an elliptic PDE is given by
\[\nabla\cdot(\sigma\nabla u)-\kappa u=f,\quad\text{in }\Omega, \tag{1}\]
subject to either the Dirichlet boundary condition or the Neumann boundary condition
\[u=g_{D},\quad\text{or}\quad\sigma\partial_{\mathbf{n}}u=g_{N},\quad\text{on }\Gamma, \tag{2}\]
where \(\sigma>0\) is the diffusivity and \(\kappa\geq 0\) is the reaction coefficient. In this paper, we assume that \(\sigma\) and \(\kappa\) are constants.
### Interface problem
Let \(\Gamma\subset\mathbb{R}^{d},d=2,3\) be a sharp interface that separates a larger domain \(\mathcal{B}\subset\mathbb{R}^{d}\) into two subdomains \(\Omega_{1}\) and \(\Omega_{2}\), as illustrated in Figure 1(b). The interface problem of an elliptic PDE is given by
\[\nabla\cdot(\sigma_{i}\nabla u)-\kappa_{i}u=f_{i},\quad\text{in }\Omega_{i}, \quad i=1,2, \tag{3}\]
subject to two interface jump conditions
\[[u]=g_{1},\quad[\sigma\partial_{\mathbf{n}}u]=g_{2},\quad\text{on }\Gamma, \tag{4}\]
and a homogeneous Dirichlet boundary condition on the outer boundary
\[u=0,\quad\text{on }\partial\mathcal{B}, \tag{5}\]
where \(\sigma_{i}>0,i=1,2\) are diffusivities and \(\kappa_{i}\geq 0,i=1,2\) are reaction coefficients. Similarly, we only consider the case that \(\sigma_{i}\) and \(\kappa_{i}\) are constants. Note that the Dirichlet boundary condition (5) is chosen only for simplicity, since the treatment for boundary conditions on \(\partial\mathcal{B}\) only depends on the finite difference scheme and is much simpler. Different boundary conditions, such as Neumann and periodic ones, can also be used.
Here, the boundary/interface \(\Gamma\) is assumed to be implicitly defined as the zero level set of a function. In the case that \(\Gamma\) is defined by a parametric surface or spline, it can also be transformed into an implicit form.
## 3 Boundary integral equations
Both the boundary value problem (1) and (2) and the interface problem (3) and (4) are solved by reformulating them as boundary integral equations.
### Boundary value problem
Let \(G(\mathbf{q},\mathbf{p})\) be Green's function such that for each fixed \(\mathbf{p}\in\mathcal{B}\),
\[\begin{split}\nabla_{\mathbf{q}}\cdot(\sigma(\mathbf{q})\nabla_{\mathbf{q}}G (\mathbf{q},\mathbf{p}))-\kappa(\mathbf{q})G(\mathbf{q};\mathbf{p})&=\delta(\mathbf{q}- \mathbf{p}),\qquad\text{ in }\mathcal{B},\\ G(\mathbf{q},\mathbf{p})&=0,\qquad\qquad\text{ on }\partial \mathcal{B}.\end{split} \tag{6}\]
Let \(\varphi,\psi\) be two density functions. Define the single layer, double layer, adjoint double layer and hyper-singular integrals, respectively, by
\[\mathcal{S}\psi(\mathbf{p}) =\int_{\Gamma}G(\mathbf{q};\mathbf{p})\psi(\mathbf{q})\,ds_{\mathbf{q}}, \mathbf{p}\in\Gamma, \tag{7}\] \[\mathcal{K}\varphi(\mathbf{p}) =\int_{\Gamma}\sigma(\mathbf{q})\frac{\partial G(\mathbf{q};\mathbf{p})}{ \partial\mathbf{n_{q}}}\varphi(\mathbf{q})\,ds_{\mathbf{q}}, \mathbf{p}\in\Gamma,\] (8) \[\mathcal{K}^{\prime}\psi(\mathbf{p}) =\int_{\Gamma}\sigma(\mathbf{p})\frac{\partial G(\mathbf{q};\mathbf{p})}{ \partial\mathbf{n_{p}}}\psi(\mathbf{q})\,ds_{\mathbf{q}}, \mathbf{p}\in\Gamma,\] (9) \[\mathcal{D}\varphi(\mathbf{p}) =\int_{\Gamma}\sigma(\mathbf{p})\sigma(\mathbf{q})\frac{\partial^{2}G( \mathbf{q};\mathbf{p})}{\partial\mathbf{n_{q}}\partial\mathbf{n_{p}}}\varphi(\mathbf{q})\,ds_{\bm {q}}, \mathbf{p}\in\Gamma. \tag{10}\]
Figure 1: A schematic of the (a) boundary value problem and (b) interface problem. Irregular domains and interfaces are embedded into a larger bounding box, in which a uniform Cartesian grid is used for computation.
Define the volume integrals by
\[\mathcal{G}f(\mathbf{p}) =\int_{\Omega}G(\mathbf{q};\mathbf{p})f(\mathbf{q})\,d\mathbf{q},\quad\mathbf{p}\in\Gamma, \tag{11}\] \[\partial_{\mathbf{n}}\mathcal{G}f(\mathbf{p}) =\frac{\partial}{\partial\mathbf{n_{p}}}\int_{\Omega}G(\mathbf{q};\bm {p})f(\mathbf{q})\,d\mathbf{q},\quad\mathbf{p}\in\Gamma. \tag{12}\]
The Dirichlet and Neumann BVP (1) and (2) can be, respectively, reformulated as boundary integral equations
\[(\frac{1}{2}+\mathcal{K})\varphi =g_{D}-\mathcal{G}f, \tag{13}\] \[(\frac{1}{2}-\mathcal{K}^{\prime})\psi =g_{N}-\sigma\partial_{\mathbf{n}}\mathcal{G}f, \tag{14}\]
which are both Fredholm integral equations of the second kind and well-conditioned.
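For intuition about this second-kind structure, consider the classical Nystrom discretization of \((\frac{1}{2}+\mathcal{K})\varphi=g_{D}\) for the 2D Laplace equation on a circle of radius \(R\), where the free-space double-layer kernel reduces to the constant \(1/(4\pi R)\). The sketch below (assuming a recent SciPy) solves it with matrix-free GMRES; note that it relies on the explicit kernel, which is precisely what the KFBI method avoids:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Nystrom discretization of (1/2 + K) phi = g_D on a circle of radius R,
# where the free-space 2D Laplace double-layer kernel is the constant
# dG/dn_q = 1/(4*pi*R).  Shown only to illustrate the well-conditioned
# second-kind structure and a matrix-free GMRES solve.
n, R = 256, 1.0
theta = 2 * np.pi * np.arange(n) / n
w = 2 * np.pi * R / n                        # trapezoidal quadrature weights

def matvec(phi):                             # apply (1/2 + K), matrix-free
    return 0.5 * phi + (w / (4 * np.pi * R)) * phi.sum()

g = R * np.cos(theta)                        # boundary data of u(x, y) = x
phi, info = gmres(LinearOperator((n, n), matvec=matvec), g, rtol=1e-12)
print(info, np.max(np.abs(phi - 2 * g)))     # converges in 2 steps; phi = 2*g
```

The operator has only two distinct eigenvalues (1/2 and 1) in this toy case, so GMRES converges immediately; more generally, second-kind equations keep the spectrum bounded away from zero, which is why the reformulations (13)-(16) are well-conditioned.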
### Interface problem
Let \(G_{i}(\mathbf{q},\mathbf{p}),i=1,2\) be Green's functions such that for each fixed \(\mathbf{p}\in\mathcal{B}\),
\[\begin{split}\nabla_{\mathbf{q}}\cdot(\sigma_{i}(\mathbf{q})\nabla_{\bm {q}}G_{i}(\mathbf{q},\mathbf{p}))-\kappa_{i}(\mathbf{q})G_{i}(\mathbf{q};\mathbf{p})& =\delta(\mathbf{q}-\mathbf{p}),\qquad\text{ in }\mathcal{B},\\ G_{i}(\mathbf{q},\mathbf{p})&=0,\qquad\qquad\text{ on }\partial\mathcal{B}.\end{split} \tag{15}\]
Similarly, we can define the single layer, double layer, adjoint double layer, hyper-singular and volume integral operators \(\mathcal{S}_{i},\mathcal{K}_{i},\mathcal{K}_{i}^{\prime},\mathcal{D}_{i}, \mathcal{G}_{i},\partial_{\mathbf{n}}\mathcal{G}_{i},i=1,2\). By introducing two unknown densities functions \(\varphi=u_{1}\) and \(\psi=\sigma_{2}\partial_{\mathbf{n}}u_{2}\), the interface problem (3) can be reformulated as a system of boundary integral equations
\[\begin{split}\varphi-(\mathcal{K}_{1}-\mathcal{K}_{2})\varphi+( \mathcal{S}_{1}-\mathcal{S}_{2})\psi&=\frac{1}{2}g_{1}+ \mathcal{G}_{1}f_{1}+\mathcal{G}_{2}f_{2}+\mathcal{K}_{2}g_{1}-\mathcal{S}_{1 }g_{2},\\ \psi-(\mathcal{D}_{1}-\mathcal{D}_{2})\varphi+(\mathcal{K}_{1}^{ \prime}-\mathcal{K}_{2}^{\prime})\psi&=-\frac{1}{2}g_{2}+\sigma _{1}\partial_{\mathbf{n}}\mathcal{G}_{1}f_{1}+\sigma_{2}\partial_{\mathbf{n}} \mathcal{G}_{2}f_{2}+\mathcal{D}_{2}g_{1}-\mathcal{K}_{1}^{\prime}g_{2}.\end{split} \tag{16}\]
In the case of \(\kappa_{i}=0,i=1,2\) or \(\sigma_{1}/\sigma_{2}=\kappa_{1}/\kappa_{2}\), dividing the two equations in (3) by \(\sigma_{i}\), respectively, yields
\[\Delta u-\tilde{\kappa}u=\begin{cases}f_{1}/\sigma_{1},&\text{ in }\Omega_{1},\\ f_{2}/\sigma_{2},&\text{ in }\Omega_{2},\end{cases} \tag{17}\]
where \(\tilde{\kappa}=0\) or \(\tilde{\kappa}=\kappa_{1}/\sigma_{1}=\kappa_{2}/\sigma_{2}\). Letting \(\psi=[\partial_{\mathbf{n}}u]\), we may also obtain a simpler boundary integral equation
\[\frac{1}{2}\psi+\mu\mathcal{K}^{\prime}\psi=\frac{g_{2}}{\sigma_{1}+\sigma_{2} }+\mu(\mathcal{D}g_{1}+\partial_{\mathbf{n}}\mathcal{G}f), \tag{18}\]
where \(\mu=(\sigma_{2}-\sigma_{1})/(\sigma_{2}+\sigma_{1})\in(-1,1)\) and \(\mathcal{K}^{\prime},\mathcal{D}\) and \(\mathcal{G}\) are the integral operators associated with Green's function of the operator \(\Delta-\tilde{\kappa}\). One may refer to the reference [58] for detailed derivations of the boundary integral equations.
## 4 Kernel-free boundary integral method
In the kernel-free boundary integral method, values of the boundary integrals and volume integral at boundary or interface \(\Gamma\) are not evaluated with quadrature methods. Instead, they are evaluated by solving equivalent but much simpler interface problems for boundary and volume potentials.
For Green's function \(G(\mathbf{q},\mathbf{p})\), which is associated with the elliptic operator \(\sigma\Delta-\kappa\), define the single layer potential \(-S\psi\), the double layer potential \(D\varphi\) and the Newtonian potential \(Nf\) by
\[\begin{split}-S\psi(\mathbf{p})&=-\int_{\Gamma}G(\mathbf{q},\mathbf{p})\psi(\mathbf{q})\,d\mathbf{s}_{\mathbf{q}},\hskip 42.679134pt\mathbf{p}\in\mathcal{B}, \\ D\varphi(\mathbf{p})&=\int_{\Gamma}\sigma(\mathbf{q})\frac{ \partial G(\mathbf{q},\mathbf{p})}{\partial\mathbf{n}_{\mathbf{q}}}\varphi(\mathbf{q})\,d\mathbf{s}_{ \mathbf{q}},\hskip 14.226378pt\mathbf{p}\in\mathcal{B},\\ Nf(\mathbf{p})&=\int_{\Omega}G(\mathbf{q};\mathbf{p})f(\mathbf{q}) \,d\mathbf{q},\hskip 56.905512pt\mathbf{p}\in\mathcal{B}.\end{split} \tag{19}\]
Then the boundary integrals \(\mathcal{S}\psi\), \(\mathcal{K}\varphi\), \(\mathcal{K}^{\prime}\psi\), \(\mathcal{D}\varphi\) and the volume integrals \(\mathcal{G}f,\partial_{\mathbf{n}}\mathcal{G}f\) coincide with boundary values or normal derivatives of the potentials \(S\psi\), \(D\varphi\) and \(Nf\). The above three potential functions are not smooth at \(\Gamma\) and they satisfy equivalent interface problems, by classical potential theory (see [43; 57]). The equivalent interface problems for the single layer potential \(-S\psi\), the double layer potential \(D\varphi\) and the Newton potential \(Nf\) can be unified as
\[\begin{cases}\nabla\cdot(\sigma\nabla u)-\kappa u=F,&\text{in }\mathcal{B} \setminus\Gamma,\\ [u]=\Phi,&\text{on }\Gamma,\\ [\partial_{\mathbf{n}}u]=\Psi,&\text{on }\Gamma,\\ u=0,&\text{on }\partial\mathcal{B}.\end{cases} \tag{20}\]
The functions \(\Phi\), \(\Psi\) and \(F\) are specified for each potential by
* \(-S\psi\): \(\Phi=F=0\), \(\Psi=\psi\).
* \(D\varphi\): \(\Phi=\varphi\), \(\Psi=F=0\).
* \(Nf\): \(\Phi=\Psi=0\). \(F\) is an arbitrary extension of \(f\) to the whole box \(\mathcal{B}\). For simplicity, we set the extended value as zero.
Once the interface problem (20) is solved for the potentials, the boundary integrals \(\mathcal{S}\psi\), \(\mathcal{K}\varphi\), \(\mathcal{K}^{\prime}\psi\), \(\mathcal{D}\varphi\) and the volume integrals \(\mathcal{G}f,\partial_{\mathbf{n}}\mathcal{G}f\) can be obtained from the grid data of these potentials with an interpolation method.
## 5 Equivalent simple interface problem
Solving the constant coefficient interface problem (20) is an essential part of the KFBI method. For simplicity, we drop the constant \(\sigma\) and proceed with the following problem
\[\begin{cases}\Delta u-\kappa u=f,&\text{in }\mathcal{B}\setminus\Gamma,\\ [u]=a,&\text{on }\Gamma,\\ [\partial_{\mathbf{n}}u]=b,&\text{on }\Gamma,\\ u=0,&\text{on }\partial\mathcal{B}.\end{cases} \tag{21}\]
where \(a\), \(b\) and \(f\) are given data and \(\kappa\) is a constant. The right-hand side \(f\) is possibly discontinuous across the interface \(\Gamma\). The constant coefficient interface problem is a much simpler case of the more general interface problem (3).
### Interface representation
In this work, the interface \(\Gamma\) is implicitly defined by the level set function \(H\) in the following way
\[\Gamma=\{\mathbf{x}\in\mathbb{R}^{d}|H(\mathbf{x})=0\},\quad\text{for }d=2,3. \tag{22}\]
We assume the level set function \(H\) is at least \(C^{4}\) (for the fourth-order method) and \(|\nabla H|>c_{0}\) for some \(c_{0}>0\) near the interface \(\Gamma\). The level set function allows us to easily determine the intersection points of the surface with grid lines. For instance, if we have an intersection point on the line segment between two grid nodes \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), we expect the values \(H(\mathbf{x}_{1})\) and \(H(\mathbf{x}_{2})\) to have opposite signs. By solving the scalar algebraic equation for \(t\) as follows:
\[H(t\mathbf{x}_{1}+(1-t)\mathbf{x}_{2})=0,\quad t\in[0,1], \tag{23}\]
using methods such as the Newton method or the bisection method, one can obtain the coordinates of the intersection point. To compute the unit outward normal at a surface point, we utilize the gradient of the level set function. The unit outward normal vector \(\mathbf{n}(\mathbf{x})\) at a point \(\mathbf{x}\) on the surface is given by:
\[\mathbf{n}(\mathbf{x})=\frac{\nabla H(\mathbf{x})}{|\nabla H(\mathbf{x})|}. \tag{24}\]
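A minimal sketch of Eqs. (23) and (24), assuming a circular level set and a bisection root finder (the actual implementation may use Newton's method instead, and computes \(\nabla H\) analytically when available):

```python
import numpy as np

# Illustrative level set of a circle of radius 0.5 centered at the origin.
H = lambda x: x[0]**2 + x[1]**2 - 0.5**2

def intersection(x1, x2, tol=1e-14):
    """Solve H(t*x1 + (1-t)*x2) = 0 on [0, 1] by bisection (Eq. (23))."""
    f = lambda t: H(t * x1 + (1 - t) * x2)
    a, b = 0.0, 1.0
    assert f(a) * f(b) < 0, "endpoints must straddle the interface"
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
    t = 0.5 * (a + b)
    return t * x1 + (1 - t) * x2

def normal(x, h=1e-6):
    """Unit outward normal n = grad(H)/|grad(H)| (Eq. (24)),
    with a central-difference gradient as a stand-in."""
    g = np.array([(H(x + h * e) - H(x - h * e)) / (2 * h)
                  for e in np.eye(len(x))])
    return g / np.linalg.norm(g)

p = intersection(np.array([0.3, 0.3]), np.array([0.6, 0.3]))
print(p, normal(p))   # point (0.4, 0.3) on the circle; radial normal (0.8, 0.6)
```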
The method described in [59] is employed in this work for the representation of the interface using only a subset of intersection points. For each \(r=1,\ldots,d\), \(\mathbf{e}_{r}\) represents the \(r\)-th Cartesian basis vector in \(\mathbb{R}^{d}\), and \(\alpha\in(\cos^{-1}(1/\sqrt{d}),\pi/2)\) is a fixed angle. We define the subset:
\[\Gamma_{r}=\{\mathbf{x}\in\Gamma:|\mathbf{n}(\mathbf{x})\cdot\mathbf{e}_{r}|> \cos\alpha\}, \tag{25}\]
which forms an overlapping surface decomposition of \(\Gamma\). The discrete representation of the interface \(\Gamma_{r}\) only considers the intersection points between \(\Gamma_{r}\) and the grid lines aligned with the \(\mathbf{e}_{r}\) direction (refer to Figure 2). We denote the set of these intersection points as \(\Gamma_{r}^{h}\). The union of all sets \(\Gamma_{r}^{h}\) for \(r=1,\ldots,d\) is denoted as \(\Gamma^{h}\), which represents the discrete set of points used to approximate \(\Gamma\) and allocate surface degrees of freedom. For more detailed information on the surface discretization algorithm, please refer to [59].
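The following sketch builds the point families \(\Gamma_{r}^{h}\) of Eq. (25) for a circle, where the on-line intersections are available analytically; the grid size and \(\alpha\) are example choices:

```python
import numpy as np

# Build Gamma_r^h of Eq. (25) for a circle of radius 0.5: intersections of
# Gamma with grid lines in direction e_r are kept only where |n . e_r| >
# cos(alpha).  Here d = 2, so alpha must lie in (acos(1/sqrt(2)), pi/2).
N = 32
alpha = np.arccos(1 / np.sqrt(2)) + 0.2
xs = np.linspace(-1, 1, N + 1)

def gamma_r(r):
    pts = []
    for c in xs:                              # grid lines along e_r
        if abs(c) < 0.5:                      # line actually cuts the circle
            s = np.sqrt(0.25 - c**2)          # analytic on-line intersection
            for v in (s, -s):
                p = np.array([v, c]) if r == 0 else np.array([c, v])
                n = p / np.linalg.norm(p)     # normal of the circle at p
                if abs(n[r]) > np.cos(alpha): # angle criterion of Eq. (25)
                    pts.append(p)
    return np.array(pts)

G0, G1 = gamma_r(0), gamma_r(1)
print(len(G0), len(G1))   # two overlapping families; together they cover Gamma
```

Because \(\alpha>\cos^{-1}(1/\sqrt{d})\) and \(\max_{r}|\mathbf{n}\cdot\mathbf{e}_{r}|\geq 1/\sqrt{d}\), every surface point is admitted by at least one family, which is what makes the decomposition overlapping and complete.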
With the help of the overlapping surface decomposition-based discretization, the interface \(\Gamma\) can be locally parameterized by a reference coordinate plane. Candidate reference planes are
\[\Pi_{i}:\{(x_{1},x_{2},\cdots,x_{d})\in\mathbb{R}^{d}|x_{i}=0\},\quad i=1, \cdots,d, \tag{26}\]
for \(d=2\text{ or }3\). If, at a point \(\boldsymbol{x}\in\Gamma\), the \(i\)-th component of the local normal \(\boldsymbol{n}(\boldsymbol{x})\) has the largest absolute value, then we choose \(\Pi_{i}\) as the reference plane of \(\Gamma\) near \(\boldsymbol{x}\). In such a way, the interpolation stencils on \(\Gamma\) can be easily found with the help of the Cartesian grid on the reference plane. Numerical integration and interpolation on \(\Gamma\) can be done in a similar way as on a planar domain as well. We remark that, in principle, the Cartesian grid used for the representation of \(\Gamma\) is not necessarily chosen as the same one for solving PDEs. In this work, we use the same Cartesian grid only for simplicity.
### Corrected finite difference scheme
For simplicity, the bounding box is assumed to be a unit cube, \(i.e.\)\(\mathcal{B}=(0,1)^{3}\). Given a positive integer \(N\), the domain \(\mathcal{B}\) is uniformly partitioned into a Cartesian grid with mesh parameter \(h=1/N\). Let \(P_{i,j,k}\) denote the grid node \((x_{i},y_{j},z_{k}),i,j,k=0,1,\cdots,\), where \(x_{i}=ih\), \(y_{j}=jh\) and \(z_{k}=kh\) are node coordinates. For an irregular domain \(\Omega\subset\mathcal{B}\), the interior and exterior grid nodes are defined as \(\Omega_{h}\) and \(\Omega_{h}^{C}\), respectively,
\[\Omega_{h} =\{P_{i,j,k}|(x_{i},y_{j},z_{k})\in\Omega,\quad i,j,k=1,\cdots,N-1\}, \tag{27}\] \[\Omega_{h}^{C} =\{P_{i,j,k}|(x_{i},y_{j},z_{k})\in\mathcal{B}\setminus\Omega,\quad i,j,k=1,\cdots,N-1\}. \tag{28}\]
In the absence of interfaces, it is known that the following two compact finite difference schemes (29) and (30) are fourth-order accurate for 2D and 3D cases, respectively.
\[-(\frac{10}{3h^{2}}+\frac{2}{3}\kappa)u_{i,j}+(\frac{2}{3h^{2}}- \frac{1}{12}\kappa)\sum_{|r|+|s|=1}u_{i+r,j+s}+\frac{1}{6h^{2}}\sum_{|r|+|s|=2 }u_{i+r,j+s}=f_{i,j}+\frac{h^{2}}{12}\Delta f_{i,j}. \tag{29}\] \[-(\frac{25}{6h^{2}}+\frac{1}{2}\kappa)u_{i,j,k}+(\frac{5}{12h^{2} }-\frac{1}{12}\kappa)\sum_{|r|+|s|+|t|=1}u_{i+r,j+s,k+t}+\frac{1}{8h^{2}}\sum_ {|r|+|s|+|t|=2}u_{i+r,j+s,k+t}\] \[+\frac{1}{48h^{2}}\sum_{|r|+|s|+|t|=3}u_{i+r,j+s,k+t}=f_{i,j,k}+ \frac{h^{2}}{12}\Delta f_{i,j,k}. \tag{30}\]
The two schemes are adopted to derive the corresponding corrected finite difference schemes for the interface problem (21). We write the finite difference schemes as a general form
\[\sum_{P_{i+r,j+s,k+t}\in\mathcal{S}_{i,j,k}}c_{r,s,t}u_{i+r,j+s,k+t}=F_{i,j,k}, \tag{31}\]
Figure 2: Illustrations of surface points: (a) points in \(\Gamma_{1}^{h}\) (in 2D); (b) points in \(\Gamma_{2}^{h}\) (in 2D); (c) points in \(\Gamma_{1}^{h}\) (in 3D).
where \(c_{r,s,t}\) is the coefficient of \(u_{i+r,j+s,k+t}\), \(F_{i,j,k}\) is the right-hand side of the finite difference equation and \(\mathcal{S}_{i,j,k}\) is the node set that contains all grid nodes with \(c_{r,s,t}\neq 0\) at \(P_{i,j,k}\). Then we define regular nodes \(\mathcal{R}_{h}\) and irregular nodes \(\mathcal{I}_{h}\) as follows,
\[\mathcal{R}_{h}=\{P_{i,j,k}|S_{i,j,k}\cap\Omega_{h}=\emptyset\text{ or }S_{i,j,k}\cap\Omega_{h}^{C}=\emptyset\}, \tag{32}\] \[\mathcal{I}_{h}=\{P_{i,j,k}|S_{i,j,k}\cap\Omega_{h}\neq\emptyset \text{ and }S_{i,j,k}\cap\Omega_{h}^{C}\neq\emptyset\}, \tag{33}\]
At irregular nodes, since the finite difference approximation is taken across the discontinuity at the interface, large local truncation errors may occur and result in inaccurate or even divergent results. Precisely, let \(\mathcal{A}_{h}\) denote the difference operator in the finite difference scheme (31). Suppose the local truncation error is on the order of \(\mathcal{O}(h^{p})\) at a regular node. The local truncation error at an irregular node is given by
\[E_{h}(x_{i},y_{j},z_{k})=\mathcal{A}_{h}u(x_{i},y_{j},z_{k})-F_{ i,j,k}\] \[=\left\{\begin{aligned} &\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}\cap \mathcal{S}_{i,j,k}}c_{r,s,t}u^{+}(x_{i+r},y_{j+s},z_{k+t})\\ &+\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}^{C}\cap\mathcal{S}_{i,j,k}}c_ {r,s,t}u^{-}(x_{i+r},y_{j+s},z_{k+t})-F_{i,j,k},\quad P_{i,j,k}\in\Omega_{h}, \\ &\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}^{C}\cap\mathcal{S}_{i,j,k}}c_ {r,s,t}u^{-}(x_{i+r},y_{j+s},z_{k+t})\\ &+\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}\cap\mathcal{S}_{i,j,k}}c_{r, s,t}u^{+}(x_{i+r},y_{j+s},z_{k+t})-F_{i,j,k},\quad P_{i,j,k}\in\Omega_{h}^{C}, \end{aligned}\right.\] \[=\left\{\begin{aligned} &\sum_{P_{i+r,j+s,k+t}\in\mathcal{S}_{i,j,k}}c_ {r,s,t}u^{+}(x_{i+r},y_{j+s},z_{k+t})\\ &+\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}^{C}\cap\mathcal{S}_{i,j,k}}c_ {r,s,t}(u^{-}-u^{+})(x_{i+r},y_{j+s},z_{k+t})-F_{i,j,k},\quad P_{i,j,k}\in \Omega_{h},\\ &\sum_{P_{i+r,j+s,k+t}\in\mathcal{S}_{i,j,k}}c_{r,s,t}u^{-}(x_{i+ r},y_{j+s},z_{k+t})\\ &+\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}\cap\mathcal{S}_{i,j,k}}c_ {r,s,t}(u^{+}-u^{-})(x_{i+r},y_{j+s},z_{k+t})-F_{i,j,k},\quad P_{i,j,k}\in \Omega_{h}^{C},\end{aligned}\right.\] \[=\left\{\begin{aligned} &\frac{1}{h^{2}}\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}^{C}\cap \mathcal{S}_{i,j,k}}c_{r,s,t}(u^{-}-u^{+})(x_{i+r},y_{j+s},z_{k+t})+\mathcal{O }(h^{p}),\quad P_{i,j,k}\in\Omega_{h},\\ &\frac{1}{h^{2}}\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}\cap\mathcal{S}_ {i,j,k}}c_{r,s,t}(u^{+}-u^{-})(x_{i+r},y_{j+s},z_{k+t})+\mathcal{O}(h^{p}), \quad P_{i,j,k}\in\Omega_{h}^{C},\end{aligned}\right. \tag{34}\]
where \(u^{+}\) and \(u^{-}\) are two smooth functions that coincide with \(u\) in the domain \(\Omega\) and \(\Omega^{C}\), respectively. It can be found that the leading term in the local truncation error at an irregular node is on the order of \(\mathcal{O}(h^{-2})\), which is not acceptable for the sake of accuracy. The problem can be fixed by including the leading terms of the local truncation error, as correction terms, into the final finite difference equations. Define the correction function
\(C(\mathbf{x})=u^{+}(\mathbf{x})-u^{-}(\mathbf{x})\). Then, the corrected finite difference scheme can be written as
\[\sum_{P_{i+r,j+s,k+t}\in\mathcal{S}_{i,j,k}}c_{r,s,t}u_{i+r,j+s,k+t}=F_{i,j,k}+C_ {i,j,k}. \tag{35}\]
where the correction term \(C_{i,j,k}\) is given by
\[C_{i,j,k}=\begin{cases}0,&P_{i,j,k}\in\mathcal{R}_{h},\\ -\dfrac{1}{h^{2}}\sum_{P_{i+r,j+s,k+t}\in\Omega_{k}^{C}\cap\mathcal{S}_{i,j,k} }c_{r,s,t}C(x_{i+r},y_{j+s},z_{k+t}),&P_{i,j,k}\in\Omega_{h}\cap\mathcal{I}_{h },\\ \dfrac{1}{h^{2}}\sum_{P_{i+r,j+s,k+t}\in\Omega_{h}\cap\mathcal{S}_{i,j,k} }c_{r,s,t}C(x_{i+r},y_{j+s},z_{k+t}),&P_{i,j,k}\in\Omega_{h}^{C}\cap\mathcal{I }_{h}.\end{cases} \tag{36}\]
_Remark 5.1_.: If exact values of the correction function \(C(\mathbf{x})\) are given, then the local truncation error of the corrected finite difference scheme (35) is on the order \(\mathcal{O}(h^{p})\) at each node. However, it happens only when the interface coincides with grid nodes and \(C(\mathbf{x})\) equals the Dirichlet jump condition \([u]\). In practice, approximate values of the correction function \(C(\mathbf{x})\) are used. For the fourth-order method in this work, the correction function only needs to be approximated with an error on the order of \(\mathcal{O}(h^{5})\) such that the local truncation error becomes \(\mathcal{O}(h^{3})\) at irregular nodes and \(\mathcal{O}(h^{4})\) elsewhere.
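The effect of the correction terms (36) can be verified in isolation. In the sketch below (2D, \(\kappa=0\)), the exact pieces \(u^{+}=x^{2}-y^{2}\) inside a circle and \(u^{-}=0\) outside give the exact correction function \(C=u^{+}-u^{-}\); with stencil weights that already include the \(1/h^{2}\) factor of Eq. (29), the corrected residual at irregular interior nodes drops to rounding level:

```python
import numpy as np

# Verify the corrected scheme (35)-(36) for the 2D compact stencil (29),
# kappa = 0.  Since u+ = x^2 - y^2 is harmonic and u- = 0, we have f = 0 on
# both sides and the exact correction function is C = u+ - u-.
N = 40
h = 1.0 / N
x = np.linspace(0, 1, N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
inside = (X - 0.5)**2 + (Y - 0.5)**2 < 0.3**2
up = X**2 - Y**2                       # u+ on the whole grid
C = up.copy()                          # correction function C = u+ - u-
u = np.where(inside, up, 0.0)          # the piecewise solution on the grid

# 9-point compact stencil of Eq. (29); weights include the 1/h^2 factor
offs = [(r, s) for r in (-1, 0, 1) for s in (-1, 0, 1)]
wgt = {(0, 0): -10 / (3 * h**2)}
wgt.update({o: 2 / (3 * h**2) for o in offs if abs(o[0]) + abs(o[1]) == 1})
wgt.update({o: 1 / (6 * h**2) for o in offs if abs(o[0]) + abs(o[1]) == 2})

res_max = 0.0
for i in range(1, N):
    for j in range(1, N):
        nbrs = [(i + r, j + s) for r, s in offs]
        if inside[i, j] and not all(inside[p] for p in nbrs):
            # irregular node in Omega_h: correction term of Eq. (36)
            corr = -sum(wgt[r, s] * C[i + r, j + s]
                        for r, s in offs if not inside[i + r, j + s])
            # rhs F = f + h^2/12 * Laplacian(f) = 0 for this example
            res = sum(wgt[r, s] * u[i + r, j + s] for r, s in offs) - corr
            res_max = max(res_max, abs(res))
print(res_max)   # rounding level (~1e-11): the O(h^-2) jump error is removed
```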
_Remark 5.2_.: If two interfaces are arbitrarily close, the line segment between two grid nodes may intersect interfaces more than once (see Figure 3). Let \(u^{(i)}\) be restrictions of the piece-wise smooth solution \(u\) in \(\Omega^{(i)}\) for \(i=0,1,2\). Denote by \(C^{(1)}=u^{(1)}-u^{(0)}\) and \(C^{(2)}=u^{(2)}-u^{(0)}\) two correction functions that are solved near \(\Gamma^{(1)}\) and \(\Gamma^{(2)}\). For the correction term \(C_{i,j}\), the value \(C(x_{i+r},y_{j+s})\) in (36) is computed by
\[\begin{split} C(x_{i+r},y_{j+s})&=u^{(1)}(x_{i+r},y_{j+s})-u^{(2)}(x_{i+r},y_{j+s})\\ &=u^{(1)}(x_{i+r},y_{j+s})-u^{(0)}(x_{i+r},y_{j+s})+u^{(0)}(x_{i+ r},y_{j+s})-u^{(2)}(x_{i+r},y_{j+s})\\ &=C^{(1)}(x_{i+r},y_{j+s})-C^{(2)}(x_{i+r},y_{j+s}).\end{split} \tag{37}\]
This is simply adding and subtracting a middle term and is similar to the technique used in [60].

Figure 3: An illustration of a line segment intersecting two interfaces.
### Local Cauchy problem
Suppose \(\Gamma\) is sufficiently smooth and the right-hand side \(f\) is also piece-wise smooth. Denote by \(\Omega_{\Gamma}\) a narrow band around \(\Gamma\) that covers all irregular nodes. Let \(f^{+}\) and \(f^{-}\) be two smooth extension functions of \(f\) in \(\Omega_{\Gamma}\) from two different sides \(\Omega_{i}\) and \(\Omega_{e}\), respectively. Then the function \(\tilde{f}=f^{+}-f^{-}\) is also smooth in \(\Omega_{\Gamma}\). The smoothness of \(\tilde{f}\) is relevant to the accuracy of \(C(x)\), see Remark 5.3. Notice that the correction function \(C(\mathbf{x})\) satisfies the Cauchy problem
\[\begin{split}\Delta C(\mathbf{x})-\kappa C(\mathbf{x})&= \tilde{f}(\mathbf{x}),\qquad\mathbf{x}\in\Omega_{\Gamma},\\ C(\mathbf{x})&=a(\mathbf{x}),\qquad\mathbf{x}\in\Gamma,\\ \partial_{\mathbf{n}}C(\mathbf{x})&=b(\mathbf{x}),\qquad\mathbf{x} \in\Gamma.\end{split} \tag{38}\]
Even though the Cauchy problem is known to be ill-posed in the sense of Hadamard, in that small perturbations in the boundary data grow exponentially away from the boundary and a global numerical solution is difficult to obtain, the correction function is only required at irregular nodes that are close to the boundary \(\Gamma\), so we are only interested in the local solution of the Cauchy problem. In that case, numerical errors can be bounded from above. The localness of the Cauchy problem also suggests that numerical schemes with a small stencil, such as compact finite difference schemes, are preferred to work with for the correction function method.
To locally solve the Cauchy problem (38), we approximate the local solution in the narrow band \(\Omega_{\Gamma}\) with a partition of unity approach, using the quasi-uniform point set \(\{\mathbf{p}_{i}\}_{i=1}^{N_{p}}\subset\Gamma\) of primary points on the boundary \(\Gamma\). Let \(\Omega_{\Gamma,i}\) be a neighborhood of the point \(\mathbf{p}_{i}\). Define \(\Omega_{\Gamma}\) as the union of the neighborhoods
\[\Omega_{\Gamma}=\bigcup_{i=1}^{N_{p}}\Omega_{\Gamma,i}. \tag{39}\]
Then \(\{\Omega_{\Gamma,i}\}_{i=1}^{N_{p}}\) forms an overlapping decomposition of \(\Omega_{\Gamma}\). Note that each \(\Omega_{\Gamma,i}\) should be chosen such that \(\Omega_{\Gamma}\) covers all irregular grid nodes. Unlike the original CFM [33], where \(\Omega_{\Gamma}\) is defined as some particular grid patches relying on the cut pattern of \(\Gamma\) with grid cells, the current definition of \(\Omega_{\Gamma}\) is flexible since it only depends on the location of surface points. This decomposition gives us a simple way to represent \(C(\mathbf{x})\) in \(\Omega_{\Gamma}\).
For the partitions \(\Omega_{\Gamma,i},i=1,2,\cdots,N_{p}\), define the compactly supported weight functions \(\omega_{i}(\mathbf{x})\) such that \(\operatorname{supp}(\omega_{i})=\Omega_{\Gamma,i}\) and
\[\sum_{i=1}^{N_{p}}\omega_{i}(\mathbf{y})\equiv 1,\quad\mathbf{y}\in\Omega_{\Gamma}= \bigcup_{i=1}^{N_{p}}\Omega_{\Gamma,i}. \tag{40}\]
Figure 3: An illustration of a line segment intersecting two interfaces.

In practice, the weight function \(\omega_{i}\) can be constructed in many ways, such as Shepard's method [61]. In this work, we use a simple non-smooth weight function,
\[\omega_{i}(\mathbf{x})=\begin{cases}1,&\text{if $\mathbf{p}_{i}$ is the closest point to $\mathbf{x}$ for $i=1,2,\cdots,N_{p}$,}\\ 0,&\text{otherwise.}\end{cases} \tag{41}\]
We remark that the smoothness of the weight function has a negligible effect on the algorithm. The above simple weight function works very well in all numerical experiments. Suppose \(C_{h,i}(\mathbf{x})\) is an approximation to \(C(\mathbf{x})\) for \(\mathbf{x}\in\Omega_{\Gamma,i}\). With the partition of unity, the complete approximate solution \(C_{h}(\mathbf{x})\) for \(\mathbf{x}\in\Omega_{\Gamma}\) is constructed as the linear combination of the local solutions \(C_{h,i}\),
\[C_{h}(\mathbf{x})=\sum_{i=1}^{N_{p}}\omega_{i}(\mathbf{x})C_{h,i}(\mathbf{x}),\quad\mathbf{x}\in \Omega_{\Gamma}. \tag{42}\]
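For illustration, the combination (40)-(42) with the nearest-point weight (41) reduces to picking the local solution attached to the closest primary point. A minimal Python sketch of this step is given below; the function and variable names are ours, for illustration only, since the paper's implementation is in C++.

```python
import numpy as np

def combine_local_solutions(x, surface_points, local_solutions):
    """Evaluate C_h(x) via (40)-(42).

    surface_points: (N_p, d) array of primary points p_i on Gamma.
    local_solutions: list of callables, local_solutions[i] ~ C_{h,i}.
    With the non-smooth weight (41), only the local solution attached
    to the nearest primary point contributes at x.
    """
    dists = np.linalg.norm(surface_points - x, axis=1)
    i = int(np.argmin(dists))  # omega_i(x) = 1, all other weights vanish
    return local_solutions[i](x)
```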
To this end, we restrict the Cauchy problem (38) to the partition \(\Omega_{\Gamma,i}\) and consider numerically solving a sequence of subproblems for \(i=1,2,\cdots N_{p}\),
\[\Delta C_{i}(\mathbf{x})-\kappa C_{i}(\mathbf{x}) =\tilde{f}(\mathbf{x}), \mathbf{x}\in\Omega_{\Gamma,i}, \tag{43}\] \[C_{i}(\mathbf{x}) =a(\mathbf{x}), \mathbf{x}\in\Gamma\cap\Omega_{\Gamma,i},\] \[\partial_{\mathbf{n}}C_{i}(\mathbf{x}) =b(\mathbf{x}), \mathbf{x}\in\Gamma\cap\Omega_{\Gamma,i}.\]
to obtain numerical solutions \(C_{h,i}(\mathbf{x})\). The restricted problems (43) are both temporally and spatially local, which explains the terminology "local Cauchy problem".
The method is easier to understand if one regards the normal direction of \(\Gamma\) as a time variable and the problems (43) as initial-boundary value problems (IBVPs). Solving the restricted problems in place of the full Cauchy problem (38) is analogous to an explicit method for time-dependent PDEs. In the correction function method, one does not need to be concerned with the stability of the explicit method, since the solution is only advanced one step away from the boundary \(\Gamma\).
_Remark 5.3_.: To obtain an accurate correction function \(C(\mathbf{x})\), the right-hand side \(\tilde{f}(\mathbf{x})\) should be sufficiently smooth. For a fourth-order method, \(C(\mathbf{x})\) is required to be at least \(C^{4}\), and, consequently, \(\tilde{f}(\mathbf{x})\) is required to be at least \(C^{2}\). Numerically, we can use the same partition of unity approach to represent \(\tilde{f}\) in \(\Omega_{\Gamma}\). In each \(\Omega_{\Gamma,i}\), \(\tilde{f}\) is replaced by a simple quadratic function using the jump information of \(f\) (for example, in 2D, we use \([f]\), \([f_{x}]\), \([f_{y}]\), \([f_{xx}]\), \([f_{yy}]\), and \([f_{xy}]\)). There are also several different ways to obtain smooth \(f^{+}\) and \(f^{-}\), such as the PDE-based method [62] and the partition of unity extension (PUX) method [63].
#### 5.3.1 A mesh-free collocation method
Let \(\{\phi_{l,m,n}(\mathbf{x})\}_{l+m+n\leq p}\) denote the basis of Taylor polynomial of degree no more than \(p\), where the subscripts \(l\), \(m\) and \(n\) are non-negative integers. The elements of the basis are given by, for example,
\[\phi_{0,0,0}(x,y,z) =1, \tag{44}\] \[\phi_{1,0,0}(x,y,z) =x,\quad\phi_{0,1,0}(x,y,z)=y,\quad\phi_{0,0,1}(x,y,z)=z,\] \[\phi_{2,0,0}(x,y,z) =x^{2},\quad\phi_{0,2,0}(x,y,z)=y^{2},\quad\phi_{0,0,2}(x,y,z)=z^{ 2},\] \[\phi_{1,1,0}(x,y,z) =xy,\quad\phi_{1,0,1}(x,y,z)=xz,\quad\phi_{0,1,1}(x,y,z)=yz,\] \[\cdots.\]
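The basis can be enumerated programmatically; a short Python sketch (illustrative, not from the paper's code) follows.

```python
from itertools import product
from math import comb

def taylor_basis_indices(p, dim=3):
    """Enumerate the multi-indices (l, m, n) with l + m + n <= p in (44)."""
    return [idx for idx in product(range(p + 1), repeat=dim)
            if sum(idx) <= p]

# e.g. degree p = 4 in 3D gives C(4 + 3, 3) = 35 monomials
assert len(taylor_basis_indices(4)) == comb(7, 3)
```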
The approximate solution \(C_{h,i}(\mathbf{x})\) is expressed as the linear combination of the basis
\[C_{h,i}(\mathbf{x})=\sum_{l+m+n\leq p}d_{l,m,n}\phi_{l,m,n}(\xi,\eta,\zeta),\quad\mathbf{ x}\in\Omega_{\Gamma,i}, \tag{45}\]
where \(\xi\), \(\eta\) and \(\zeta\) are the scaled local coordinates of \(\mathbf{x}\). Suppose \(\mathbf{p}_{i}=(x^{(i)},y^{(i)},z^{(i)})\) is the center point of the local domain \(\Omega_{\Gamma,i}\). The scaled local coordinates \(\tilde{\mathbf{x}}=(\xi,\eta,\zeta)\) of \(\mathbf{x}=(x,y,z)\) are defined as
\[\xi=(x-x^{(i)})/h,\quad\eta=(y-y^{(i)})/h,\quad\zeta=(z-z^{(i)})/h, \tag{46}\]
where \(h\) is the mesh parameter. To determine the coefficients \(d_{l,m,n}\), we replace \(C_{i}\) with \(C_{h,i}\) in the problem (43) and require the equations to be exactly satisfied at multiple points. The resulting method is essentially mesh-free and falls into the category of collocation methods; the chosen points are called "collocation points". Since the problem (43) involves both the bulk PDE and boundary conditions, collocation points are needed both in \(\Omega_{\Gamma,i}\) and on \(\Gamma\cap\Omega_{\Gamma,i}\). Collocation points can be classified into three types based on the equations they satisfy. Let \(\mathbf{x}_{j}^{pde},j=1,2,\cdots,m_{1}\) be the points in \(\Omega_{\Gamma,i}\) where the PDE is satisfied. Let \(\mathbf{x}_{j}^{D},j=1,2,\cdots,m_{2}\) and \(\mathbf{x}_{j}^{N},j=1,2,\cdots,m_{3}\) be the points on \(\Gamma\cap\Omega_{\Gamma,i}\) where the Dirichlet and Neumann conditions are satisfied, respectively. The problem (43) is approximated by the finite-dimensional problem
\[\sum_{l+m+n\leq p}(\Delta-\kappa)\phi_{l,m,n}(\tilde{\mathbf{x}}_{j}^ {pde})d_{l,m,n} =\tilde{f}(\mathbf{x}_{j}^{pde}), \text{for }j=1,2,\cdots,m_{1}\] \[\sum_{l+m+n\leq p}\phi_{l,m,n}(\tilde{\mathbf{x}}_{j}^{D})d_{l,m,n} =a(\mathbf{x}_{j}^{D}), \text{for }j=1,2,\cdots,m_{2}, \tag{47}\] \[\sum_{l+m+n\leq p}\mathbf{n}(\mathbf{x}_{j}^{N})\cdot\nabla\phi_{l,m,n}( \tilde{\mathbf{x}}_{j}^{N})d_{l,m,n} =b(\mathbf{x}_{j}^{N}), \text{for }j=1,2,\cdots,m_{3}.\]
The approximate problem (47) forms a linear system
\[\mathbf{MU}=\mathbf{Q}, \tag{48}\]
where the unknown vector \(\mathbf{U}\) consists of the coefficients \(d_{l,m,n}\).
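To make the construction concrete, the following Python sketch assembles and solves a small system of the form (47)-(48) for a two-dimensional analogue. It works in unscaled local coordinates for simplicity (the paper uses the scaled coordinates of Remark 5.5, which introduces powers of \(h\) into the equations but does not change the structure); all names are illustrative.

```python
import numpy as np
from itertools import product

def assemble_collocation_system(p, pde_pts, f_vals, dir_pts, a_vals,
                                neu_pts, normals, b_vals, kappa=0.0):
    """Assemble and solve the square system M U = Q of (47)-(48),
    written here for a 2D analogue with basis phi_{l,m}(x, y) = x^l y^m,
    l + m <= p, in unscaled local coordinates centered at p_i."""
    basis = [(l, m) for l, m in product(range(p + 1), repeat=2) if l + m <= p]

    def mono(pt, l, m):                      # phi_{l,m}
        return pt[0]**l * pt[1]**m

    def lap(pt, l, m):                       # Laplacian of phi_{l,m}
        val = 0.0
        if l >= 2:
            val += l * (l - 1) * pt[0]**(l - 2) * pt[1]**m
        if m >= 2:
            val += m * (m - 1) * pt[0]**l * pt[1]**(m - 2)
        return val

    def grad(pt, l, m):                      # gradient of phi_{l,m}
        gx = l * pt[0]**(l - 1) * pt[1]**m if l >= 1 else 0.0
        gy = m * pt[0]**l * pt[1]**(m - 1) if m >= 1 else 0.0
        return np.array([gx, gy])

    rows, rhs = [], []
    for pt, f in zip(pde_pts, f_vals):       # (Delta - kappa) C_i = f~
        rows.append([lap(pt, l, m) - kappa * mono(pt, l, m) for l, m in basis])
        rhs.append(f)
    for pt, a in zip(dir_pts, a_vals):       # C_i = a on Gamma
        rows.append([mono(pt, l, m) for l, m in basis])
        rhs.append(a)
    for pt, nv, b in zip(neu_pts, normals, b_vals):  # dC_i/dn = b on Gamma
        rows.append([float(np.dot(nv, grad(pt, l, m))) for l, m in basis])
        rhs.append(b)

    M, Q = np.array(rows), np.array(rhs)
    # the point selection rules of Section 5.4 make M square and invertible
    return np.linalg.solve(M, Q)             # coefficients d_{l,m}
```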
_Remark 5.4_.: The collocation method is closely related to the local coordinate-transformation approach used in previous works [43; 59; 58; 57]. The coordinate-transformation approach can also be viewed as a method for solving the local Cauchy problem (43), since the correction function \(C(\mathbf{x})\) can also be approximated with the jumps of derivatives \([u],[u_{x}],[u_{y}],[u_{z}],[u_{xx}],\cdots\) in terms of Taylor polynomials. However, the derivation of derivative jumps in the coordinate-transformation approach involves repeatedly taking tangential derivatives and applying the chain rule, which requires tedious calculation, especially in high-order and 3D cases. The collocation method introduced here is much simpler, since applying the chain rule is not required.
### Selection of collocation points
Selecting collocation points is an essential part of the mesh-free collocation method to ensure accuracy and stability of the algorithm. Different selection procedures for
collocation points result in different systems (48) and different results. For example, one can choose many collocation points so that their number far exceeds the number of unknowns. In that case, the linear system (48) becomes overdetermined and can be solved in the least-squares sense, which is similar to the method in [33]. Here, an interpolation-type method is employed so that each equation in the system (48) is exactly satisfied. An advantage of using an interpolation-type method is that when a boundary point coincides with a grid node, the correction function is exact at that point, since the Dirichlet jump condition \([u]\) is enforced exactly.
Before describing the selection procedure of collocation points, we emphasize a few key rules:
1. Collocation points should be chosen in \(\Omega_{\Gamma,i}\) for the PDE and on \(\Gamma\cap\Omega_{\Gamma,i}\) for boundary conditions.
2. Collocation points of the same type should be well-separated such that the resulting linear system is non-singular.
3. For each equation in (43), the number of collocation points should be chosen to meet the formal accuracy requirement.
Rule (a) is a basic requirement for consistency of the collocation method. Rule (b) is imposed to avoid a nearly singular or rank-deficient matrix \(\mathbf{M}\) and to ensure stability of the method. For collocation points of the same type to be well-separated, the distance between two different points should have a positive lower bound. Moreover, the number of projections of these points onto each spatial direction should be sufficiently large so that the interpolation bases associated with the points span the polynomial space. Rule (c) ensures accuracy of the collocation method. Note that the three equations in (43) involve derivatives of \(C_{i}\) of different orders, and thus a polynomial approximation of \(C_{i}\) results in a different order of accuracy for each equation. Since the equations in (43) are enforced exactly at collocation points, the collocation problem can also be viewed as an interpolation problem. With the error estimation of polynomial interpolation, one finds that the approximation errors at a point \(\mathbf{x}\) away from collocation points satisfy
\[(\Delta-\kappa)C_{h,i}(\mathbf{x})-\tilde{f}(\mathbf{x}) =\mathcal{O}(h^{p-1}), \mathbf{x}\in\Omega_{\Gamma,i}, \tag{49}\] \[C_{h,i}(\mathbf{x})-a(\mathbf{x}) =\mathcal{O}(h^{p+1}), \mathbf{x}\in\Gamma\cap\Omega_{\Gamma,i},\] \[\partial_{\mathbf{n}}C_{h,i}(\mathbf{x})-b(\mathbf{x}) =\mathcal{O}(h^{p}), \mathbf{x}\in\Gamma\cap\Omega_{\Gamma,i}.\]
To take into account the consistency and stability requirements and to balance the approximation errors, we choose collocation points as interpolation points such that the corresponding Lagrange interpolant on these points has the same order of accuracy as shown in (49). Precisely, collocation points are chosen as interpolation points of a polynomial of degree (i) \((p-2)\) for the PDE; (ii) \(p\) for the Dirichlet boundary condition; and (iii) \((p-1)\) for the Neumann boundary condition. It should be mentioned that the Lagrange interpolant associated with the PDE is in \(d\) space dimensions and those for the boundary conditions are in \((d-1)\) space dimensions. Therefore, to choose collocation points for boundary conditions, we first project the boundary \(\Gamma\) locally onto its reference plane so that the local stencil can be found by working with the Cartesian grid on the planar domain. A good choice of the distribution of collocation points is illustrated in Figure 4. Similar point selection strategies for multivariate interpolation are used in [57, 58, 59].
If collocation points are chosen as above, the number of collocation points equals the number of degrees of freedom. For example, in three space dimensions, the numbers of collocation equations for the PDE, the Dirichlet boundary condition and the Neumann boundary condition are \(\sum_{i=0}^{p-2}(i+1)(i+2)/2\), \((p+1)(p+2)/2\) and \(p(p+1)/2\), respectively. Obviously, it yields
\[N^{eqn}=\sum_{i=0}^{p-2}\frac{(i+1)(i+2)}{2}+\frac{(p+1)(p+2)}{2}+\frac{p(p+1)} {2}=\sum_{i=0}^{p}\frac{(i+1)(i+2)}{2}=N^{dof}. \tag{50}\]
Then the system (48) is square. One can easily verify that a similar result holds in two space dimensions as well. The invertibility of the matrix \(\mathbf{M}\) is difficult to prove since it depends on the geometry of \(\Gamma\). Nevertheless, if the collocation points are chosen as described above, the linear system is always uniquely solvable with a standard decomposition method, such as QR decomposition.
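The identity (50) can be verified directly; a short Python check (illustrative) follows.

```python
def n_eqn(p):
    pde = sum((i + 1) * (i + 2) // 2 for i in range(p - 1))  # degree p - 2
    dirichlet = (p + 1) * (p + 2) // 2                       # degree p
    neumann = p * (p + 1) // 2                               # degree p - 1
    return pde + dirichlet + neumann

def n_dof(p):
    # number of monomials with l + m + n <= p in three variables
    return sum((i + 1) * (i + 2) // 2 for i in range(p + 1))

assert all(n_eqn(p) == n_dof(p) for p in range(1, 9))
```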
_Remark 5.5_.: We suggest using the scaled local coordinates \((\xi,\eta,\zeta)\) instead of the original coordinates \((x,y,z)\) when solving the problem (47). This is equivalent to rescaling the local Cauchy problem so that its characteristic length changes from \(\mathcal{O}(h)\) to \(\mathcal{O}(1)\). Thus, the condition number of the problem (48) is essentially independent of the grid size \(h\), which improves the accuracy and stability of the algorithm by reducing the effect of round-off error. In the numerical experiments, the condition number \(\mathrm{cond}(\mathbf{M})\) is always on the order of \(10^{2}\sim 10^{3}\), regardless of how small the grid size is.
### Extracting boundary data
After solving the linear system of the corrected finite difference scheme, one obtains the numerical solution at Cartesian grid nodes. However, in the KFBI method, one needs to frequently use boundary/interface data, such as the boundary value or normal derivative of the solution, at boundary nodes rather than Cartesian grid nodes. In order to extract boundary data of the numerical solution, Lagrange interpolation is used to compute off-grid data. One should also take into account the jump values of the potential function so that the Lagrange interpolation has high-order accuracy. The correction function \(C(\mathbf{x})\) introduced before offers a suitable way to take into account the non-smoothness of the solution. With the correction function, it is simple to reconstruct smooth data for interpolation from the piece-wise smooth grid values.

Figure 4: Schematics of collocation points in 3D for (a) Dirichlet boundary condition; (b) Neumann boundary condition; (c) PDE. For center points of the local Cauchy problem that are located in the shaded region, collocation points are marked as black circles. Figures (a) and (b) show the projections of collocation points on the reference plane.
For example, given a boundary point \(\mathbf{p}\in\Gamma\), suppose we want to obtain the one-sided limit boundary data of the numerical solution \(v_{h}\) in \(\Omega^{+}\). Let \(\mathbf{q}_{i},i=1,2,\cdots\) be the grid nodes in the interpolation stencil near \(\mathbf{p}\). Suppose the numerical solution \(v_{h}\) is piece-wise smooth and coincides with the smooth functions \(v_{h}^{+}\) and \(v_{h}^{-}\) in \(\Omega^{+}\) and \(\Omega^{-}\), respectively. We add the correction function \(C(\mathbf{q}_{i})\) to the grid value \(v_{h}(\mathbf{q}_{i})\) if \(\mathbf{q}_{i}\in\Omega^{-}\) so that the interpolation data is smooth. Taylor expansion at \(\mathbf{p}\) yields
\[v_{h}(\mathbf{q}_{i})=\sum_{l+m+n\leq p}\frac{1}{l!m!n!}\xi^{l}\eta^{m}\zeta^{n} \frac{\partial^{l+m+n}}{\partial x^{l}\partial y^{m}\partial z^{n}}v_{h}^{+}( \mathbf{p})+\mathcal{O}(|\mathbf{q}_{i}-\mathbf{p}|^{p+1}),\quad\text{if }\mathbf{q}_{i}\in\Omega^{+}, \tag{51}\]

\[v_{h}(\mathbf{q}_{i})+C(\mathbf{q}_{i})=\sum_{l+m+n\leq p}\frac{1}{l!m!n!}\xi^{l}\eta ^{m}\zeta^{n}\frac{\partial^{l+m+n}}{\partial x^{l}\partial y^{m}\partial z^{ n}}v_{h}^{+}(\mathbf{p})+\mathcal{O}(|\mathbf{q}_{i}-\mathbf{p}|^{p+1}),\quad\text{if }\mathbf{q}_{i}\in\Omega^{-}, \tag{52}\]
where \((\xi,\eta,\zeta)^{T}=\mathbf{q}_{i}-\mathbf{p}\). Now, by solving the interpolation problem, the function value and derivatives of \(v_{h}^{+}(\mathbf{p})\) are obtained.
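A minimal Python sketch of the data-correction step behind (51)-(52) is given below; the names and predicates are illustrative, and the polynomial fit on the corrected data proceeds as in Section 5.3.1.

```python
import numpy as np

def smooth_stencil_data(stencil_nodes, v_h, C, in_omega_plus):
    """Prepare one-sided interpolation data near a boundary point p,
    following (51)-(52): grid values in Omega^- are corrected by C so
    that every stencil datum represents the smooth extension v_h^+."""
    data = []
    for q in stencil_nodes:
        val = v_h(q)
        if not in_omega_plus(q):   # q in Omega^-: add the correction C(q)
            val += C(q)
        data.append(val)
    return np.array(data)          # feed into the polynomial interpolation
```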
## 6 Algorithm Summary
In this section, we summarize the proposed method. We take the boundary integral equation (16) as an example; the algorithms for the boundary integral equations (13), (14) and (17) are similar. The complete procedure is summarized in Algorithm 1.
```
1: Compute the right-hand side of (16), in which the integral operators \(\mathcal{S}_{i},\mathcal{K}_{i},\mathcal{K}^{\prime}_{i},\mathcal{D}_{i}, \mathcal{G}_{i},\partial_{\mathbf{n}}\mathcal{G}f,i=1,2\) are computed using the same approach as in Step 3;
2: Given an initial guess for \(\varphi\) and \(\psi\);
3: Compute the integral operators \(\mathcal{S}_{i},\mathcal{K}_{i},\mathcal{K}^{\prime}_{i},\mathcal{D}_{i},i=1,2\) by solving the equivalent interface problem (20);
4: Compute the correction function in \(\Omega_{\Gamma}\) by solving the local Cauchy problem (38);
5: Compute the correction terms in the right-hand side of (35);
6: Solve the linear system of the finite difference scheme (35) with FFT;
7: Compute the integral operator values by interpolation from the grid solution;
8: Generate the next \(\varphi\) and \(\psi\) using the GMRES method and repeat from Step 3 until the residual is less than a given tolerance.
```
**Algorithm 1** Correction function-based KFBI method
In each iteration, individually computing the integral operators would require a total of eight calls of the FFT solver. We stress that the number can be reduced to two since
the terms \(\mathcal{K}_{i}\varphi-\mathcal{S}_{i}\psi\) and \(\mathcal{D}_{i}\varphi-\mathcal{K}_{i}^{\prime}\psi\) for \(i=1\) or \(2\) can be computed by calling the FFT solver only once. By the principle of linear superposition, one only needs to solve the interface problem for the potential \(D\varphi-S\psi\) and interpolate the function value and normal derivative on \(\Gamma\) to obtain both terms. In this way, only two calls of the FFT solver are required in each GMRES iteration.
## 7 Numerical results
In this section, numerical results for boundary and interface problems in both two and three space dimensions are presented. In the following examples, irregular domains and interfaces are given in their level-set forms, which will be specified for each case. Irregular domains and interfaces are embedded into a bounding box \(\mathcal{B}\), which is chosen as a square in 2D and a cube in 3D. The box \(\mathcal{B}\) is uniformly partitioned into \(N\) intervals in each direction for simplicity. The total number of primary boundary points representing the interface \(\Gamma\) is denoted by \(N_{b}\).
The following numerical experiments are performed on a personal computer with 3.80 GHz Intel Core i7 processors. The codes for conducting the numerical experiments are written in C++. The tolerance in the GMRES method is fixed at \(10^{-10}\). GMRES iteration numbers and CPU times (in seconds) are reported to quantify the computational complexity. Numerical errors on the grid node set \(\Omega^{h}\) in the \(L_{2}\) and maximum norms are defined as
\[\|e\|_{2}=\sqrt{\frac{\sum_{\mathbf{x}\in\Omega^{h}}|v(\mathbf{x})-u(\mathbf{x})|^{2}}{M}},\quad\|e\|_{\infty}=\max_{\mathbf{x}\in\Omega^{h}}|v(\mathbf{x})-u(\mathbf{x})|, \tag{53}\]
where \(M\) is the number of grid nodes in \(\Omega^{h}\), \(v\) and \(u\) are the numerical and exact solution, respectively.
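For reference, these norms amount to the following short computation (a Python sketch; the paper's code is in C++).

```python
import numpy as np

def grid_errors(v, u):
    """Discrete L2 and maximum errors of (53); v, u hold the numerical
    and exact values at the M nodes of Omega^h."""
    e = np.asarray(v) - np.asarray(u)
    return np.sqrt(np.mean(e**2)), np.max(np.abs(e))
```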
### Two space dimensional examples
#### 7.1.1 Boundary value problem
In the first example, we solve the 2D Dirichlet BVP of the Poisson equation on a rotated ellipse-shaped domain \(\Omega\)
\[\Omega=\left\{(x,y)\in\mathbb{R}^{2}:\frac{(x\cos\theta+y\sin\theta)^{2}}{a^{2 }}+\frac{(y\cos\theta-x\sin\theta)^{2}}{b^{2}}<1\right\}, \tag{54}\]
with \(a=1,b=0.5,\theta=-\pi/6\). The ellipse is embedded into the bounding box \(\mathcal{B}=[-1.2,1.2]^{2}\). Boundary condition and right-hand side are taken such that the exact solution satisfies
\[u(x,y)=\exp(x)\sin(\cos(\pi/3)x+\sin(\pi/3)y). \tag{55}\]
Numerical results are summarized in Table 1. Nearly fifth-order accuracy in both the \(L_{2}\) and maximum norms can be observed. The increase in convergence order may be caused by the error of quartic polynomial interpolation, which is fifth-order accurate and dominates the numerical error in the vicinity of the boundary. As the grid refines, the GMRES iteration number is essentially independent of grid size, which is a main advantage of the present method. Taking into account FFT solvers and boundary operations in each iteration, the overall computational complexity of the method is given
by \(\mathcal{O}(N^{2}\log N+N_{b})\) in two space dimensions. On coarse grids, the CPU time scaling is close to \(\mathcal{O}(N_{b})\), implying that boundary operations dominate the computational cost. On finer grids, the CPU time is roughly proportional to \(N^{2}\log N\), which implies that the computational cost is dominated by the FFT solver. Isocontours of the numerical solution are also presented in Figure 5.
#### 7.1.2 Interface problem with multiple interfaces
In the second example, we solve the 2D Poisson interface problem with multiple disjoint interfaces, which are eight circles and a five-fold star, on the domain \(\mathcal{B}=[-1.7,1.7]^{2}\). The circles are given by
\[\Gamma_{m}^{cir}=\left\{(x,y)\in\mathbb{R}^{2}:(x-\cos(m\pi/4))^{2}+(y-\sin(m \pi/4))^{2}=r^{2}\right\},\quad m=1,2,\cdots,8, \tag{56}\]
with \(r=0.383\). The five-fold star is given by
\[\Gamma^{star}=\left\{(x,y)\in\mathbb{R}^{2}:\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b ^{2}}=(1.0+\varepsilon\sin(m\arctan(\frac{y}{x})))^{2}\right\}, \tag{57}\]
with \(a=b=0.514,\varepsilon=0.2,m=5\). Two adjacent interfaces may become very close to each other, and, as a result, there may be more than one intersection point between two adjacent grid nodes. Boundary condition, interface condition and right-hand side are chosen such that the exact solution is given by
\[u(x,y)=\left\{\begin{array}{ll}\exp(0.6x+0.8y),&\mbox{in }\Omega_{i},\\ \sin(\pi(x+1)/2)\sin(\pi(y+1)/2),&\mbox{in }\Omega_{e},\end{array}\right. \tag{58}\]
where \(\Omega_{i}\) denotes the union of the interiors of the circles and the star and \(\Omega_{e}\) denotes the exterior domain. The diffusion coefficients are chosen as \(\sigma_{i}=1\) in \(\Omega_{i}\) and \(\sigma_{e}=3\) in \(\Omega_{e}\). For this and the following examples, the subscripts \(i\) and \(e\) represent variables in the interior and exterior regions, respectively.
Numerical results are summarized in Table 2. The solution in both interior and exterior domains has fourth-order accuracy. The GMRES iteration number is essentially independent of grid size, even in the presence of arbitrarily close interfaces. It can be observed that the iteration number is slightly larger on the coarsest grid \(N=64\): a coarse Cartesian grid may not be able to accurately capture the geometry of complex interfaces, which affects the conditioning of the discrete boundary integral equation and causes the increase in the iteration number. Isocontours of the numerical solution are shown in Figure 6.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline grid size & 64\(\times\)64 & 128\(\times\)128 & 256\(\times\)256 & 512\(\times\)512 & 1024\(\times\)1024 \\ \hline \(N_{b}\) & 116 & 230 & 460 & 918 & 1838 \\ \hline itr no. & 10 & 10 & 9 & 9 & 9 \\ \hline \(\|e\|_{2}\) & 7.40E-06 & 1.12E-07 & 3.03E-09 & 6.86E-11 & 2.31E-12 \\ \hline \(\|e\|_{\infty}\) & 1.31E-04 & 3.69E-06 & 1.03E-07 & 3.56E-09 & 1.24E-10 \\ \hline CPU time & 3.91E-03 & 6.35E-03 & 1.86E-02 & 5.81E-02 & 2.33E-01 \\ \hline \end{tabular}
\end{table}
Table 1: Numerical results for the Dirichlet BVP of the Poisson equation on an ellipse-shaped domain.
### Three space dimensional examples
To demonstrate the applicability of the present method, we consider solving three space dimensional problems.
#### 7.2.1 Poisson BVP
This example is the Neumann BVP of the Poisson equation on a torus in 3D. The torus is given by
\[\Omega=\left\{(x,y,z)\in\mathbb{R}^{3}:(1-\sqrt{x^{2}+y^{2}})^{2}+z^{2}<0.4^{2} \right\}. \tag{59}\]
The bounding box is taken as \(\mathcal{B}=[-1.5,1.5]^{3}\). The boundary condition and right-hand side are taken such that the exact solution satisfies
\[u(x,y,z)=\exp(z)(\cos(2x)+\cos(3y)). \tag{60}\]
Note that the solution to the Poisson Neumann BVP is only determined up to an additive constant. We first subtract a constant from the right-hand side of the linear system such that it has zero mean. At the same time, the matrix-vector products in the GMRES iterations are subtracted by a constant such that their means are zero. To compute numerical errors, we need to add a constant to the numerical solution such that it matches the exact solution at a point.
Numerical results and the numerical solution are presented in Table 3 and Figure 7, respectively. Fourth-order accuracy in both the \(L_{2}\) and maximum norms is reached for
the Neumann BVP. In this example, the GMRES iteration number decreases slightly as the grid refines, since the discrete linear system mimics the original well-conditioned BIE and the approximation on a fine grid is more accurate. We believe that the better approximation property of a fine grid yields a linear system with a better condition number, which is the main reason for the faster convergence of the GMRES method.
Theoretically, the computational complexity in three space dimensions is \(\mathcal{O}(N^{3}\log N+N_{b})\). The cost of boundary operations is more significant than in two space dimensions, since the polynomial approximation of the correction function needs more terms in this case. As a result, the observed computational cost is closer to \(\mathcal{O}(N^{2})\), since \(N_{b}=\mathcal{O}(N^{2})\).
#### 7.2.2 Modified Helmholtz BVP
As in the preceding example, we solve the Dirichlet BVP of the modified Helmholtz equation with \(\kappa=100\) on the domain \(\Omega\), which is given by
\[\Omega=\left\{(x,y,z)\in\mathbb{R}^{3}:(1+4x^{2})(1+4y^{2})(1+4z^{2})+64xyz+4x ^{2}+4y^{2}+4z^{2}<3\right\}, \tag{61}\]
This domain has relatively large curvature and is difficult to capture with a coarse grid. The bounding box is taken as \([-0.7,0.7]^{3}\). Boundary condition and right-hand side are chosen such that the exact solution satisfies
\[u(x,y,z)=\exp(z)(\cos(5x)+\cos(2y)). \tag{62}\]
Numerical results are summarized in Table 4. The numerical solution is presented in Figure 8. One can observe that the numerical error is large on the grid \(N=64\) and decreases rapidly when the grid is refined to \(N=128\). This can be explained by the fact that the coarse grid \(N=64\) may not fully capture the fast changes of the boundary, causing large errors in near-interface corrections and surface interpolations. As the grid refines, the decrease in numerical errors matches the anticipated fourth-order accuracy. The coarse grid with \(N=64\) also requires more GMRES iterations to converge. In each iteration, the CPU time scaling is close to \(\mathcal{O}(N_{b})\) due to the dominance of boundary operations.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline grid size & 64\(\times\)64\(\times\)64 & 128\(\times\)128\(\times\)128 & 256\(\times\)256\(\times\)256 & 512\(\times\)512\(\times\)512 \\ \hline \(N_{b}\) & 6168 & 24656 & 98668 & 394548 \\ \hline itr no. & 23 & 21 & 18 & 17 \\ \hline \(\|e\|_{2}\) & 2.39E-04 & 2.76E-05 & 2.38E-06 & 1.68E-07 \\ \hline \(\|e\|_{\infty}\) & 1.18E-03 & 7.97E-05 & 5.98E-06 & 4.01E-07 \\ \hline CPU time & 1.62E+00 & 6.35E+00 & 2.50E+01 & 1.35E+02 \\ \hline \end{tabular}
\end{table}
Table 3: Numerical results for the Neumann BVP of the Poisson equation on a torus.
#### 7.2.3 Interface problem with high-contrast coefficients
In this example, we solve the Poisson interface equation with a four-atom molecular-shaped interface in the domain \(\mathcal{B}=[-1.2,1.2]^{3}\). The interface \(\Gamma\) is given by
\[\Gamma=\left\{\mathbf{x}=(x,y,z)\in\mathbb{R}^{3}:\sum_{k=1}^{4}\exp(-\frac{|\mathbf{x}- \mathbf{x}_{k}|^{2}}{r^{2}})=0.6\right\}, \tag{63}\]
with \(\mathbf{x}_{1}=(\sqrt{3}/3,0,-\sqrt{6}/12)\), \(\mathbf{x}_{2}=(-\sqrt{3}/6,0.5,-\sqrt{6}/12)\), \(\mathbf{x}_{3}=(-\sqrt{3}/6,-0.5,-\sqrt{6}/12)\) and \(\mathbf{x}_{4}=(0,0,\sqrt{6}/4)\).
Boundary condition, interface condition and right-hand side are chosen such that the exact solution is given by

\[u(x,y,z)=\left\{\begin{array}{ll}\sin^{2}(2x)\cos^{2}(2y)\cos(z),&\quad\text {in the interior $\Omega_{i}$},\\ \cos(x)\cos(y)\cos(z),&\quad\text{in the exterior $\Omega_{e}$}.\end{array}\right. \tag{64}\]
The coefficient ratio \(\sigma_{e}/\sigma_{i}\) varies from \(10\) to \(10^{4}\), and its effect on the performance of the present method is studied in this example. This effect was also studied in [29; 14; 35]. The numerical solution is shown in Figure 9. According to the numerical results presented in Table 5, high-contrast coefficients have only a small effect on the numerical accuracy, even in the extreme case \(\sigma_{e}/\sigma_{i}=10^{4}\). The GMRES iteration number is slightly affected by the coefficient ratio on coarse grids. As the grid refines, the GMRES iteration number is rather stable and independent of the coefficient ratio. This is again due to the fact that a fine grid has a better approximation property, as mentioned above.
#### 7.2.4 Interface problem with arbitrarily close interfaces
In this case, we solve the Poisson interface problem with the presence of arbitrarily close interfaces in three space dimensions. Interfaces are taken as a torus and an ellipsoid. The torus-shaped interface \(\Gamma^{tor}\) is given by the boundary of the domain \(\Omega\) defined in (59). The ellipsoid-shaped interface is given by
\[\Gamma^{ell}=\left\{(x,y,z)\in\mathbb{R}^{3}:\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b ^{2}}+\frac{z^{2}}{c^{2}}=1\right\}, \tag{65}\]
with \(a=b=0.6,c=1\). The two interfaces are very close to each other near the curve
\[S=\left\{(x,y,z)\in\mathbb{R}^{3}:\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1, \quad z=0\right\}. \tag{66}\]
In this configuration, since the curve \(S\) is a one-dimensional object, the number of multi-intersection grid line segments (grid line segments that intersect the interfaces multiple times) is on the order of \(\mathcal{O}(N)\). The problem is challenging for classical body-fitted approaches, since the two interfaces are too close to be resolved by a body-fitted mesh. The bounding box \(\mathcal{B}\) is taken as \([-1.5,1.5]^{3}\). Boundary condition, interface condition and right-hand side are chosen such that the exact solution reads
\[u(x,y,z)=\left\{\begin{array}{ll}\sin^{2}(2x)\cos^{2}(2y)\cos(z),&\text{ in torus }\Omega_{i,1},\\ \exp(z)(\cos(2x)+\cos(3y)),&\text{ in ellipsoid }\Omega_{i,2},\\ \cos(x)\cos(y)\cos(z),&\text{ in exterior region }\Omega_{e}.\end{array}\right. \tag{67}\]
The coefficients are chosen as \(\sigma_{i}=1\) in \(\Omega_{i,1}\cup\Omega_{i,2}\) and \(\sigma_{e}=3\) in \(\Omega_{e}\). Numerical results are summarized in Table 6. The numerical solution is visualized in Figure 10. It is observed that fourth-order accuracy is achieved in all regions, except for an accuracy loss on the coarsest grid \(N=64\), for reasons similar to those mentioned before.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\sigma_{e}:\sigma_{i}\) & N & itr no. & \(\|e\|_{2,\Omega_{i}}\) & \(\|e\|_{\infty,\Omega_{i}}\) & \(\|e\|_{2,\Omega_{e}}\) & \(\|e\|_{\infty,\Omega_{e}}\) \\ \hline \multirow{3}{*}{\(10:1\)} & 128 & 11 & 5.15E-08 & 2.18E-07 & 1.43E-08 & 2.33E-07 \\ \cline{2-7} & 256 & 10 & 3.25E-09 & 1.22E-08 & 9.03E-10 & 1.33E-08 \\ \cline{2-7} & 512 & 10 & 2.04E-10 & 7.59E-10 & 5.76E-11 & 8.08E-10 \\ \hline \multirow{3}{*}{\(10^{2}:1\)} & 128 & 13 & 5.78E-08 & 2.74E-07 & 1.89E-08 & 2.92E-07 \\ \cline{2-7} & 256 & 11 & 3.63E-09 & 1.61E-08 & 1.18E-09 & 1.71E-08 \\ \cline{2-7} & 512 & 10 & 2.29E-10 & 1.02E-09 & 7.59E-11 & 1.06E-09 \\ \hline \multirow{3}{*}{\(10^{4}:1\)} & 128 & 14 & 5.86E-08 & 2.81E-07 & 1.94E-08 & 2.99E-07 \\ \cline{2-7} & 256 & 11 & 3.68E-09 & 1.66E-08 & 1.21E-09 & 1.75E-08 \\ \cline{1-1} \cline{2-7} & 512 & 10 & 2.33E-10 & 1.05E-09 & 7.83E-11 & 1.09E-09 \\ \hline \end{tabular}
\end{table}
Table 5: Numerical results for the Poisson interface problems with varying coefficient ratios.

#### 7.2.5 Heterogeneous interface problem

In the final example, we consider the heterogeneous interface problem in three space dimensions. Interfaces are taken as three spheres with radius \(r=0.7\) whose centers are chosen as \(\mathbf{x}_{1}=(0.5,0.5,0.5)\), \(\mathbf{x}_{2}=(-0.5,-0.5,0.5)\) and \(\mathbf{x}_{3}=(0.5,-0.5,-0.5)\), respectively. The coefficients on each side of the interfaces are given as
\[\sigma_{i}=1,\quad\kappa_{i}=0,\quad\sigma_{e}=4,\quad\kappa_{e}=10, \tag{68}\]
such that the unknown function \(u\) satisfies the Poisson equation in the interior region and the modified Helmholtz equation in the exterior region. It is called a heterogeneous interface problem since the elliptic differential operators on the two sides of the interfaces are of different types. The heterogeneous interface problem is a linearized version of the Poisson-Boltzmann equation, which appears in the Poisson-Boltzmann theory in biophysics for modeling solvated biomolecular systems. Table 7 and Figure 11 show the numerical results and the visualization of the numerical solution, respectively. Once again we observe fourth-order convergence in both regions. The number of GMRES iterations is essentially independent of the grid size.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline grid size & 64\(\times\)64\(\times\)64 & 128\(\times\)128\(\times\)128 & 256\(\times\)256\(\times\)256 & 512\(\times\)512\(\times\)512 \\ \hline \(N_{b}\) & 6987 & 27939 & 111774 & 447264 \\ \hline itr no. & 21 & 20 & 20 & 20 \\ \hline \(\|e\|_{\infty,\Omega_{i}}\) & 6.71E-06 & 4.79E-07 & 3.32E-08 & 2.26E-09 \\ \hline \(\|e\|_{\infty,\Omega_{e}}\) & 3.89E-06 & 2.65E-07 & 1.73E-08 & 1.15E-09 \\ \hline CPU time & 1.34E+00 & 6.59E+00 & 4.08E+01 & 3.20E+02 \\ \hline \end{tabular}
\end{table}
Table 7: Numerical results for the heterogeneous interface problem.
## 8 Discussion
This work proposes a new version of the kernel-free boundary integral method for solving elliptic partial differential equations in two and three space dimensions with high accuracy. The KFBI method solves boundary and interface problems with their boundary integral formulations. It computes boundary and volume integrals by solving equivalent interface problems with fast PDE solvers and then obtains boundary values by interpolation.
The equivalent interface problems are simpler than the original problem and are essential for the KFBI method. To accommodate the jump conditions across the interface, a correction function is introduced in the vicinity of the interface to derive corrected finite difference schemes and the boundary interpolation scheme. Unlike the original KFBI method, which applies a local coordinate transformation to calculate correction terms, the new approach obtains correction terms by solving a local Cauchy problem for the correction function. The local Cauchy problem is solved with a mesh-free collocation method, for which we also proposed a strategy for choosing collocation points such that the resulting linear system is accurate and stable. The resulting method avoids repeatedly taking tangential derivatives of the jump conditions and significantly simplifies the derivation procedure.
The presented method is efficient and accurate, as demonstrated through several challenging numerical experiments. The efficiency of the method relies on the well-conditionedness of the boundary integral equations and the applicability of fast PDE solvers (FFT, geometric multigrid methods) on a Cartesian grid. Even though the presented numerical results are based on a fourth-order implementation of the method, the method can be extended to arbitrary accuracy in principle [33].
Finally, we emphasize that the present method is designed for implicitly defined interfaces with level-set formulations. Although this work uses an analytic expression of the level-set function, extending the method to cases when the level-set function is only given at Cartesian grid nodes is straightforward. It may have advantages for solving moving interface problems and free boundary problems when combined with the level-set method [64; 65].
Figure 11: Numerical solution to the heterogeneous interface problem.
## Acknowledgments
This work is financially supported by the Shanghai Science and Technology Innovation Action Plan in Basic Research Area (Project No. 22JC1401700). It is also partially supported by the National Key R&D Program of China (Project No. 2020YFA0712000), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDA25010405) and the National Natural Science Foundation of China (Grant No. DMS-11771290).
|
2310.20476 | Global Transformer Architecture for Indoor Room Temperature Forecasting | A thorough regulation of building energy systems translates in relevant
energy savings and in a better comfort for the occupants. Algorithms to predict
the thermal state of a building on a certain time horizon with a good
confidence are essential for the implementation of effective control systems.
This work presents a global Transformer architecture for indoor temperature
forecasting in multi-room buildings, aiming at optimizing energy consumption
and reducing greenhouse gas emissions associated with HVAC systems. Recent
advancements in deep learning have enabled the development of more
sophisticated forecasting models compared to traditional feedback control
systems. The proposed global Transformer architecture can be trained on the
entire dataset encompassing all rooms, eliminating the need for multiple
room-specific models, significantly improving predictive performance, and
simplifying deployment and maintenance. Notably, this study is the first to
apply a Transformer architecture for indoor temperature forecasting in
multi-room buildings. The proposed approach provides a novel solution to
enhance the accuracy and efficiency of temperature forecasting, serving as a
valuable tool to optimize energy consumption and decrease greenhouse gas
emissions in the building sector. | Alfredo V Clemente, Alessandro Nocente, Massimiliano Ruocco | 2023-10-31T14:09:32Z | http://arxiv.org/abs/2310.20476v1 | # Global Transformer Architecture for Indoor Room Temperature Forecasting
###### Abstract
A thorough regulation of building energy systems translates into relevant energy savings and into better comfort for the occupants. Algorithms that predict the thermal state of a building over a certain time horizon with good confidence are essential for the implementation of effective control systems. This work presents a global Transformer architecture for indoor temperature forecasting in multi-room buildings, aiming at optimizing energy consumption and reducing greenhouse gas emissions associated with HVAC systems. Recent advancements in deep learning have enabled the development of more sophisticated forecasting models compared to traditional feedback control systems. The proposed global Transformer architecture can be trained on the entire dataset encompassing all rooms, eliminating the need for multiple room-specific models, significantly improving predictive performance, and simplifying deployment and maintenance. Notably, this study is the first to apply a Transformer architecture for indoor temperature forecasting in multi-room buildings. The proposed approach provides a novel solution to enhance the accuracy and efficiency of temperature forecasting, serving as a valuable tool to optimize energy consumption and decrease greenhouse gas emissions in the building sector.
## 1 Introduction and Related Work
According to the latest IPCC report [2], the building industry has the potential to reduce its GHG emissions by up to 66%. Building operation is one of the main contributors to this impact, and most of it is attributable to heating and cooling of residential and commercial buildings.
Indoor temperature forecasting plays a critical role in optimizing the performance of HVAC systems, which are responsible for a significant portion of the energy consumption and associated greenhouse gas emissions in buildings. Traditional feedback control systems based on a set-point value do not always take into account the dynamic nature of the thermal environment and can result in inefficient energy use, including overshooting the set-point and unnecessary heating or cooling.
Recent advancements in machine learning and deep learning [8] have enabled the development of more sophisticated indoor temperature forecasting models that can capture the complex interactions between internal and external factors that influence the thermal state of a space. These models are based on a range of inputs, including weather data, occupancy patterns,
building characteristics, and HVAC system performance, and use advanced algorithms to generate accurate and reliable predictions of the thermal state of a space.
In this work, we propose a global Transformer architecture, based on the original vanilla Transformer [9], to forecast indoor room temperature in a multi-room building. The Transformer architecture offers several advantages over statistical machine learning approaches and other deep learning architectures, such as LSTM networks, including the ability to manage inputs of different lengths and to include future covariates. Additionally, the Transformer architecture is highly parallelizable, allowing us to perform experiments on a large-scale dataset. The proposed model is trained on the combined data from all rooms, providing the advantage of having a single model for all rooms, as opposed to a single model per room, which can be challenging to maintain. To incorporate the room ID information into the unified architecture, we introduce a novel approach that avoids the need for a separate model for each room. To the best of our knowledge, we are the first to employ a Transformer architecture for indoor temperature forecasting in a multi-room building. The global approach also offers several benefits over traditional approaches to indoor temperature forecasting. By utilizing a single model for all rooms, we can reduce the burden of maintaining multiple models and improve the overall efficiency of the forecasting process.
## 2 Methods and Dataset
### Dataset
We considered a dataset containing data from 133 rooms \(r\in\mathbf{R}\) in a single building, with a total of 839 time series. These are distributed as follows:
* 29 building sensors that are common across all rooms, such as water flows, water temperatures, solar shading, among others.
* 5 weather forecast variables shared across all rooms such as solar radiation, relative humidity, air temperature, dew point and cloud coverage.
* 7 variables related to the date and time, such as the day of the week and hour of the day, shared across all rooms.
* 5 room-specific variables, such as air temperature setpoint and whether cooling was applied to the room.
* The target variable: room air temperature.
The dataset has an hourly resolution and covers approximately two years, consisting of 19,115 hours. The data is split into train, validation, and test sets with an 82%, 14%, and 4% split, respectively, in chronological order.
The time series are categorized into two sets based on their availability at inference time. The target series \(Y_{r_{i}}\) and past covariates \(C_{r_{i}}^{p}\) are known only until the inference point, while the future covariates \(C_{r_{i}}^{f}\) are known for the forecasting horizon as well as the input window. All time series are past covariates, while only the weather forecasts, date- and time-related variables and the known setpoints are future covariates.
Finally, each room is assigned an id value \(i\), with \(0\leq i\leq|\mathbf{R}|-1\).
### Proposed Models
The goal is to produce a model that is able to predict the room temperature of 133 rooms in a large office building using building sensors, weather forecasts and other available data. In this study, we compare three different types of models, namely a baseline persistence model, a multi-layer LSTM neural network, and a proposed transformer model. For each neural network, we perform a hyperparameter search to determine the best hyperparameters. For all the considered
models (apart from the persistence model), the objective is to approximate the function
\[f(Y_{r_{i},(t-k...t)},C^{p}_{r_{i},(t-k...t)},C^{f}_{r_{i},(t+1...t+n)})=Y_{r_{i},( t+1...t+n)} \tag{1}\]
In Equation 1, \(Y_{r_{i}}\) denotes the temperature of room \(r_{i}\), \(C^{p}_{r_{i}}\) represents the covariates that are known only within the range \([t-k,t]\), and \(C^{f}_{r_{i}}\) is the set of covariates that are known within the range \([t-k,t+n]\), known as _future covariates_.
Both neural network models are residual models relative to the persistence model; that is, they predict
\[F(Y_{r_{i},(t-k...t)},C^{p}_{r_{i},(t-k...t)},C^{f}_{r_{i},(t+1...t+n)},i|\theta )=\bar{Y}_{r_{i},(t+1...t+n)} \tag{2}\]
where
\[Y_{r_{i},(t+1...t+n)}\approx\bar{Y}_{r_{i},(t+1...t+n)}+1Y_{r_{i},(t)} \tag{3}\]
More details on each of the considered models follow.
**Persistence model**. Given that indoor room temperatures are highly correlated in time, a simple and reasonable baseline model is a persistence model. This model is defined as
\[F(Y_{r_{i},(t-k...t)},C^{p}_{r_{i},(t-k...t)},C^{f}_{r_{i},(t+1...t+n)})={\bf 1 }Y_{r_{i},(t)} \tag{4}\]
meaning the model simply uses the room temperature \(Y_{r_{i},(t)}\) as the estimate for the room temperature for the next \(n\) hours.
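A minimal Python sketch of the baseline (4) and of the residual target implied by (2)-(3) is given below; the array names and the random series are illustrative only.

```python
import numpy as np

def persistence_forecast(y_past, n):
    """Baseline (4): repeat the last observed room temperature n times."""
    return np.repeat(y_past[-1], n)

def residual_target(y_future, y_last):
    """Target learned by the residual networks, cf. (2)-(3)."""
    return y_future - y_last

# a hypothetical room series: 96 past hours plus a 12-hour horizon
y = np.random.rand(96 + 12)
baseline = persistence_forecast(y[:96], 12)      # constant forecast
target = residual_target(y[96:], y[95])          # what the networks predict
```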
**LSTM**. This neural network is based on an encoder-decoder architecture that uses LSTM [4] layers. The network includes 8 LSTM layers in both the encoder and decoder, with each layer having 32 units. To feed the encoder, the past covariates \(C^{p}\) and target \(Y\) are concatenated along the channels axis. The hidden state and cell state of the last encoder layer are used to initialize the first decoder layer's hidden state and cell state. The decoder layer's input consists of future covariates \(C^{f}\). The output of the last decoder layer at each timestep is flattened and passed through a linear layer with a RELU non-linearity and a size of 256. The resulting output is then passed through another linear layer with a size of \(n\) to produce the final output.
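A PyTorch-style sketch of this architecture is given below, assuming the hyperparameters stated in the text. Two simplifications are flagged in the comments: the full encoder state is passed to the decoder (the text initializes only the first decoder layer from the last encoder layer), and the flattening head is inferred from the description.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_past_ch, n_future_ch, horizon=12,
                 hidden=32, layers=8):
        super().__init__()
        self.encoder = nn.LSTM(n_past_ch, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(n_future_ch, hidden, layers, batch_first=True)
        self.head = nn.Sequential(
            nn.Flatten(),             # flatten decoder outputs over time
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, horizon),  # residual prediction, Eq. (2)
        )

    def forward(self, past, future):
        # past: (B, k, C_p + 1) -- past covariates concatenated with Y
        _, state = self.encoder(past)
        # simplification: the full encoder state initializes the decoder;
        # the text initializes only the first decoder layer from the
        # last encoder layer
        out, _ = self.decoder(future, state)   # future: (B, n, C_f)
        return self.head(out)
```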
**Transformer**. The proposed model is an encoder-decoder transformer [10] improved using well-known modern methods. The original sinusoidal positional encoding is replaced with a rotary position encoding (RPE) [7], the ReLU activation is replaced with a gated linear unit (GLU) [3] activation, the LayerNorm [1] is replaced with ScaleNorm [5], and finally the normalization layer is moved to be the first layer of a block instead of the last (PreNorm) [11]. These improvements were chosen as they have been shown to increase the performance of Transformers when modeling sequences [7, 6, 5, 1, 11].
Each encoder block is comprised of a ScaleNorm layer, followed by a self-attention layer of size 32, a linear layer of size 128 and finally a GLU activation. There are residual connections between the encoder blocks. The encoder consists of 4 encoder blocks. Similarly, each decoder block is comprised of a ScaleNorm layer, followed by a self-attention layer of size 32, a cross-attention layer of size 32 attending to the output of the encoder, a linear layer of size 128, and finally a GLU activation. The output of the decoder is flattened and passed to a linear layer of size \(n\).
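A sketch of the ScaleNorm and GLU-based PreNorm block in PyTorch is given below; the number of attention heads, the projection that restores the model width after the GLU, and the omission of the rotary position encoding are our assumptions, not specified in the text.

```python
import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """ScaleNorm [5]: l2-normalize the feature vector, then apply one
    learned scalar scale g (initialized to sqrt(dim))."""
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(float(dim) ** 0.5))
        self.eps = eps

    def forward(self, x):
        return self.g * x / x.norm(dim=-1, keepdim=True).clamp(min=self.eps)

class EncoderBlock(nn.Module):
    """PreNorm block: ScaleNorm -> self-attention -> linear -> GLU,
    with a residual connection."""
    def __init__(self, dim=32, ff=128, heads=4):
        super().__init__()
        self.norm = ScaleNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lin = nn.Linear(dim, ff)
        self.glu = nn.GLU(dim=-1)             # halves ff -> ff // 2
        self.proj = nn.Linear(ff // 2, dim)   # assumed: back to model width

    def forward(self, x):
        h = self.norm(x)
        h, _ = self.attn(h, h, h)
        h = self.proj(self.glu(self.lin(h)))
        return x + h                          # residual connection
```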
## 3 Experimental Settings
In order to fairly compare the methods, each experiment is repeated 8 times with different random seeds.
### Hyperparameter selection
The input window \(k\) was set to 96 hours using an informal hyperparameter search. The forecasting horizon \(n\) was set to 12 hours as this was a requirement.
Other hyperparameters of the neural networks were tuned using random search with 128 runs each. These hyperparameters are tuned on the global model version of each neural network and not re-tuned for other experiments.
### Global vs local models
To assess the effectiveness of global models, we evaluated two versions of each model. The first version is a global model that predicts the room temperature for all rooms using a single model trained with all the data. The second version is a local model, where a separate model is trained for each room, denoted by the subscript \(p\). Additionally, we evaluated an alternative version of the transformer model that does not utilize a room embedding, denoted by the subscript _ne_.
We set the input window \(k\) to 96 hours into the past, and the forecasting horizon \(n\) to 12 hours into the future. Both the LSTM and Transformer models are residual models, and we followed Equation 3 as this improved model performance.
### Data Pre-processing
To fairly compare global and local models, two different scaling strategies were used: an _individual scaling strategy_, in which each of the 839 time series is individually scaled to the range \([0,1]\), and a _common scaling strategy_, in which all air temperatures, including room temperature, outside temperature and common-area temperatures, among others, are scaled together to the range \([0,1]\), while all other time series are scaled individually.

Figure 1: Encoder-decoder transformer model following the original paper [10] with the addition of the room id.
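A Python sketch of the two strategies is given below; the names are illustrative, and in practice the scaling statistics should be taken from the training split.

```python
import numpy as np

def scale_dataset(series, temperature_cols, common=True):
    """series: dict name -> 1D array.  Under common scaling, all air
    temperatures share one [min, max]; every other series (and every
    series under individual scaling) is scaled on its own range.
    Non-constant series are assumed."""
    def minmax(x, lo, hi):
        return (x - lo) / (hi - lo)

    if common:
        lo = min(series[c].min() for c in temperature_cols)
        hi = max(series[c].max() for c in temperature_cols)
    return {
        name: (minmax(x, lo, hi) if common and name in temperature_cols
               else minmax(x, x.min(), x.max()))
        for name, x in series.items()
    }
```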
### Evaluation Metrics
The performance of the models was assessed using the mean absolute error (MAE) of the predicted room temperatures, which is defined in Equation 5.
\[MAE(Y,\hat{Y})=\frac{1}{N}\sum_{n=1}^{N}|Y_{n}-\hat{Y}_{n}| \tag{5}\]
Overall, the proposed models were evaluated and compared based on their ability to accurately predict the room temperature in industrial buildings.
Due to compute limitations, each model was trained 8 times with different seeds; the results are reported in Table 1.
## 4 Results and discussion
A summary of the results of our experiments is reported in Table 1.
We find that the best Transformer model outperforms the best LSTM model for both common and individual scaling, with statistical significance (\(p<1.11\times 10^{-6}\) for common scaling, and \(p<2.98\times 10^{-8}\) for individual scaling). These results indicate that Transformer models are more effective than LSTM models for our task.
In terms of the impact of the room embedding, we find that for common scaling, including the room embedding significantly improves the performance of the global transformer model (\(p<5.49\times 10^{-6}\)). However, for individual scaling, the global transformer model without the room embedding performs significantly better (\(p<1.19\times 10^{-5}\)) than the model with the room embedding. The common scaling strategy simplifies batch inference by uniformly scaling all outputs, eliminating the need to track the room ID for each sample in a batch. This approach ensures consistent scaling across all outputs, facilitating output interpretation and analysis.
Our results also show that the global models outperform the local models. Specifically, the global model leads to a performance increase of 16% for the LSTM model and 40% for the Transformer model under common scaling. Under individual scaling, the global models perform 8% better for the LSTM model and 27% better for the Transformer model.
Finally, we find that the choice of scaling strategy depends on the specific model being used. For the Transformer model, individual scaling performs 4% better than common scaling. For the \(Transformer_{ne}\) model, common scaling provides a 1.5% performance increase. In the case of the LSTM model, individual scaling performs best, with a 1% improvement over common scaling.

\begin{table}
\begin{tabular}{l r r} \hline \hline Model & MAE & Std \\ \hline \(Persistence\) & 0.007400 & – \\ \(LSTM_{p}\) & 0.007163 & 0.000140 \\ \(Transformer_{p}\) & 0.006995 & 0.000061 \\ \(Transformer\) & 0.004180 & 0.000052 \\ \(LSTM\) & 0.004161 & 0.000036 \\ \(Transformer_{ne}\) & **0.004033** & 0.000027 \\ \hline \hline \end{tabular}
\begin{tabular}{l r r} \hline \hline Model & MAE & Std \\ \hline \(Persistence\) & 0.049500 & – \\ \(LSTM_{p}\) & 0.039025 & 0.000339 \\ \(Transformer_{p}\) & 0.037900 & 0.000203 \\ \(LSTM\) & 0.028783 & 0.000404 \\ \(Transformer_{ne}\) & 0.027735 & 0.000223 \\ \(Transformer\) & **0.027012** & 0.000215 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged results across 8 runs for each model for the common scaling (left) and individual scaling (right).
Overall, our results suggest that Transformer models are superior to LSTM models in terms of performance for our task, and that the choice of scaling strategy should be tailored to the specific model being used.
## 5 Conclusion and future work
This work presented a global Transformer architecture for indoor temperature forecasting in multi-room buildings, aiming to optimize energy consumption and reduce greenhouse gas emissions. The results demonstrated that Transformer models outperform LSTM models, with statistical significance, for both common and individual scaling approaches. The inclusion of room embedding significantly improves performance for common scaling, while the global Transformer model without room embedding performs better for individual scaling. Notably, the global models consistently outperformed the local models, offering the additional advantage of employing a single model for the entire building and resolving maintenance complexities. The choice of scaling strategy depends on the specific model used. Overall, the proposed Transformer architecture provides an efficient solution for accurate temperature forecasting, enabling energy optimization and emissions reduction in the building sector. Future research directions include analyzing the topology of the room embedding space, exploring the representation of the embedding, incorporating interpretability techniques, and leveraging pretrained Transformer models for generating synthetic data. These efforts will enhance the interpretability, performance, and robustness of the proposed approach for temperature forecasting in multi-room buildings.
|
2309.05971 | Free boundary regularity for tumor growth with nutrients and diffusion | In this paper, we study a tumor growth model where the growth is driven by
nutrient availability and the tumor expands according to Darcy's law with a
mechanical pressure resulting from the incompressibility of the cells. Our
focus is on the free boundary regularity of the tumor patch that holds beyond
topological changes. A crucial element in our analysis is establishing the
regularity of the hitting time T, which records the first time the tumor patch
reaches a given point. We achieve this by introducing a novel
Hamilton-Jacobi-Bellman (HJB) interpretation of the pressure, which is of
independent interest. The HJB structure is obtained by viewing the model as a
limit of the Porous Media Equation (PME) and building upon a new variant of the
AB estimate. Using the HJB structure, we establish a new Hopf-Lax type formula
for the pressure variable. Combined with barrier arguments, the formula allows
us to show that T is C^{\alpha}, where \alpha depends only on the dimension,
which translates into a mild nondegeneracy of the tumor patch evolution.
Building on this and obstacle problem theory, we show that the tumor patch
boundary is regular in spacetime except on a set of Hausdorff dimension at most
$d-\alpha$. On the set of regular points, we further show that the tumor patch
is locally $C^{1,\alpha}$ in space-time. This conclusively establishes that
instabilities in the boundary evolution do not amplify arbitrarily high
frequencies. | Carson Collins, Matt Jacobs, Inwon Kim | 2023-09-12T05:43:12Z | http://arxiv.org/abs/2309.05971v1 | # Free boundary regularity for tumor growth with nutrients and diffusion
###### Abstract.
In this paper, we study a tumor growth model where the growth is driven by nutrient availability and the tumor expands according to Darcy's law with a mechanical pressure resulting from the incompressibility of the cells. Our focus is on the free boundary regularity of the tumor patch that holds beyond topological changes. A crucial element in our analysis is establishing the regularity of the _hitting time_\(T(x)\), namely the first time the tumor patch reaches a given point. We achieve this by introducing a novel Hamilton-Jacobi-Bellman (HJB) interpretation of the pressure, which is of independent interest. The HJB structure is obtained by viewing the model as a limit of the Porous Media Equation (PME) and building upon a new variant of the AB estimate. Using the HJB structure, we establish a new Hopf-Lax type formula for the pressure variable. Combined with barrier arguments, the formula allows us to show that \(T\) is \(C^{\alpha}\) with \(\alpha=\alpha(d)\), which translates into a mild nondegeneracy of the tumor patch evolution. Building on this and obstacle problem theory, we show that the tumor patch boundary is regular in \(\mathbb{R}^{d}\times(0,\infty)\) except on a set of Hausdorff dimension at most \(d-\alpha\). On the set of regular points, we further show that the tumor patch is locally \(C^{1,\alpha}\) in space-time. This conclusively establishes that instabilities in the boundary evolution do not amplify arbitrarily high frequencies.
## 1. Introduction
In this paper, we consider the following tumor growth model:
\[\partial_{t}\rho-\nabla\cdot(\rho\nabla p)=n\rho,\quad p(1-\rho)=0,\quad\rho\leq 1, \tag{1.1}\]
where \(\rho\) denotes the density of tumor cells, \(p\) denotes the pressure, and \(n\) is a nutrient variable that evolves according to the diffusion equation
\[\partial_{t}n-\Delta n=-n\rho. \tag{1.2}\]
The form of the pressure-density relation reflects the incompressibility of the tumor cells, namely the pressure variable \(p\) acts as the _Lagrange multiplier_ for the constraint \(\rho\leq 1\). In short, the system (1.1-1.2) describes a cell growth system where the growth rate is mediated by nutrient availability and the tumor region expands according to Darcy's law with a mechanical pressure driven by the incompressibility of the cells. Models of the form (1.1-1.2) have been extensively studied by both the mathematical and biological communities with various different assumptions on the growth term and density pressure coupling [1, 2, 10, 14, 15], to name just a few. Nonetheless, many mathematical questions remain outstanding, in particular, those regarding the long-time behavior of the tumor boundary region.
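To orient the reader, let us record the formal free boundary structure behind (1.1)-(1.2); this is only a heuristic, and the kinematic condition below is the standard Darcy interpretation rather than something used directly in our analysis. On the saturated region the density is constant, so (1.1) reduces to an elliptic problem for the pressure, while the patch expands with normal velocity given by Darcy's law:
\[-\Delta p=n\ \text{ in }\{\rho=1\},\qquad p=0\ \text{ on }\partial\{\rho=1\},\qquad V=|\nabla p|\ \text{ on }\partial\{\rho=1\},\]
where \(V\) denotes the outward normal velocity of the tumor patch.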
Our focus on the specific source term \(n\rho\) is due to the fact that the model (1.1-1.2) generates particularly interesting behavior of the tumor patch despite the apparent simplicity of the coupling between the tumor and nutrient. It is well-known in the biology literature (through numerical and physical experiments) that the tumor patch generated by this model exhibits a fingering instability (c.f. the discussion in [10], [13], [15], [16]). In particular, it has been unclear whether this fingering phenomenon occurs at some discrete scale or whether it leads to an immediate or eventual loss of regularity in the tumor boundary. Investigating this behavior will be the main goal of this paper.
Although the tumor system nearly corresponds to that of the classical _Hele-Shaw flow_, a mathematically rigorous study of the boundary behavior has remained elusive, due to the difficulties presented by the source term \(\rho n\). In the classical setting, which we will call the _injection problem_, the Hele-Shaw flow is given with no source (namely \(n=0\)) and with a fixed boundary from which the flow is injected at a given rate. For the injection problem, the global structure of the boundary \(\partial\{\rho=1\}\) is well understood by now,
mainly through comparison principle type arguments [14, 15, 16] or via connections to the obstacle problem [1, 17, 18, 19, 20].
For our problem, the comparison approach is immediately ruled out, as the full system (1.1-1.2) does not have comparison (though note that the individual equations when considered separately do have comparison principles). As such, we shall proceed via the obstacle problem analysis. However, there is a highly nontrivial roadblock that must be overcome. Indeed, the source term \(\rho n\) necessarily depends on the space-time geometry of the free boundary, while for the injection case, the source is concentrated at a fixed boundary that is safely away from the free boundary. This makes the analysis of the tumor system considerably more difficult, as the influence of the source term cannot be ignored when blowing up the problem at free boundary points (the fundamental technique for the obstacle problem approach). In particular, to use the obstacle problem toolbox, one must first establish the regularity of the _hitting time_\(T(x)\), which records the first time that the tumor patch reaches the point \(x\) (ignoring the regularity issues, one can formulate \(T(x)\) as \(\inf\{t>0:\rho(t,x)=1\}\), see equation (1.9) for a more careful definition). This is essentially equivalent to establishing a quantitative non-degeneracy property for the tumor expansion speed, a highly nontrivial task.
To establish the regularity of \(T(x)\) we first derive a novel Hopf-Lax type estimate for the pressure (c.f. Theorem 1.1). To the best of our knowledge, such Hopf-Lax type formulas have not previously appeared in the Hele-Shaw literature, perhaps in part due to the difficulty of controlling the time derivative of \(p\). We get around this by viewing equation (1.1) as the incompressible limit of the Porous Media Equation (PME). Given some parameter \(\gamma\in(1,\infty)\), the PME analogue of (1.1) is the equation
\[\partial_{t}\rho_{\gamma}-\nabla\cdot(\rho_{\gamma}\nabla p_{\gamma})=\rho_{ \gamma}n_{\gamma},\quad p_{\gamma}=\rho_{\gamma}^{\gamma}, \tag{1.3}\]
where \(n_{\gamma}\) will solve (1.2) with \(\rho\) replaced by \(\rho_{\gamma}\), and (1.1) can be recovered by sending \(\gamma\to\infty\) (see for instance [16, 17, 18, 19]). Since the pressure-density coupling \(p_{\gamma}=\rho_{\gamma}^{\gamma}\) is explicit for PME, one can rewrite (1.3) solely in terms of the pressure, namely,
\[\partial_{t}p_{\gamma}-|\nabla p_{\gamma}|^{2}-\gamma p_{\gamma}(\Delta p_{ \gamma}+n_{\gamma})=0. \tag{1.4}\]
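Let us spell out the short computation behind (1.4), since we will use it again in Section 3. From \(p_{\gamma}=\rho_{\gamma}^{\gamma}\) we have \(\nabla p_{\gamma}=\gamma\rho_{\gamma}^{\gamma-1}\nabla\rho_{\gamma}\), so (1.3) gives
\[\partial_{t}p_{\gamma}=\gamma\rho_{\gamma}^{\gamma-1}\partial_{t}\rho_{\gamma}=\gamma\rho_{\gamma}^{\gamma-1}\big{(}\rho_{\gamma}\Delta p_{\gamma}+\nabla\rho_{\gamma}\cdot\nabla p_{\gamma}+\rho_{\gamma}n_{\gamma}\big{)}=\gamma p_{\gamma}(\Delta p_{\gamma}+n_{\gamma})+|\nabla p_{\gamma}|^{2},\]
which is (1.4) after rearranging.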
Interestingly, we ignore the parabolic structure of this equation and instead focus on the Hamilton-Jacobi-Bellman (HJB) structure of the first two terms. We then build upon the recent improved versions of the Aronson-Bénilan estimate introduced in [19] to show that the positive part of \(u_{\gamma}:=-\gamma(\Delta p_{\gamma}+n_{\gamma})\) is uniformly bounded with respect to \(\gamma\) in a BMO-type space, implying that our limiting \(p\) must be a supersolution to the HJB equation
\[\partial_{t}p-|\nabla p|^{2}+pu_{+}\geq 0 \tag{1.5}\]
where \(u:=\lim_{\gamma\to\infty}u_{\gamma}.\) From here we finally obtain the Hopf-Lax formula by adapting the techniques of [12] for HJB equations with unbounded coefficients. It is highly intriguing to speculate whether it is possible to obtain (1.5) or Hopf-Lax estimates directly from (1.1), however, we will not consider this line of inquiry further in this work.
Once we have the Hopf-Lax formula, we combine this with a powerful barrier-type argument to prove that for any point \(x\notin\operatorname{spt}(\rho_{0})\) and any sufficiently small radius \(r>0\) there exists an explicit time \(t_{r}(x)<T(x)\) such that, up to time \(t_{r}(x)\), the tumor patch does not occupy any point in \(B_{r}(x)\). From here, it will follow that the hitting time is Hölder continuous with an exponent that depends on the dimension only. With the Hölder continuity of \(T\) in hand, we can turn to the obstacle problem formulation to address the regularity of the free boundary. Here, the novelty in our analysis lies in establishing the global space-time regularity of the free boundary, with data that is far less regular than the typical injection problems that have previously been considered.
Ultimately, through the obstacle problem analysis, we are able to show that the free boundary is regular except at topological singularities, which are unavoidable for general initial data. This conclusively demonstrates that the observed instabilities for the system (1.1-1.2) do not amplify arbitrarily high frequencies and must occur at some fixed scale. In particular, we show that the tumor patch boundary is regular in \(\mathbb{R}^{d}\times(0,\infty)\) except on a relatively closed set of Hausdorff dimension at most \(d-\alpha\) for some \(\alpha\in(0,1)\) depending only on the dimension. On the set of regular points, we further show that the tumor patch is \(C^{1,\alpha}\) in space, locally uniformly in time. It then follows that the associated pressure gradient at
regular boundary points is well-defined and uniformly positive in space-time. Moreover, the direction of the pressure gradient on the set of regular points is continuous in space-time.
In the remainder of the introduction, we give a more complete explanation of the obstacle formulation of our problem and the connection to the hitting time. We then summarize our main results and give a roadmap for the rest of the paper.
### The obstacle problem and the hitting time
To better understand the aforementioned difficulties and the importance of the hitting time, let us describe some properties of the tumor patch and formally introduce the obstacle problem associated to (1.1-1.2). Since our main interest is the regularity properties of the tumor patch, throughout the paper, we will assume that
\[\rho(x,0)\text{ is a characteristic function and }n(x,0)\text{ is uniformly positive.} \tag{1.6}\]
Under these assumptions, \(\rho\) will remain a characteristic function for all times and \(t\mapsto\rho(x,t)\) will be nondecreasing for a.e. \(x\in\mathbb{R}^{d}\).
Transitioning to the obstacle problem formulation, if we integrate the pressure variable in time,
\[w(x,t):=\int_{0}^{t}p(x,s)ds, \tag{1.7}\]
the new variable \(w\), the so-called _Baiocchi transform_, will satisfy an obstacle problem [3]. Since the density is nondecreasing in time, the relation \((1-\rho)p=0\) implies that \((1-\rho)w=0\). Using the patch property for the density, this coupling can be upgraded to the even stronger relation that the sets \(\{w>0\}\) and \(\{\rho=1\}\) coincide spacetime almost everywhere (c.f. Lemma 2.7). This key relation can then be combined with the time integral of (1.1) to see that \(w\) solves the elliptic obstacle problem
\[\Delta w=(1-\rho_{0}-\eta)\chi_{\{w>0\}}, \tag{1.8}\]
where \(\eta(x,t):=\int_{0}^{t}\rho(x,s)n(x,s)\,ds\) (c.f. Lemma 2.9).
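At a purely formal level, (1.8) is obtained by integrating (1.1) in time: since \(p(1-\rho)=0\) forces \(\rho\nabla p=\nabla p\) almost everywhere, integrating (1.1) over \([0,t]\) gives
\[\rho(x,t)-\rho_{0}(x)-\Delta w(x,t)=\int_{0}^{t}n(x,s)\rho(x,s)\,ds=\eta(x,t),\]
and replacing \(\rho(\cdot,t)\) by \(\chi_{\{w(\cdot,t)>0\}}\) (recall that \(\{w>0\}\) and \(\{\rho=1\}\) coincide up to null sets) yields (1.8). The rigorous versions of these manipulations are Lemmas 2.3 and 2.9 below.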
The main challenge in analyzing (1.8) is the presence of the term \(\eta\), which is absent in the obstacle formulation of the classical injection case (due to local regularity results, \(\rho_{0}\) does not affect the free boundary regularity at positive times away from the support of \(\rho_{0}\)). Since \(\rho\) is a characteristic function, it is not clear whether \(\eta\) has any nice regularity. This is crucial, as obstacle problem regularity theory breaks down without Dini continuity of the coefficients (see [1]). Hence, one must hope that the time integral induces some smoothing effect. At the very least, this can only happen if the tumor boundary is strictly expanding. Indeed, if any part of the free boundary stagnates in time, then \(\eta\) will become discontinuous across that portion of the boundary. Note that such stagnation would correspond to a jump in the values of the hitting time function \(T\) introduced earlier. Hence, the smoothness of \(\eta\) and \(T\) are highly intertwined. In fact, it will turn out that we can express \(\eta\) solely in terms of the hitting time \(T\) and \(n\).
To see the connection between \(\eta\) and \(T\), we need to first give a proper definition of the hitting time. Recall that the hitting time \(T(x)\) records the first time that the tumor patch arrives at a point \(x\). We will formally define it using \(w\), the most regular variable at our disposal. Given a point \(x\in\mathbb{R}^{d}\) we set
\[T(x):=\inf\{t>0:w(x,t)>0\}. \tag{1.9}\]
Since the positivity set of \(w\) coincides almost everywhere with the tumor patch, we have \(\rho(x,t)=\operatorname{sgn}_{+}(t-T(x))\) almost everywhere. Hence, \(\eta\) can be rewritten in terms of \(T\) and \(n\) as
\[\eta(x,t)=\operatorname{sgn}_{+}(t-T(x))\int_{T(x)}^{t}n(x,s)\,ds. \tag{1.10}\]
From the above formula, we now see that the spatial regularity of \(\eta\) is more or less equivalent to the regularity of \(T\) and \(n\).
Note that generically \(T\) is at best Lipschitz continuous, as it is easy to cook up a scenario where two different parts of the tumor patch collide with different velocities. In addition, topological changes of the tumor boundary can cause the pressure to suddenly jump with highly nonlocal effects. For instance, the merger of two portions of the boundary can cause far away parts of the boundary to instantaneously start moving faster. Since \(n\) is much better than Lipschitz continuous, it is \(T\) that will determine the regularity of \(\eta\). While we are inclined to believe that the Lipschitz continuity of \(T\) is true, our methods are only able
to show that \(T\) is Hölder continuous with a dimensionally dependent exponent. Nonetheless, the Hölder continuity is sufficient for us to deduce free boundary regularity using the obstacle problem approach. However, let us note that we are forced to work in a much lower regularity regime than what is typically considered in the obstacle problem literature, requiring us to develop new arguments.
### Main results
We are now ready to present the main results of our paper. All of our results will use the following mild assumptions on the initial data.
(A1) \(\rho(\cdot,0)\in L^{1}(\mathbb{R}^{d})\cap\mathrm{BV}(\mathbb{R}^{d})\) and \(\rho(x,0)\in\{0,1\}\) for almost every \(x\in\mathbb{R}^{d}\).
(A2) \(n(\cdot,0)\in W^{1,\infty}(\mathbb{R}^{d})\) and there exists \(c>0\) such that \(n(x,0)\geq c\) for all \(x\in\mathbb{R}^{d}\).
The main results of the first half of the paper are the HJB structure and Hopf-Lax formula for the pressure, along with the Hölder continuity of the hitting time.
**Theorem 1.1**.: _The following holds for the unique weak solution \(p\) to the system (1.1)-(1.2)._
(a) _[Cor. 3.4]_ \(p\) _solves, in the sense of weak solutions,_ \[\partial_{t}p-|\nabla p|^{2}+pu_{+}\geq 0,\] _where for any_ \(\tau>0\) _there exists_ \(b=b(\tau,d)>0\) _such that_ \(bu_{+}e^{bu_{+}}\in L^{1}([0,\tau];\mathbb{R}^{d})\)_._
(b) _[Prop. 3.7]_ _Given points_ \((x_{1},t_{1})\)_,_ \((x_{0},t_{0})\) _with_ \(t_{0}<t_{1}\) _and any decreasing function_ \(\lambda\in L^{1}([0,t_{1}-t_{0}])\)_, there exist constants_ \(C=C(t_{1},d)\) _and_ \(b=b(t_{1},d)\) _such that_ \[p(x_{0},t_{0})\leq e^{\Lambda(t_{1}-t_{0})}\Big{(}p(x_{1},t_{1})+\frac{|x_{1}- x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{\Lambda(s)}\,ds}+C(t_{1}-t_{0})^{7/10}e^{- \lambda(t_{1}-t_{0})}\Big{)}\] _where_ \(\Lambda(t):=\frac{5}{4b}\int_{0}^{t}\lambda(s)\,ds+\frac{t}{b}\log(1+\frac{C}{t})\)_._
(c) _[Theorem 4.2]_ \(T\) _is locally Hölder continuous on the set_ \(\{x\in\mathbb{R}^{d}:0<T(x)<\infty\}\) _with an exponent that depends only on the dimension._
Let us note that Theorem 1.1 parts (a) and (b) represent a significant improvement to our understanding of the Hele-Shaw equation. In particular, any control on the time derivative of the pressure has been previously missing in the literature. Furthermore, the delicate control that we obtain from the Hopf-Lax formula in part (b) is completely new and unexpected.
As we mentioned earlier, we establish the HJB structure by first going through the PME (3.2). For the classic PME without a source term, bounds on the negative part of \(\Delta p_{\gamma}\) are known through the celebrated Aronson-Bénilan estimate [1]. In the presence of a source term, AB-type bounds on quantities taking a similar form to \(u_{\gamma}=-\gamma(\Delta p_{\gamma}+n_{\gamma})\) have been studied in the literature [10, 11, 12, 13, 14]; however, except for [14], these bounds do not scale well with respect to \(\gamma\). We adapt the arguments from [14] to show that \([u_{\gamma}]_{+}\) can be bounded uniformly with respect to \(\gamma\) in a BMO-type space. Once we have the uniform control on \(u_{\gamma,+}\) we can pass to the limit in (3.4) to obtain the result (a). A direct derivation of (a) from the Hele-Shaw flow or the meaning of the singular limit \(u=\lim_{\gamma\to\infty}u_{\gamma}\) in terms of the Hele-Shaw flow remains open.
To obtain (b), we cannot take the usual approach to proving Hopf-Lax type formulas (i.e. differentiating \(p\) along paths) due to the potential unboundedness of \(u_{+}\). To overcome this, we adapt the approach developed in [10], which handles unbounded coefficients by averaging over paths indexed by the unit ball. Our calculation is somewhat different however, as we can exploit the specific structure of \(pu_{+}\) to decompose \(pu_{+}\leq\lambda p+p(u-\lambda)_{+}\) for some scalar \(\lambda\geq 0\). By choosing \(\lambda\) appropriately we can force \(p(u-\lambda)_{+}\) to be small while using a Gronwall argument to handle \(\lambda p\). This allows us to obtain a much more favorable error term in our Hopf-Lax formula compared to [10].
Although Theorem 1.1 (a) and (b) are stated for our particular system (1.1-1.2), equivalent results can be proved for more general tumor growth models where the growth term \(\rho n\) is replaced by \(\rho G\) for some general growth rate \(G\). In particular, our arguments only need \(G\geq c(\tau)>0\) along with some control on \([\partial_{t}G]_{-}\).
As a consequence of the Hopf-Lax formula, we obtain the \(C^{\alpha}\) regularity of \(T\) where \(\alpha=\alpha(d)\). We do this by combining the formula with a novel barrier type argument. Given a point \((x,T(x))\) on the free boundary
and some time \(t_{0}\in(0,T(x))\), we use the Hopf-Lax formula and the values of \(p\) at time \(T(x)\) to construct an explicit supersolution \(\psi\) that dominates \(p\) on \((t_{0},T(x))\times\mathbb{R}^{d}\). The key is that the Hopf-Lax formula allows us to choose the values in such a way that \(\psi\) is zero in a neighborhood of \(x\) up until the hitting time \(T(x)\). Since we are able to explicitly calculate and invert \(t(r)=\inf\{t\in(t_{0},T(x)):\sup_{y\in B_{r}(x)}\psi(y,t)>0\}\) we obtain an upper bound on \(\sup_{y\in B_{r}(x)}T(x)-T(y)\), which implies the Hölder continuity of \(T\).
Some remarks on the previous literature for hitting times are in order. Quantitative regularity of the hitting time for PME has been obtained in [10] for the classical PME and in [11] for the PME with a source term and drift. Nevertheless, both of these results obtain estimates that blow up as \(\gamma\) tends to infinity, due to the lack of a uniform AB estimate on \(u_{\gamma}\). As a result, their approaches are not suitable for our problem. Let us also note that these papers used a rather different method that did not involve the Hopf-Lax approach that we use here. Estimates on the hitting time for a simpler version of (1.1), where \(n\) is replaced by a decreasing function of \(p\), were obtained in [12] for dimensions \(d\leq 3\). Their proof strongly relies on the specific structure of their growth term, which allows them to relate the Hölder continuity of \(T\) to that of the pressure through a clever trick. Again, this approach is not applicable to our problem. Although we also focus on a specific source term, our method is much more general and can be applied to other instances of the Hele-Shaw or Porous Media equation.
The remaining analysis in the paper is devoted to the study of the obstacle problem (1.8), based on the \(C^{\alpha}\) regularity of \(T\). We build on the low-regularity obstacle problem analysis of Blank [1] to establish the space-time regularity of the tumor patch. A crucial fact we use is that the solution of the obstacle problem with \(C^{\alpha}\) data has a unique blow-up limit at each point, allowing us to decompose the boundary into a regular part and a singular part (the regular points have blow-up limits that look like half-planes). A direct application of this dichotomy yields that the boundary has locally finite \(H^{d-1}\) measure for each time, as mentioned for instance in [12]. However, this standard description lacks the geometric information of the free boundary over time. Indeed, the main novelty of our obstacle problem analysis is that we are able to stitch together information from each time \(t\) to obtain regularity of the full space-time boundary \(\Gamma:=\{(x,T(x)):x\in\mathbb{R}^{d}\}\subset\mathbb{R}^{d}\times(0,\infty)\). In particular, we show that \(\Gamma\) is regular in space-time outside of a set of at most Hausdorff dimension \(d-\alpha\), and its outward normal is Hölder continuous in space-time. While space-time analysis of the singular set has been carried out before for the injection problem ([13], [14], [15]), these results have utilized smoothness (at least \(C^{4}\)) of the fixed boundary data in an essential way. A more general time-varying source term was considered in [14], but only for a short range of time that ensures that no topological singularity occurs during the evolution.
Our results are summarized in the following Theorem.
**Theorem 1.2**.: _Let \(\Gamma\) denote the space-time boundary set of the tumor region, i.e.,_ \(\Gamma=\{(x,T(x)):x\in\mathbb{R}^{d}\}\)_._
(a) _[Prop. 5.4, Prop. 5.15]_ _The set_ \(\{0<T(x)<\infty\}\subset\mathbb{R}^{d}\) _decomposes as_ \(R\cup\Sigma\)_, where the set_ \(R\) _of regular points is open in_ \(\mathbb{R}^{d}\) _and the set_ \(\Sigma\) _of singular points is locally contained in a_ \(C^{1}\) _manifold of dimension_ \(d-1\)_._
(b) _[Prop. 5.12]_ _At any_ \(x\in R\)_, the free boundary near_ \((x,T(x))\in\Gamma\) _can be locally represented as a graph_ \(\{x_{n}=f(x^{\prime},t)\}\) _where_ \(f\) _is_ \(C^{1,1}\) _in_ \(x^{\prime}\) _and Lipschitz in time._
(c) _[Prop. 5.9, Cor. 5.10]_ \(p(\cdot,T(x))\) _has linear growth at_ \(x\in R\)_, with locally uniform growth rates. In particular_ \(T\) _is Lipschitz in_ \(R\)_._
(d) _[Prop. 5.6]_ _The map_ \(\nu:R\to\mathcal{S}^{d}\)_, where_ \(\nu(x)\) _denotes the spatial outward normal of_ \(\Gamma\) _at_ \((x,T(x))\)_, is Hölder continuous. In particular_ \(\nabla p(x,T(x))\) _is well-defined for_ \(x\in R\) _and has continuous direction._
Note that, while Theorem 1.2 (d) yields the continuity of the direction of \(\nabla p\) on \(\Gamma\), we cannot expect the same for \(|\nabla p|\): this can be easily seen from examples where a topological change occurs far away from the given free boundary point. In terms of the quadratic blow up limit of \(w\), Theorem 1.2 (a) and (d) yield its continuity at free boundary points, along \(\Sigma\) and along \(R\). As a consequence, \(D^{2}w(x,T(x))\) exists on \(\Sigma\) and exists in a one-sided sense on \(R\), and is continuous on each set (though not necessarily on their union).
Our last theorem discusses the Hausdorff measure of the free boundary in space-time coordinate. Let us introduce the notation
\[\Omega_{t}:=\{w(\cdot,t)>0\},\quad\Gamma_{t}:=\partial\Omega_{t}. \tag{1.11}\]
**Theorem 1.3** (Corollary 5.16).:
1. _The free boundary_ \(\partial\{w>0\}\) _has Hausdorff dimension_ \(d\) _in_ \((x,t)\)_-coordinates._
2. \(Graph(R)=\{(x,t):x\in\Gamma_{t},x\in R\}\) _is relatively open with locally finite_ \(H^{d}\) _measure._
3. \(Graph(\Sigma)=\{(x,t):x\in\Gamma_{t},x\in\Sigma\}\) _has locally finite_ \(H^{d-\alpha}\) _measure._
Let us mention that we expect \(T\) to be Lipschitz for all points, not just in \(R\). For instance, in the classical setting with constant Dirichlet fixed boundary data, the Lipschitz continuity of \(T\) was shown by [11] by a simple comparison principle. The remaining challenge in our setting lies in the analysis of the singular points. This is an intriguing question as the blow-up profile of the tumor patch at these points suggests that the evolution at these points should be non-degenerate in general. In fact, one might even expect the gradient of \(T\) to vanish at these points. Nonetheless, accurately capturing the hitting time behavior near singular points appears to be out of reach for the moment. It would also be interesting to improve upon our estimate of the singular set, using a generic notion of initial data. While it seems plausible, new ideas seem to be necessary to obtain such a result.
The rest of the paper is organized as follows. In Section 2, we review the basic properties of the system (1.1-1.2) and the connection to the obstacle problem. In Section 3, we establish the HJB structure of the pressure along with the Hopf-Lax formula. In Section 4, we construct a barrier supersolution using the Hopf-Lax formula which allows us to establish the Holder regularity of the hitting time map \(T(x)\). Section 5 builds on the regularity of \(T\) and the existing obstacle theory to investigate the global regularity of the free boundary \(\Gamma:=\partial\{w(x,t)>0\}\).
## 2. Basic properties of the system
Here we recall the notion of solutions to (1.1)-(1.2) and establish their basic properties. We first introduce the notion of weak solution, parallel to the one introduced in [1] for a similar model.
**Definition 2.1**.: A triple \((\rho,p,n)\) is a weak solution to (1.1)-(1.2) for initial data \(\rho_{0}\in L^{1}(\mathbb{R}^{d})\cap BV(\mathbb{R}^{d})\) and \(n_{0}\in L^{\infty}(\mathbb{R}^{d})\cap BV(\mathbb{R}^{d})\) if for any \(\tau>0\),
(i) \(p(1-\rho)=0\) in \(\mathcal{D}^{\prime}(\mathbb{R}^{d}\times[0,\tau])\)
(ii) For any \(\psi\in H^{1}(\mathbb{R}^{d}\times[0,\tau])\) vanishing at time \(\tau\), (2.1) \[\int_{0}^{\tau}\int_{\mathbb{R}^{d}}\nabla\psi\cdot\nabla p-\rho\partial_{t} \psi\,dx\,dt=\int_{\mathbb{R}^{d}}\psi(x,0)\rho_{0}(x)\,dx+\int_{0}^{\tau} \int_{\mathbb{R}^{d}}\psi n\rho\,dx\,dt.\]
(iii) \[\partial_{t}n-\Delta n=-\rho n\text{ in }\mathcal{D}^{\prime}(\mathbb{R}^{d} \times[0,\tau]),\quad n(x,0)=n_{0}(x)\]
(iv) We have \(\rho\in C([0,\tau];L^{1}(\mathbb{R}^{d}))\cap L^{\infty}([0,\tau];BV(\mathbb{R }^{d}))\), \(p\in L^{2}([0,\tau];H^{1}(\mathbb{R}^{d}))\), and \(n\in L^{\infty}(Q_{\tau})\cap L^{\infty}([0,\tau];BV(\mathbb{R}^{d}))\).
Here, \(BV(\mathbb{R}^{d})\) is the space of (not necessarily integrable) functions with finite total variation.
We also record a few useful properties of a weak solution.
**Lemma 2.2**.: _For any \(\tau>0\), \(\varepsilon\in(0,1)\), we have_
(i) \(\rho\in C^{0,1}_{t}L^{1}_{x}([0,\tau];\mathbb{R}^{d})\)_._
(ii) _The support of_ \(\rho\) _in_ \(\mathbb{R}^{d}\times[0,\tau]\) _is compact, and_ \(0\leq\rho\leq 1\)_._
(iii) _For a.e._ \(x\in\mathbb{R}^{d}\)_,_ \(\rho(x,\cdot)\) _is nondecreasing in time._
(iv) \(n\in L^{\infty}([0,\tau]\times\mathbb{R}^{d}),\;\partial_{t}n\in\operatorname{ BMO}([0,\tau]\times\mathbb{R}^{d}),\;D^{2}n\in\operatorname{BMO}([0,\tau] \times\mathbb{R}^{d})\)_._
(v) _We have_ (2.2) \[p(\Delta p+n)=0\text{ in }\mathcal{D}^{\prime}(\mathbb{R}^{d}\times[0,\tau])\] _Also,_ \(p\in L^{\infty}(\mathbb{R}^{d}\times[0,\tau])\)_._
Proof.: Statements (i) and (ii) are proved in [11], Theorem 2.2 and Proposition 3.6.
(iii) follows from Lemma 3.11 of [11], which provides comparison for the equation
\[\partial_{t}\rho^{i}-\nabla\cdot(\rho^{i}\nabla p^{i})=f^{i}\]
Namely, if \(\rho^{0}(x,0)\leq\rho^{1}(x,0)\), \(f^{0}\leq f^{1}\), and \(p^{i}(1-\rho^{i})=0\), then \(\rho^{0}(x,t)\leq\rho^{1}(x,t)\). For our system, since \(n\rho\geq 0\), comparison to the system with \(p^{0}=f^{0}=0\) implies that \(\rho(x,t_{0})\leq\rho(x,t_{1})\) for any \(t_{0}\leq t_{1}\) and a.e. \(x\). The measure zero set where this fails depends on \(t_{0},t_{1}\), so we conclude by applying this with a countable basis of intervals.
Item (iv) follows from parabolic estimates for the heat equation with \(L^{\infty}\) coefficients (see e.g. [10]).
The distributional equation \(p(\Delta p+n)=0\) is also proved in [11], Theorem 2.2. The \(L^{\infty}\) bound for \(p\) follows from the compact support of \(\rho\), and the boundedness of \(n\); one can take a sufficiently large paraboloid supersolution to \(\Delta p=-n\) to get an upper bound.
From the definition of the weak solution, we can derive that \(w\) satisfies an elliptic equation at each time.
**Lemma 2.3**.: _We have \(w(1-\rho)=0\) a.e. in space-time. For each \(t>0\), \(w\) solves_
\[\Delta w(x,t)=\rho(x,t)-\rho(x,0)-\int_{0}^{t}n(x,s)\rho(x,s)\,ds\text{ in }\mathcal{D}^{\prime}(\mathbb{R}^{d}) \tag{2.3}\]
_In particular, for any \(\tau>0\) and \(\varepsilon\in(0,1)\), we have \(w\in C^{0,1}(\mathbb{R}^{d}\times[0,\tau])\cap L^{\infty}_{t}C^{1,1- \varepsilon}([0,\tau],\mathbb{R}^{d}).\)_
Proof.: For the first statement, we simply note that the monotonicity of \(\rho\) in time from Lemma 2.2 implies
\[0\leq w(x,t)(1-\rho(x,t))=\int_{0}^{t}p(x,s)(1-\rho(x,t))\,ds\leq\int_{0}^{t}p (x,s)(1-\rho(x,s))\,ds\equiv 0\]
For the second statement, we first note that both \(\rho\) and the function \(\int_{0}^{t}n\rho\,ds\) are continuous in time into any \(L^{p}\) with \(p<\infty\); this follows from the weak solution definition, since we have \(\rho\in C_{t}L^{1}\) with values in \([0,1]\) and compact support for bounded time intervals, while \(n\) is spacetime continuous from Lemma 2.2. Then to derive a distributional equation for \(w\), we consider Definition 2.1(ii) with \(\psi\) of the form \(\psi(x,t)=\varphi(x)\chi(t)\), where \(\varphi\in C^{\infty}_{c}(\mathbb{R}^{d})\) and \(\chi\in C^{\infty}_{c}([0,\tau))\) with \(\chi\equiv 1\) near \(0\). Using that \(w=\int p\,dt\), we obtain
\[\int_{\mathbb{R}^{d}}\chi\nabla\varphi\cdot\nabla w(\cdot,\tau)\,dx-\int_{ \mathbb{R}^{d}}\int_{0}^{\tau}\partial_{t}\chi\varphi\rho\,dx\,dt=\int_{ \mathbb{R}^{d}}\varphi(x)\rho_{0}(x)\,dx+\int_{\mathbb{R}^{d}}\varphi\int_{0} ^{\tau}\chi n\rho\,dt\,dx\]
If we take for \(\chi\) a sequence of cutoffs valued in \([0,1]\) and converging pointwise to the indicator of \([0,\tau)\), then we can apply the aforementioned time continuity of \(\rho\) and \(\int_{0}^{t}n\rho\,ds\) to obtain the limiting equation
\[\int_{\mathbb{R}^{d}}\nabla\varphi\cdot\nabla w(\cdot,\tau)+\varphi\rho( \cdot,\tau)\,dx=\int_{\mathbb{R}^{d}}\varphi(\rho_{0}+\eta(\cdot,\tau))\,dx\]
from which we conclude (2.3).
Since \(w(x,t)=\int_{0}^{t}p(x,s)\,ds\), the upper bound for \(p\) implies that \(w\) is Lipschitz in time uniformly in space. This will improve to Lipschitz in spacetime once we have \(w\in L^{\infty}_{t}C^{1,1-}_{x}\).
Since \(n\) is bounded on \(\mathbb{R}^{d}\times[0,\tau]\), \(\eta\) is bounded on \(\mathbb{R}^{d}\times[0,\tau]\). Then since \(\Delta w=(1-\rho_{0}-\eta)\chi_{\{w>0\}}\) is uniformly bounded in \(L^{\infty}\), and up to time \(\tau\), \(w(\cdot,t)\) is compactly supported in \(\overline{\Omega_{\tau}}\), it follows that for any \(p\in(1,\infty)\), \(\Delta w(\cdot,t)\in L^{p}(\mathbb{R}^{d})\). Calderón-Zygmund estimates then give \(w(\cdot,t)\in W^{2,p}(\mathbb{R}^{d})\), and thus \(w(\cdot,t)\in C^{1,1-\varepsilon}(\mathbb{R}^{d})\) for any \(\varepsilon>0\), uniformly in \(t\in[0,\tau]\).
From now on we make the assumptions (A1) and (A2) on our initial data \(\rho_{0},n_{0}\).
**Lemma 2.4**.: _Let \(\bar{n}(t):=\inf_{x\in\mathbb{R}^{d}}n(t,x)\). For any \(t>0\), \(\bar{n}(t)\geq e^{-t}\bar{n}(0).\)_
Proof.: Suppose that \(\tilde{n}\) satisfies the equation \(\partial_{t}\tilde{n}-\Delta\tilde{n}=-\tilde{n}\) with constant initial data \(\tilde{n}(0,x)=\bar{n}(0).\) The comparison principle for the heat equation implies that \(\tilde{n}\leq n\) almost everywhere (note that \(n\) is a supersolution of the same equation, since \(-n\rho\geq-n\)). If we define \(\tilde{N}=e^{t}\tilde{n}\), then \(\tilde{N}\) satisfies \(\partial_{t}\tilde{N}-\Delta\tilde{N}=0\) with initial data \(\tilde{N}(0,x)=\bar{n}(0)\). Hence, \(\tilde{N}(t,x)=\tilde{N}(0,x)=\bar{n}(0)\) and thus it follows that \(\tilde{n}(t,x)=e^{-t}\bar{n}(0)\), which implies the result.
The following characterization of the pressure variable replaces the formal description of \(p\) solving the elliptic problem \(-\Delta p=n\) in \(\{\rho=1\}\) with zero Dirichlet data, to avoid ambiguity arising from the potentially irregular boundary of \(\{\rho=1\}\). The argument is similar to ones that have previously appeared in the literature [13, 14, 15, 16]; here we include a proof since our setting is slightly different.
**Lemma 2.5**.: _For almost every time \(t\), the pressure is a solution to the variational problem_
\[p(\cdot,t)=\operatorname*{argmin}_{\varphi(\cdot)(1-\rho(\cdot,t))=0,\;\varphi \geq 0}\,\int_{\mathbb{R}^{d}}\frac{1}{2}|\nabla\varphi(x)|^{2}-\varphi(x)n(x,t) \,dx.\]
Proof.: Given \(\epsilon>0\) define \(p_{\epsilon}(x,t):=\frac{1}{\epsilon}\int_{t-\epsilon}^{t}p(x,s)\,ds\) where we set \(p(x,s)=p(x,0)\) if \(s<0\) and \(p^{\epsilon}(x,t):=\frac{1}{\epsilon}\int_{t}^{t+\epsilon}p(x,s)\,ds\). Fix a time \(t_{0}\) such that \(p_{\epsilon}(t_{0},\cdot),p^{\epsilon}(t_{0},\cdot)\) converge to \(p(t_{0},\cdot)\) in \(H^{1}(\mathbb{R}^{d})\). Choose some nonnegative function \(\varphi\in H^{1}(\mathbb{R}^{d})\) such that \(\varphi(x)(1-\rho(x,t_{0}))=0\) for almost every \(x\in\mathbb{R}^{d}\) (note that space integrals of \(\rho(\cdot,t_{0})\) against functions in \(H^{1}(\mathbb{R}^{d})\) are well defined at any time \(t_{0}\) since \(\partial_{t}\rho\in L^{2}([0,T];H^{-1}(\mathbb{R}^{d}))\), which itself is a consequence of the continuity equation and \(p\in L^{2}([0,T];H^{1}(\mathbb{R}^{d}))\) ).
Integrating equation (1.1) from time \(t_{0}-\epsilon\) to \(t_{0}\), dividing by \(\epsilon\), and integrating against \(\varphi\) we see that
\[\int_{\mathbb{R}^{d}}\varphi(x)\frac{\rho(x,t_{0})-\rho(x,t_{0}-\epsilon)}{ \epsilon}+\nabla\varphi(x)\cdot\nabla p_{\epsilon}(x,t_{0})\,dx=\int_{\mathbb{ R}^{d}}\varphi(x)\frac{1}{\epsilon}\int_{t_{0}-\epsilon}^{t_{0}}\rho(x,s)n(x,s) \,ds\,dx\]
The condition \(\varphi(x)(1-\rho(x,t_{0}))=0\) implies that \(\varphi(x)\frac{\rho(x,t_{0})-\rho(x,t_{0}-\epsilon)}{\epsilon}=\varphi(x) \frac{1-\rho(x,t_{0}-\epsilon)}{\epsilon}\). Combined with the constraint \(\rho\leq 1\), we can conclude that
\[\int_{\mathbb{R}^{d}}\nabla\varphi(x)\cdot\nabla p_{\epsilon}(x,t_{0})\,dx \leq\int_{\mathbb{R}^{d}}\varphi(x)\frac{1}{\epsilon}\int_{t_{0}-\epsilon}^{t_ {0}}\rho(x,s)n(x,s)\,ds\,dx.\]
Applying the same logic to the time integral over the interval \([t_{0},t_{0}+\epsilon]\), we find that
\[\int_{\mathbb{R}^{d}}\nabla\varphi(x)\cdot\nabla p^{\epsilon}(x,t_{0})\,dx \geq\int_{\mathbb{R}^{d}}\varphi(x)\frac{1}{\epsilon}\int_{t_{0}}^{t_{0}+ \epsilon}\rho(x,s)n(x,s)\,ds\,dx.\]
Sending \(\epsilon\to 0\) we can conclude that
\[\int_{\mathbb{R}^{d}}\nabla\varphi(x)\cdot\nabla p(x,t_{0})=\int_{\mathbb{R}^ {d}}\varphi(x)\rho(x,t_{0})n(t_{0},x)=\int_{\mathbb{R}^{d}}\varphi(x)n(x,t_{0}) \tag{2.4}\]
where the final equality follows from the fact that \(\varphi(x)(1-\rho(x,t_{0}))=0\) almost everywhere. The above equation is the Euler-Lagrange equation for the variational problem, thus, combined with the strong convexity of the variational problem, we see that \(p\) solves the variational problem at every time \(t_{0}\) where \(p_{\epsilon}(\cdot,t_{0}),p^{\epsilon}(\cdot,t_{0})\) converge to \(p(\cdot,t_{0})\) in \(H^{1}(\mathbb{R}^{d}).\) Since this must hold for almost every \(t_{0}\in[0,T]\) we are done.
A straightforward consequence of the previous Lemma is the following Lemma which gives a crude comparison between the pressure values at different times. We will obtain a much sharper comparison property in Section 3 when we establish the Hopf-Lax type formula for the pressure.
**Lemma 2.6**.: _Fix some time \(\tau>0\). Given almost any times \(s,t\in[0,\tau]\) such that \(s<t\), there exists a constant \(C(\tau)\) such that_
\[p(s,x)\leq C(\tau)p(t,x).\]
_In particular, this implies_
\[\{x\in\mathbb{R}^{d}:w(s,x)>0\}\subset\{x\in\mathbb{R}^{d}:p(s,x)>0\}\subset\{ x\in\mathbb{R}^{d}:w(t,x)>0\}.\]
Proof.: By Lemma 2.4, there exists a constant \(C(\tau)>0\) such that \(n(x,s)<C(\tau)n(x,t)\) for all \(x\in\mathbb{R}^{d}\). Since \(\rho\) is increasing with respect to time and \(\rho\leq 1\), we know that \(\varphi(x)(1-\rho(x,t))=0\) for any nonnegative function \(\varphi(x)\) such that \(\varphi(x)(1-\rho(x,s))=0\). Let us choose \(\varphi(x)=(p(s,x)-C(\tau)p(t,x))_{+}\). It then follows from Lemma 2.5 that
\[\int_{\mathbb{R}^{d}}\nabla(p(x,s)-C(\tau)p(x,t))_{+}\cdot\nabla p(x,t)\,dx= \int_{\mathbb{R}^{d}}(p(x,s)-C(\tau)p(x,t))_{+}n(x,t)\,dx,\]
and
\[\int_{\mathbb{R}^{d}}\nabla(p(x,s)-C(\tau)p(x,t))_{+}\cdot\nabla p(x,s)\,dx=\int_{ \mathbb{R}^{d}}(p(x,s)-C(\tau)p(x,t))_{+}n(x,s)\,dx.\]
Hence,
\[\int_{\mathbb{R}^{d}}\nabla(p(x,s)-C(\tau)p(x,t))_{+}\cdot\nabla(p(x,s)-C(\tau )p(x,t))\,dx=\int_{\mathbb{R}^{d}}(p(x,s)-C(\tau)p(x,t))_{+}(n(x,s)-C(\tau)n(x, t))\,dx.\]
The left-hand side of the above equation is nonnegative while the right-hand side of the equation is nonpositive. This is only possible if \((p(x,s)-C(\tau)p(x,t))_{+}=0\) almost everywhere.
**Lemma 2.7**.: _Up to a set of measure zero, for any \(t>0\) we have \(\{x\in\mathbb{R}^{d}:\rho(x,t)=1\}=\{x\in\mathbb{R}^{d}:w(x,t)>0\}\)._
Proof.: This is nearly Lemma 4.6 of [13], except that in the diffusion case we lack an explicit formula for the nutrient. Nevertheless, we proceed along the same lines.
From Lemma 2.3, we have \(w(1-\rho)=0\), and thus
\[\{x\in\mathbb{R}^{d}:w(x,t)>0\}\subset\{x\in\mathbb{R}^{d}:\rho(x,t)=1\}\]
Thus, we must show that the set \(A_{t}:=\{x:\rho(x,t)=1,w(x,t)=0\}\) has measure zero.
For this, we observe that \(\Delta w\) vanishes a.e. where \(w\) vanishes, and thus (2.3) implies that
\[\rho(x,0)+\int_{0}^{t}n(x,s)\rho(x,s)\,ds=\rho(x,t)=1\text{ a.e. on }A_{t}\]
From the pressure equation (2.2), any interior point of \(\{\rho(x,0)=1\}\) has positive pressure at every positive time, and our assumptions on the initial data provide that the boundary of this set has zero measure.
Thus, we need only consider the case where \(\int_{0}^{t}n\rho\,ds=1\). Since the nutrient is uniformly positive due to Lemma 2.4, this occurs for at most one time for a given \(x\). On the other hand, since the nutrient is uniformly bounded, this function is continuous in time, and \(x\) must be in \(A_{t}\) for an open set of times before \(\int_{0}^{t}n\rho\,ds=1\) is satisfied. It follows directly that \(A_{t}\) (and, in fact, \(\bigcup_{t}A_{t}\)) is null.
**Lemma 2.8**.: \[T(x):=\inf\{t\geq 0:w(x,t)>0\}=\inf\{t\geq 0:\rho(x,t)=1\}\text{ for a.e. }x\in\mathbb{R}^{d}\]
Proof.: For conciseness, write \(\widetilde{T}(x)=\inf\{t\geq 0:\rho(x,t)=1\}\).
Suppose that for some \(x\), we have \(\widetilde{T}(x)<T(x)\), so that \(w(x,t)=0\) for all \(t<T(x)\). Since Lemma 2.2 gives that \(\rho\) is monotone in time except for a null set of \(x\) which we ignore, we have \(x\in\{y:w(y,t)=0,\rho(y,t)=1\}\) for all \(t\in(\widetilde{T}(x),T(x))\). Hence any such \(x\) is contained in \(\bigcup_{t\in\mathbb{Q}\cap(0,\infty)}\{y:w(y,t)=0,\rho(y,t)=1\}\), which is null by Lemma 2.7.
The other direction is similar. Suppose instead that for some \(x\), we have \(T(x)<\widetilde{T}(x)\). Then \(w(x,t)>0\) for all \(t>T(x)\), while monotonicity implies that for a.e. \(x\) we have \(\rho(x,t)=0\) for all \(t<\widetilde{T}(x)\). Then any such \(x\) is contained in \(\bigcup_{t\in\mathbb{Q}\cap(0,\infty)}\{y:w(y,t)>0,\rho(y,t)=0\}\), which is null by Lemma 2.7.
**Lemma 2.9**.: \(w(\cdot,t)\) _solves the obstacle problem (1.8)._
Proof.: This follows from applying Lemma 2.8 to Lemma 2.3. We have
\[\rho(x,t)=\chi_{\{T<t\}}(x)=\chi_{\{w(\cdot,t)>0\}}(x)\]
for a.e. \(x\) and all \(t\neq T(x)\). Thus, to obtain (1.8) we modify (2.3) by replacing the occurrence of \(\rho\) in \(\int_{0}^{t}n\rho\,ds\) with \(\chi_{\{T<s\}}\) and the other occurrence of \(\rho\) with \(\chi_{\{w(\cdot,t)>0\}}\).
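As an aside for readers who wish to experiment numerically (nothing in the paper depends on this, and all of the choices below — the one-dimensional setting, the frozen value of \(\eta\), and the grid parameters — are our own illustrative assumptions), the fixed-time equation (1.8) is a classical obstacle problem once the right-hand side is given, and can be approximated for instance by projected Gauss-Seidel:

```python
import numpy as np

def obstacle_psor(f, h, sweeps=5000):
    """Projected Gauss-Seidel for the discrete obstacle problem
        w >= 0,   w'' = f on {w > 0},   w = 0 at both endpoints,
    a 1D analogue of Delta w = f * chi_{w>0} from (1.8)."""
    w = np.zeros_like(f)
    for _ in range(sweeps):
        for i in range(1, len(w) - 1):
            # Gauss-Seidel update for w'' = f, projected onto w >= 0.
            w[i] = max(0.0, 0.5 * (w[i - 1] + w[i + 1] - h * h * f[i]))
    return w

# Illustrative data (our own choices): rho_0 = indicator of (0.4, 0.6),
# with eta frozen at the constant value 0.2 on the initial patch.
m = 201
x = np.linspace(0.0, 1.0, m)
h = x[1] - x[0]
rho0 = ((x > 0.4) & (x < 0.6)).astype(float)
eta = 0.2 * rho0
f = 1.0 - rho0 - eta          # right-hand side of Delta w = f * chi_{w>0}

w = obstacle_psor(f, h)
pos = x[w > 1e-12]
print(f"max w = {w.max():.2e}, positivity set ~ [{pos[0]:.2f}, {pos[-1]:.2f}]")
```

On \(\{w>0\}\) the iteration enforces the discrete version of \(\Delta w=f\), while the projection maintains the constraint \(w\geq 0\); in this toy example the computed positivity set slightly overshoots the initial patch, consistent with the expansion of the tumor region.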
Recalling the notation of (1.11), we additionally define
\[\Omega_{\infty}:=\{0\leq T(x)<\infty\}=\bigcup_{t>0}\Omega_{t},\quad\mathcal{O }:=\{0<T(x)<\infty\}=\Omega_{\infty}\setminus\overline{\Omega_{0}}. \tag{2.5}\]
We now prove that the hitting time \(T\) is continuous. While this justifies the characterization of the level sets of \(T\) as the free boundaries \(\partial\Omega_{t}\), it is also an important first step that initiates the regularity analysis of \(T\)
in section 4. The main idea will be to show that a discontinuity must result in a point \(x_{0}\) and times \(t_{0}<t_{1}\) such that \(x_{0}\) is in \(\partial\Omega_{t}\) for \(t_{0}\leq t\leq t_{1}\). Then \(w(\cdot,t_{1})-w(\cdot,t_{0})\) is a positive superharmonic function on \(\Omega_{t_{0}}\), so one would like to apply the Hopf lemma to draw a contradiction between \(w(x_{0},t_{0})=w(x_{0},t_{1})=0\) and \(\nabla w(x_{0},t_{0})=\nabla w(x_{0},t_{1})=0\). Unfortunately, \(\Omega_{t_{0}}\) does not a priori have the regularity needed to apply the Hopf lemma, so we must first use obstacle problem techniques to shift to a setting where we do have such regularity. The key tool in doing so will be the quadratic blowup of \(w\) at free boundary points:
**Lemma 2.10** ([1] Corollary 2.5).: _Let \(u\) be a nonnegative solution to \(\Delta u=f\chi_{\{u>0\}}\), for some \(f\) which is strictly positive and bounded near the free boundary \(\partial\{u>0\}\). If \(x_{0}\) is a free boundary point, then the quadratic blowup sequence \(r^{-2}u(r(x-x_{0})+x_{0})\) is compact in \(C^{1,\alpha}(B_{1}(x_{0}))\) as \(r\to 0^{+}\). Moreover, if \(f\) is continuous at \(x_{0}\), then the subsequential limits solve \(\Delta v=f(x_{0})\chi_{\{v>0\}}\)._
The subsequential limit enjoys better geometry, due to the following property of global solutions to the constant-source obstacle problem:
**Lemma 2.11** ([1] Corollary 7).: _A nonnegative solution to \(\Delta u=\chi_{\{u>0\}}\) on \(\mathbb{R}^{d}\) is convex._
**Proposition 2.12**.:
* \(T\) _is continuous._
* \(x\in\partial\Omega_{t}\) _if and only if_ \(x\in\mathcal{O}\) _and_ \(t=T(x)\)_, for all_ \(x\in\mathbb{R}^{d}\) _and_ \(t>0\)_._
Proof.: First, we verify that the \(\overline{\Omega_{t}}\) are continuous from above, in the sense that for any \(t\) we have:
\[\overline{\Omega_{t}}=\bigcap_{\varepsilon>0}\overline{\Omega_{t+\varepsilon}} \tag{2.6}\]
The forward inclusion is trivial by the monotonicity of \(w\). For the reverse inclusion, we suppose for contradiction that there exists \(x\in\bigcap_{\varepsilon>0}\overline{\Omega_{t+\varepsilon}}\setminus \overline{\Omega_{t}}\). Let \(r\) be sufficiently small that \(B_{r}(x)\subset\{w(\cdot,t)=0\}\). From (2.3), \(\Delta w(\cdot,t+\varepsilon)\geq 1-\varepsilon\|n_{0}\|_{\infty}\) on \(B_{r}(x)\cap\{w(\cdot,t+\varepsilon)>0\}\), and by assumption we have \(x\in\overline{\Omega_{t+\varepsilon}}\) for all \(\varepsilon>0\). It follows by quadratic nondegeneracy for the obstacle problem (Lemma 6.1) that if \(\varepsilon<\frac{1}{2\|n_{0}\|_{\infty}}\), then
\[\sup_{B_{r}(x)}w(\cdot,t+\varepsilon)\geq Cr^{2}\]
uniformly in \(\varepsilon\). Since \(w(\cdot,t)\equiv 0\) on \(B_{r}(x)\), we get a contradiction with the Lipschitz continuity of \(w\) in time by shrinking \(\varepsilon\).
Now, we introduce
\[T_{0}(x):=\inf\{t>0:x\in\overline{\Omega_{t}}\} \tag{2.7}\]
It is immediate that \(T_{0}(x)\leq T(x)=\inf\{t>0:x\in\Omega_{t}\}\). We claim that \(x\in\partial\Omega_{t}\) if and only if \(t\in[T_{0}(x),T(x)]\). It is clear that for \(t<T_{0}(x)\), \(x\notin\overline{\Omega_{t}}\), and for \(t>T(x)\), \(x\in\Omega_{t}\). Since \(x\in\Omega_{T(x)+\varepsilon}\) for every \(\varepsilon>0\), (2.6) gives \(x\in\overline{\Omega_{T(x)}}\). On the other hand, by continuity of \(w\) and minimality of \(T\), we have \(x\notin\Omega_{T(x)}\), so \(x\in\partial\Omega_{T(x)}\). Monotonicity implies that \(x\) is a boundary point for all \(t\leq T(x)\) for which \(x\in\overline{\Omega_{t}}\). We have \(x\in\overline{\Omega_{T_{0}(x)}}\) by using (2.6) with the definition of \(T_{0}\), so we get the claim.
For purely topological reasons related to how each is defined, \(T_{0}\) is lower semicontinuous and \(T\) is upper semicontinuous. To check lower semicontinuity of \(T_{0}\), let \((x_{n})\) be a sequence converging to \(x\) with \(\liminf T_{0}(x_{n})=:t\). Then for any \(\varepsilon>0\), a subsequence of the \(x_{n}\) lies in \(\overline{\Omega_{t+\varepsilon}}\), and thus \(x\in\overline{\Omega_{t+\varepsilon}}\). It follows that \(T_{0}(x)\leq t\). To check upper semicontinuity of \(T\), we note that \(x\in\Omega_{T(x)+\varepsilon}\) for all \(\varepsilon>0\). Any sequence \(x_{n}\) converging to \(x\) eventually has \(T(x_{n})\leq T(x)+\varepsilon\) since \(\Omega_{T(x)+\varepsilon}\) is open, so we conclude that \(\limsup T(x_{n})\leq T(x)\).
Therefore, both parts of the proposition will follow if we can show that \(T_{0}\equiv T\). For this, we will need the following useful property:
\[\lim_{\Omega_{T_{0}(x)}\ni x_{n}\to x}T(x_{n})=T_{0}(x) \tag{2.8}\]
To see this, we note that if \((x_{n})\) is such a sequence, then \(T(x_{n})\leq T_{0}(x)\) for each \(n\). On the other hand, by minimality of \(T_{0}\), \(x\notin\overline{\Omega_{T_{0}(x)-\varepsilon}}\), and so we eventually have \(T(x_{n})\geq T_{0}(x)-\varepsilon\) for any \(\varepsilon>0\).
Finally, we proceed to the proof that \(T_{0}=T\). Suppose that \(x_{0}\in\partial\Omega_{t}\) with \(T_{0}(x_{0})<T(x_{0})\). We will use the obstacle problem theory to compare blowups of \(w\) at \((x_{0},T_{0}(x_{0}))\) and at \((x_{0},t)\) with \(t>T_{0}(x_{0})\)
to derive a contradiction. First let us ensure that the blow-up profiles are well-defined. Due to (1.10) it follows that
\[0\leq\eta(x,t)\leq(t-T(x))_{+}\|n_{0}\|_{\infty}\]
We also get a continuity estimate. Assuming \(T(y)\leq T(x)\), we have either \(T(y)\leq t\leq T(x)\), giving
\[|\eta(x,t)-\eta(y,t)|=\eta(y,t)\leq\|n_{0}\|_{\infty}|T(x)-T(y)|\]
or else \(T(y)\leq T(x)\leq t\) (the remaining case \(t\leq T(y)\) is trivial, as both terms vanish), giving
\[|\eta(x,t)-\eta(y,t)| =\left|\int_{T(x)}^{t}n(x,s)-n(y,s)\,ds-\int_{T(y)}^{T(x)}n(y,s) \,ds\right|\] \[\leq|t-T(x)|\sup_{T(x)\leq s\leq t}|n(x,s)-n(y,s)|+\|n_{0}\|_{ \infty}|T(x)-T(y)|\]
Thus, using the nutrient regularity from Lemma 2.2 and the result of (2.8), we conclude that \(\eta(\cdot,T_{0}(x_{0}))\) restricted to \(\Omega_{T_{0}(x_{0})}\) is continuous at \(x_{0}\), and in a sufficiently small neighborhood of \(x_{0}\), we can ensure that it is less than \(\frac{1}{2}\). Then Lemma 2.10 gives that the family of rescalings \(x\mapsto r^{-2}w(r(x-x_{0})+x_{0},T_{0}(x_{0}))\) are compact as \(r\to 0\) in \(C^{1,\alpha}_{loc}\), and their subsequential limits are nonzero global solutions of
\[\Delta u=\chi_{\{u>0\}}. \tag{2.9}\]
Now, choose \(\tau\in(T_{0}(x_{0}),T(x_{0}))\) sufficiently small such that we still have \(\eta(\cdot,\tau)<\frac{1}{2}\) in some neighborhood of \(x_{0}\). By taking a further subsequence, the discussion above yields a sequence \(r_{n}\to 0\) such that
\[r_{n}^{-2}w(r_{n}(x-x_{0})+x_{0},T_{0}(x_{0}))\to u\text{ and }r_{n}^{-2}w(r_{n}(x-x_{0}) +x_{0},\tau)\to v,\]
for some \(u,v\) in \(C^{1,\alpha}_{loc}(\mathbb{R}^{d})\). Unlike with \(u\), \(\eta(\cdot,\tau)\) restricted to \(\Omega_{\tau}\) is not known to be continuous at \(x_{0}\), since we do not yet know that \(T\) is continuous. In particular, we do not know that \(v\) solves a constant Laplacian obstacle problem. However, we do get \(u(x_{0})=v(x_{0})=0\) and \(\nabla u(x_{0})=\nabla v(x_{0})=0\) from the convergence, and we also have that \(v\geq u\) since \(w(\cdot,\tau)\geq w(\cdot,T_{0}(x_{0}))\).
We will apply the Hopf lemma to \(v-u\) in the domain \(U:=\{u>0\}\). First observe that from the definition of \(u\), we have \(r_{n}(x-x_{0})+x_{0}\in\Omega_{T_{0}(x_{0})}\) if \(x\in U\) and if \(n\) is sufficiently large depending on \(x\). We have checked above that \(\eta(\cdot,\tau)\) restricted to \(\Omega_{T_{0}(x_{0})}\) is continuous at \(x_{0}\). Thus, for any \(x\in U\), we have \(\eta(r_{n}(x-x_{0})+x_{0},\tau)\to\eta(x_{0},\tau)\), and so we conclude that
\[\Delta v=1-\eta(x_{0},\tau)\text{ in }U.\]
Comparing this equation to (2.9), it follows that \(v-u\) satisfies
\[\Delta(v-u)=-\eta(x_{0},\tau)\leq-(\tau-T_{0}(x_{0}))\inf_{t\in[T_{0}(x_{0}), \tau]}n(x_{0},t)<0\]
with the last inequality following from the nutrient lower bound in Lemma 2.4 and the assumption that the initial nutrient is bounded away from \(0\). This implies that \(v-u\) is strictly superharmonic inside \(U\), and so our previous observation that \(v-u\geq 0\) by the monotonicity in time of \(w\) improves to \(v-u>0\) inside \(U\). Lastly let us observe that, from Lemma 2.11, the complement of \(U\) is convex and so \(U\) satisfies the interior ball condition at \(x_{0}\). Putting together the above information, the Hopf lemma applied at \(x_{0}\) implies that \(\nabla v(x_{0})-\nabla u(x_{0})\neq 0\), which is a contradiction. It follows that \(T_{0}=T\), so we finish.
_Remark 2.13_.: An important consequence of Proposition 2.12 is that the spacetime interface is exactly the graph of \(T\) on \(\mathcal{O}\). In other words,
\[\{(x,t):t\in(0,\infty),x\in\partial\Omega_{t}\}=\operatorname{Graph}_{T}( \mathcal{O}):=\{(x,T(x)):x\in\mathcal{O}\} \tag{2.10}\]
This also means that the interface is a \(d\)-dimensional topological manifold, and the regularity of its parametrization in \(d\) spatial variables is exactly that of \(T\). We will use the notation \(\operatorname{Graph}_{T}\) with subsets of \(\mathcal{O}\), which may be understood in this light as projections of the spacetime interface into \(\mathbb{R}^{d}\).
Finally, in light of the regularity of \(w\) and \(T\), we note a natural way to standardize \(\rho\) on measure zero sets. A corresponding standardization of the pressure will need to wait until the next Section, due to the need to preserve certain delicate structures.
**Lemma 2.14**.: \(\partial\Omega_{t}\) _has zero measure in \(\mathbb{R}^{d}\) for all \(t>0\). The weak solution \((\rho,p,n)\) can be taken such that \(\rho\) is upper semicontinuous in space and time, with \(\{\rho(\cdot,t)=1\}=\overline{\Omega_{t}}\) for each \(t\). In particular, the support of \(p\) is then contained in \(\{\rho=1\}\)._
Proof.: By Proposition 2.12, for any \(t>0\) and \(\varepsilon\in(0,t)\), we have \(\partial\Omega_{t}\subset\Omega_{t+\varepsilon}\setminus\Omega_{t-\varepsilon}\). By Lemma 2.7, up to measure zero sets we can replace the right-hand-side with \(\{\rho(\cdot,t+\varepsilon)=1\}\setminus\{\rho(\cdot,t-\varepsilon)=1\}\), and by time continuity of \(\rho\) in \(L^{1}\), the measure of this set goes to \(0\) with \(\varepsilon\). Thus, \(\partial\Omega_{t}\) has zero measure.
Then we claim that \((\rho,p,n)\) with \(\rho\) redefined as \(\chi_{\overline{\{w>0\}}}\) and \(p\) redefined to vanish outside \(\overline{\{w>0\}}\) remains a weak solution as defined in Definition 2.1. Indeed, since this changes \(\rho,p\) by measure zero sets for each time, equation (2.1) is unaffected. On the other hand, we have \(p(1-\rho)=0\) by construction. To check that \(\rho\) is spacetime upper semicontinuous, we only need to show that the set \(\{\rho=1\}:=\{(x,t):t\geq 0,x\in\overline{\Omega_{t}}\}\) is closed. Let \((x_{n},t_{n})\) be a sequence in this set converging to \((x,t)\). Since the \(t_{n}\) converge, they are bounded, and so the sequence is contained in an \(\overline{\Omega_{\tau}}\) for \(\tau\) sufficiently large. Then \(T(x)<\infty\), so we have \(T(x_{n})\to T(x)\) by continuity, and since \(T(x_{n})\leq t_{n}\) for each \(n\) by Proposition 2.12, we have \(T(x)\leq t\). It follows that \(x\in\overline{\Omega_{t}}\), so we conclude.
## 3. AB estimates and the Hopf-Lax bound
In this section, we will show that there exists a nonnegative function \(u_{+}\) such that the pressure is a supersolution to the following Hamilton-Jacobi equation
\[\partial_{t}p-|\nabla p|^{2}+pu_{+}\geq 0. \tag{3.1}\]
We will then use (3.1) to obtain a Hopf-Lax type formula for the pressure. In particular, given a fixed time \(t_{0}\), this will allow us to give lower bounds for the pressure at times \(t>t_{0}\) and upper bounds for the pressure at times \(t<t_{0}\) in terms of \(p(t_{0},\cdot)\). This will give us a very precise way of constructing pressure supersolutions that lead to powerful barrier-type arguments and eventually Hölder regularity of the hitting times (c.f. Section 4).
Let us emphasize that to the best of our knowledge, the Hopf-Lax type bounds we obtain have not previously appeared in the literature for Hele-Shaw type equations and they require some highly nontrivial efforts to obtain. First, to establish (3.1), we go through the Porous Media Equation (PME) and use the fact that our solution \((\rho,p)\) can be obtained as the incompressible limit of solutions \((\rho_{\gamma},p_{\gamma},n_{\gamma})\) of the PME-nutrient system
\[\partial_{t}\rho_{\gamma}-\nabla\cdot(\rho_{\gamma}\nabla p_{\gamma})=\rho_{ \gamma}n_{\gamma},\quad p_{\gamma}=\rho_{\gamma}^{\gamma}, \tag{3.2}\]
\[\partial_{t}n_{\gamma}-\Delta n_{\gamma}=-\rho_{\gamma}n_{\gamma} \tag{3.3}\]
as the scalar parameter \(\gamma\) is sent to infinity, with the nutrient equation keeping the same form as in our original system. The advantage of the PME system is that it is possible to use the relation \(p_{\gamma}=\rho_{\gamma}^{\gamma}\) to rewrite (3.2) solely in terms of the pressure variable \(p_{\gamma}\), which yields the equation
\[\partial_{t}p_{\gamma}-|\nabla p_{\gamma}|^{2}-\gamma p_{\gamma}(\Delta p_{ \gamma}+n_{\gamma})=0. \tag{3.4}\]
The main difficulty in obtaining (3.1) is to show that as \(\gamma\to\infty\), \(u_{\gamma}=-\gamma(\Delta p_{\gamma}+n_{\gamma})\) converges to a meaningful limit object \(u\), whose positive part can be controlled. For the classic PME without a source term, bounds on the negative part of \(\Delta p_{\gamma}\) are known through the celebrated Aronson-Bénilan estimate [1]. In the presence of a source term, AB-type bounds on quantities taking a similar form to \(\gamma(\Delta p_{\gamma}+n_{\gamma})\) have been studied in the literature [12, 13, 14, 15]; however, except for [14], these bounds do not scale well with respect to \(\gamma\). We adapt the arguments from [14] to show that \([u_{\gamma}]_{+}\) can be bounded uniformly with respect to \(\gamma\) in BMO-type spaces. Note that we are unable to get \(L^{\infty}\) bounds on \(u_{\gamma}\), essentially because two key quantities in the estimate, \(\partial_{t}n\) and \(\nabla n\cdot\nabla p\), are not in general bounded in \(L^{\infty}\). It would be interesting to see whether equation (3.1) could be obtained directly from the original system without going through PME, but we leave this question to a future work.
Once we have obtained equation (3.1), there is still significant work required to obtain a Hopf-Lax type control for \(p\). Here the difficulty is that \(u_{+}\) is not bounded in \(L^{\infty}\). Noting that \(p\) should satisfy
\[\partial_{t}p-|\nabla p|^{2}+pu_{+}\geq 0,\]
the derivative of \(p\) along an arbitrary path \(x(t)\) gives
\[\frac{d}{dt}p(x(t),t)=\partial_{t}p(x(t),t)+x^{\prime}(t)\cdot\nabla p(x(t),t)\geq\]
\[|\nabla p(x(t),t)|^{2}-p(x(t),t)u_{+}(x(t),t)+x^{\prime}(t)\cdot\nabla p(x(t),t )\geq-p(x(t),t)u_{+}(x(t),t)-\frac{1}{4}|x^{\prime}(t)|^{2}.\]
Unfortunately, without \(L^{\infty}\) control on \(u_{+}\) it is not clear that time integrals of the final quantity will be well-defined. This prevents the usual approach to proving Hopf-Lax type bounds.
To overcome this, we adapt the approach developed in [1], which handles unbounded coefficients by instead considering an average over paths indexed by the unit ball. Our calculation is somewhat different, however, as we can exploit the specific structure of \(pu_{+}\) to estimate \(pu_{+}\leq\lambda p+p(u-\lambda)_{+}\) for some scalar \(\lambda\geq 0\). By choosing \(\lambda\) appropriately we can force \(p(u-\lambda)_{+}\) to be small while using a Gronwall argument to handle \(\lambda p\). This allows us to obtain a much more favorable error term in our Hopf-Lax formula compared to [1] (c.f. Proposition 3.7).
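For the record, the pointwise inequality behind this decomposition is elementary: for any scalar \(\lambda\geq 0\) one has
\[u_{+}\leq\lambda+(u-\lambda)_{+},\]
so multiplying by \(p\geq 0\) gives \(pu_{+}\leq\lambda p+p(u-\lambda)_{+}\).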
We begin with the aforementioned well-known result that says we can approximate our system (1.1-1.2) with a sequence of smooth solutions to PME.
**Proposition 3.1** (see e.g. [1, 1, 2]).: _There exists a sequence of smooth solutions \((\rho_{\gamma},p_{\gamma},n_{\gamma})\) to the PME-nutrient system (3.2-3.3) with initial data \((\rho_{0,\gamma},n_{0,\gamma})\) such that for any \(\tau>0\) we have that \(\rho_{\gamma}\) converges strongly in \(L^{1}([0,\tau]\times\mathbb{R}^{d})\), \(p_{\gamma},n_{\gamma}\) converge strongly in \(L^{2}([0,\tau];H^{1}(\mathbb{R}^{d}))\) to the unique solution \((\rho,p,n)\) to the system (1.1-1.2) with initial data \((\rho_{0},n_{0})\) as \(\gamma\to\infty\). Furthermore, one may choose \(\rho_{0,\gamma}\) such that \(u_{\gamma,+}(\cdot,0)\) is bounded in \(L^{\infty}(\mathbb{R}^{d})\) uniformly in \(\gamma\)._
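The incompressible limit in Proposition 3.1 is easy to observe numerically. The sketch below is purely illustrative: the explicit one-dimensional scheme, the adaptive step size, and all parameter values are our own assumptions and are not taken from the references above. It evolves (3.2)-(3.3) using the flux identity \(\rho_{\gamma}\nabla p_{\gamma}=\frac{\gamma}{\gamma+1}\nabla\rho_{\gamma}^{\gamma+1}\) (valid for \(p_{\gamma}=\rho_{\gamma}^{\gamma}\)), so that one can watch \(\rho_{\gamma}\) saturate toward \(1\) as \(\gamma\) grows:

```python
import numpy as np

def lap(u, h):
    """Discrete Laplacian with zero-flux (Neumann) endpoints."""
    v = np.empty_like(u)
    v[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    v[0] = 2.0 * (u[1] - u[0]) / h**2
    v[-1] = 2.0 * (u[-2] - u[-1]) / h**2
    return v

def pme_nutrient(gamma, m=120, L=4.0, T=0.25):
    """Explicit scheme for (3.2)-(3.3) in 1D, rewriting the flux as
    rho * p_x = (gamma/(gamma+1)) * (rho^(gamma+1))_x for p = rho^gamma."""
    h = L / (m - 1)
    x = np.linspace(0.0, L, m)
    rho = 0.9 * (np.abs(x - L / 2) < 0.5)   # near-patch initial density
    n = np.ones(m)                          # uniformly positive nutrient
    t = 0.0
    while t < T:
        # Crude adaptive step, bounded by the current maximal diffusivity
        # gamma * rho^gamma (and by the explicit heat-equation limit for n).
        dt = 0.2 * h**2 / max(1.0, gamma * rho.max() ** gamma)
        rho = rho + dt * (gamma / (gamma + 1.0) * lap(rho ** (gamma + 1), h)
                          + rho * n)
        rho = np.clip(rho, 0.0, None)
        n = n + dt * (lap(n, h) - rho * n)
        t += dt
    return x, rho

for gamma in (5, 15, 40):
    x, rho = pme_nutrient(gamma)
    print(f"gamma={gamma:2d}: max rho = {rho.max():.3f}, "
          f"width of {{rho > 1/2}} ~ {(rho > 0.5).sum() * (x[1] - x[0]):.2f}")
```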
Next, we record the following simple result for solutions to the heat equation with \(L^{\infty}\) source.
**Lemma 3.2**.: _For any \(\tau>0\), there exists some \(b_{0}=b_{0}(\tau)>0\) such that \(n\) satisfies the bound_
\[\exp(b_{0}\frac{|\partial_{t}n|+|\Delta n|}{n})-1\in L^{1}([0,\tau]\times \mathbb{R}^{d})\]
Proof.: Thanks to Lemma 2.2 we know that \(\partial_{t}n\) and \(D^{2}n\) are bounded in BMO. Since we also have
\[\frac{1}{2}\|\partial_{t}n\|^{2}_{L^{2}([0,\tau]\times\mathbb{R}^{d})}+\frac{1}{2}\| \nabla n\|^{2}_{L^{2}(\{\tau\}\times\mathbb{R}^{d})}\leq\frac{1}{2}\|\nabla n\|^{2}_{L^{2 }(\{0\}\times\mathbb{R}^{d})}+\frac{1}{2}\|n\|^{2}_{L^{\infty}([0,\tau]\times \mathbb{R}^{d})}\|\rho\|^{2}_{L^{2}([0,\tau]\times\mathbb{R}^{d})},\]
the BMO bound implies the existence of a constant \(c>0\) such that \(\exp(c|\partial_{t}n|)-1,\exp(c|\Delta n|)-1\in L^{1}([0,\tau]\times\mathbb{R}^{ d})\). By the logic of Lemma 2.4, \(n\) is uniformly bounded from below on any time interval. Hence, there must be an appropriate choice of \(b_{0}\) for which the result holds.
**Proposition 3.3**.: _If \((p_{\gamma},n_{\gamma})\) is a smooth solution to the system (3.3-3.4) for some \(\gamma\in(1,\infty)\), then for any \(\tau>0\) there exists \(b>0\) that only depends on \(\tau\) such that \((b[u_{\gamma}]_{+}-1)\exp(b[u_{\gamma}]_{+})+1\) is uniformly bounded in \(L^{1}([0,\tau]\times\mathbb{R}^{d})\) with respect to \(\gamma\), where \(u_{\gamma}:=-\gamma(\Delta p_{\gamma}+n_{\gamma})\)._
Proof.: If we differentiate \(\frac{1}{\gamma}u_{\gamma}\) with respect to time, we get
\[\partial_{t}\frac{1}{\gamma}u_{\gamma}=-\partial_{t}n_{\gamma}-\Delta\partial_{t}p_{\gamma}=-\partial_{t}n_{\gamma}-\Delta(|\nabla p_{\gamma}|^{2}-p_{\gamma}u_{\gamma}).\]
Expanding the Laplacian, we see that
\[\partial_{t}\frac{1}{\gamma}u_{\gamma}=-\partial_{t}n_{\gamma}-2|D^{2}p_{ \gamma}|^{2}-2\nabla\Delta p_{\gamma}\cdot\nabla p_{\gamma}+2\nabla p_{\gamma }\cdot\nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}+u_{\gamma}\Delta p_{\gamma}.\]
Noting that \(-\Delta p_{\gamma}=n_{\gamma}+\frac{1}{\gamma}u_{\gamma}\) we can rewrite the previous line as
\[\partial_{t}\frac{1}{\gamma}u_{\gamma}=2\nabla n_{\gamma}\cdot\nabla p_{\gamma }-\partial_{t}n_{\gamma}-2|D^{2}p_{\gamma}|^{2}+2(1+\frac{1}{\gamma})\nabla p_{ \gamma}\cdot\nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}-n_{\gamma}u_{ \gamma}-\frac{1}{\gamma}u_{\gamma}^{2}.\]
Hence, discarding the sign-favorable term \(-2|D^{2}p_{\gamma}|^{2}\), we can conclude that
\[n_{\gamma}u_{\gamma}+\frac{1}{\gamma}(\partial_{t}u_{\gamma}+u_{\gamma}^{2}) \leq 2\nabla n_{\gamma}\cdot\nabla p_{\gamma}-\partial_{t}n_{\gamma}+2(1+\frac{1}{ \gamma})\nabla p_{\gamma}\cdot\nabla u_{\gamma}+p_{\gamma}\Delta u_{\gamma}. \tag{3.5}\]
Now let \(f:\mathbb{R}\to\mathbb{R}\) be a \(C^{2}\) convex function such that \(f^{\prime}\geq 0\) everywhere and \(f=0\) on \((-\infty,0]\). If we integrate (3.5) against \(f^{\prime}(u_{\gamma})\) on \([0,\tau]\times\mathbb{R}^{d}\) we find that
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}n_{\gamma}u_{\gamma}f^{\prime}(u_{\gamma})+ \frac{1}{\gamma}u_{\gamma}^{2}f^{\prime}(u_{\gamma})\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}f^{\prime}(u_{\gamma})\big{(}2\nabla n_{ \gamma}\cdot\nabla p_{\gamma}-\partial_{t}n_{\gamma}\big{)}+2(1+\frac{1}{ \gamma})\nabla p_{\gamma}\cdot\nabla\big{(}f(u_{\gamma})\big{)}+p_{\gamma}f^{ \prime}(u_{\gamma})\Delta u_{\gamma}. \tag{3.6}\]
Noting that \(\Delta\big{(}f(u_{\gamma})\big{)}=f^{\prime}(u_{\gamma})\Delta u_{\gamma}+f^{ \prime\prime}(u_{\gamma})|\nabla u_{\gamma}|^{2}\), we can integrate by parts in (3.6) to obtain
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}n_{\gamma}u_{\gamma}f^{\prime}(u_{\gamma})+ \frac{1}{\gamma}u_{\gamma}^{2}f^{\prime}(u_{\gamma})+p_{\gamma}f^{\prime \prime}(u_{\gamma})|\nabla u_{\gamma}|^{2}\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}f^{\prime}(u_{\gamma})\big{(}2\nabla n_{ \gamma}\cdot\nabla p_{\gamma}-\partial_{t}n_{\gamma}\big{)}-(1+\frac{2}{ \gamma})f(u_{\gamma})\Delta p_{\gamma} \tag{3.7}\]
We then integrate by parts in \(\nabla n_{\gamma}\cdot\nabla p_{\gamma}\) to get
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}n_{\gamma}u_{\gamma}f^{\prime}(u_{\gamma}) +\frac{1}{\gamma}u_{\gamma}^{2}f^{\prime}(u_{\gamma})+p_{\gamma}f^{\prime \prime}(u_{\gamma})|\nabla u_{\gamma}|^{2}\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})- \int_{\mathbb{R}^{d}\times[0,\tau]}f^{\prime}(u_{\gamma})\big{(}2p_{\gamma} \Delta n_{\gamma}+\partial_{t}n_{\gamma}\big{)}+p_{\gamma}f^{\prime\prime}(u_ {\gamma})\nabla u_{\gamma}\cdot\nabla n_{\gamma}+(1+\frac{2}{\gamma})f(u_{ \gamma})\Delta p_{\gamma}. \tag{3.8}\]
Once again using \(-\Delta p_{\gamma}=n_{\gamma}+\frac{1}{\gamma}u_{\gamma}\) and using the quadratic Young's inequality on \(\nabla u_{\gamma}\cdot\nabla n_{\gamma}\) we find that
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}n_{\gamma}u_{\gamma}f^{\prime}(u_{\gamma}) +\frac{1}{\gamma}u_{\gamma}^{2}f^{\prime}(u_{\gamma})+\frac{1}{2}p_{\gamma}f^{ \prime\prime}(u_{\gamma})|\nabla u_{\gamma}|^{2}\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}(1+\frac{2}{\gamma})f(u_{\gamma})(n_{ \gamma}+\frac{1}{\gamma}u_{\gamma})-f^{\prime}(u_{\gamma})\big{(}2p_{\gamma} \Delta n_{\gamma}+\partial_{t}n_{\gamma}\big{)}+\frac{1}{2}p_{\gamma}f^{ \prime\prime}(u_{\gamma})|\nabla n_{\gamma}|^{2}. \tag{3.9}\]
Next, to help compare the left and right hand sides, we multiply and divide by multiples of \(n_{\gamma}\) to get
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}n_{\gamma}u_{\gamma}f^{\prime}(u_{\gamma}) +\frac{1}{\gamma}u_{\gamma}^{2}f^{\prime}(u_{\gamma})+\frac{1}{2}p_{\gamma}f^{ \prime\prime}(u_{\gamma})|\nabla u_{\gamma}|^{2}\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}(1+\frac{2}{\gamma})f(u_{\gamma})(n_{ \gamma}+\frac{1}{\gamma}u_{\gamma})-\frac{n_{\gamma}}{2}f^{\prime}(u_{\gamma}) \big{(}\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_{t}n_{\gamma}}{n_{\gamma} }\big{)}+n_{\gamma}f^{\prime\prime}(u_{\gamma})\frac{p_{\gamma}|\nabla n_{ \gamma}|^{2}}{2n_{\gamma}}. \tag{3.10}\]
Using the identity \(uf^{\prime}(u)-f(u)=f^{*}(f^{\prime}(u))\) and applying Young's inequality to \(-\frac{n_{\gamma}}{2}f^{\prime}(u_{\gamma})\big{(}\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_{t}n_{\gamma}}{n_{\gamma}}\big{)}\), we get
\[\int_{\mathbb{R}^{d}\times\{\tau\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}(n_{\gamma}+\frac{1}{\gamma}u_{\gamma})f^{ *}(f^{\prime}(u_{\gamma}))+\frac{1}{2}p_{\gamma}f^{\prime\prime}(u_{\gamma})| \nabla u_{\gamma}|^{2}\leq\\ \int_{\mathbb{R}^{d}\times\{0\}}\frac{1}{\gamma}f(u_{\gamma})+ \int_{\mathbb{R}^{d}\times[0,\tau]}\frac{2}{\gamma}f(u_{\gamma})(n_{\gamma}+ \frac{1}{\gamma}u_{\gamma})+\frac{n_{\gamma}}{2}f^{*}(f^{\prime}(u_{\gamma}))+ \frac{n_{\gamma}}{2}f\big{(}\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_{t}n_{ \gamma}}{n_{\gamma}}\big{)}+n_{\gamma}f^{\prime\prime}(u_{\gamma})\frac{p_{\gamma}| \nabla n_{\gamma}|^{2}}{2n_{\gamma}}, \tag{3.11}\]
which is finally in a form that will allow us to estimate.
Fix some \(b\leq\frac{b_{0}}{4\max\big{(}1,\sup_{\gamma}\|p_{\gamma}\|_{L^{\infty}([0,\tau]\times\mathbb{R}^{d})}\big{)}}\) where \(b_{0}\) is the constant from Lemma 3.2. If we choose \(f\) such that \(f\) grows like \(\exp(bu)\) at infinity, then \(f^{*}(f^{\prime}(u))\) grows like \(bu\exp(bu)\) at infinity, and hence \(f^{*}(f^{\prime}(u))\) dominates both \(f(u)\) and \(f^{\prime\prime}(u)\) at infinity. Since PME has finite speed of propagation (uniformly in \(\gamma\)) [13], there exists a radius \(R=R_{\tau}>0\) sufficiently large such that \((\rho_{\gamma},p_{\gamma})\) is supported in \(B_{R}\) independently of \(\gamma\). Recalling that \(u_{\gamma}=-\gamma(\Delta p_{\gamma}+n_{\gamma})\) and \(n_{\gamma}\geq 0\), so that \(u_{\gamma}\leq 0\) outside the support of \(p_{\gamma}\), it follows that \(f(u_{\gamma}),f^{\prime}(u_{\gamma}),f^{\prime\prime}(u_{\gamma})\) are all supported in \(B_{R}\) independently of \(\gamma\) as well. Since we are integrating functions with uniformly bounded support and \(f\big{(}\frac{4p_{\gamma}\Delta n_{\gamma}+2\partial_{t}n_{\gamma}}{n_{\gamma}}\big{)}\) is bounded in \(L^{1}\) by Lemma 3.2 for our choice of \(b\), it follows that the left-hand side of (3.11) dominates the right-hand side, and so the result follows.
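For a concrete choice of \(f\) with these properties (up to a \(C^{2}\) regularization near the origin), one may take
\[f(u)=\big(e^{bu}-bu-1\big)\chi_{\{u\geq 0\}},\qquad f^{\prime}(u)=b\big(e^{bu}-1\big)\chi_{\{u\geq 0\}},\]
for which a direct computation gives, for \(u\geq 0\),
\[f^{*}(f^{\prime}(u))=uf^{\prime}(u)-f(u)=(bu-1)e^{bu}+1,\]
which is exactly the quantity controlled in the statement of Proposition 3.3.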
It essentially immediately follows that \(p\) is a weak supersolution to the appropriate HJB equation.
**Corollary 3.4**.: _Given any \(L^{2}_{\rm loc}([0,\infty);L^{2}(\mathbb{R}^{d}))\) weak limit point \(u_{+}\) of the family \(u_{\gamma,+}\), the pressure \(p\) solves, in the sense of weak solutions,_
\[\partial_{t}p-|\nabla p|^{2}+u_{+}p\geq 0, \tag{3.12}\]
_where for any \(\tau>0\) there exists \(b=b(\tau,d)>0\) such that \((bu_{+}-1)e^{bu_{+}}+1\in L^{1}([0,\tau]\times\mathbb{R}^{d})\)._
Although we now know that \(p\) is a supersolution to an HJB equation, it is somewhat annoying to directly obtain the Hopf-Lax formula from (3.12), due to the fact that \(p\) is not continuous. Instead, we will work towards the Hopf-Lax formula by once again going through the \(\gamma\) limit. Here, we will still need to deal with the difficulty that \(u_{\gamma,+}\) is not uniformly bounded in \(L^{\infty}\). We proceed by adapting an argument from [10], which provides a method to obtain Hopf-Lax type formulas for Hamilton-Jacobi equations with unbounded coefficients. A key difference in our setting is that the right-hand side has the specific form \(p_{\gamma}u_{\gamma,+}\). This structure allows us to combine their approach with Gronwall-type estimates to obtain much stronger bounds.
**Lemma 3.5**.: _Choose a decreasing nonnegative function \(\lambda\in L^{1}([0,t_{1}-t_{0}])\). Given any \(\gamma\in(1,\infty)\) and any points \((x_{1},t_{1}),(x_{0},t_{0})\) with \(t_{0}<t_{1}\) there exists a constant \(C=C(t_{1},d)\) such that_
\[p_{\gamma}(x_{0},t_{0})\leq e^{\Lambda_{\gamma}(t_{1}-t_{0})}\Big{(}p_{\gamma }(x_{1},t_{1})+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{\Lambda_{ \gamma}(s)}\,ds}+C(t_{1}-t_{0})^{7/10}e^{-\lambda(t_{1}-t_{0})}\Big{)} \tag{3.13}\]
where \(b\) is the constant from Proposition 3.3 and
\[\Lambda_{\gamma}(t):=\frac{5}{4b}\int_{0}^{t}\lambda(a)\,da+\frac{1}{b}\int_{0 }^{t}\log(1+\|\mathrm{exp}(bu_{\gamma,+})-1\|_{L^{1}(\{t_{1}-a\}\times\mathbb{ R}^{d})})\,da.\]
Proof.: Define \(\varphi_{\gamma}(x,t):=p_{\gamma}(x,t_{1}-t)\). It then follows that \(\varphi_{\gamma}\) satisfies the differential inequality
\[\partial_{t}\varphi_{\gamma}(x,t)+|\nabla\varphi_{\gamma}(x,t)|^{2}\leq\varphi _{\gamma}(x,t)u_{\gamma,+}(x,t_{1}-t)\]
almost everywhere. Define
\[\bar{\lambda}_{\gamma}(s):=\frac{5}{4b}\lambda(s)+\frac{1}{b}\log(\|\mathrm{exp}(bu_{\gamma,+})-1\|_{L^{1}(\{t_{1}-s\}\times\mathbb{R}^{d})})\]
and split
\[\varphi_{\gamma}(x,t)u_{\gamma,+}(x,t_{1}-t)\leq\varphi_{\gamma}(x,t)\bar{ \lambda}_{\gamma}(t)+\varphi_{\gamma}(x,t)(u_{\gamma}(x,t_{1}-t)-\bar{ \lambda}_{\gamma}(t))_{+}\]
Multiplying both sides of the differential inequality by \(e^{-\Lambda_{\gamma}(t)}\) we see that
\[\partial_{t}(e^{-\Lambda_{\gamma}(t)}\varphi_{\gamma})+e^{-\Lambda_{\gamma}(t )}|\nabla\varphi_{\gamma}|^{2}\leq e^{-\Lambda_{\gamma}(t)}\varphi_{\gamma}(u _{\gamma}-\bar{\lambda}_{\gamma}(t))_{+}.\]
Setting \(q_{\gamma}(t,x):=e^{-\Lambda_{\gamma}(t)}\varphi_{\gamma}(t,x)\), we then have
\[\partial_{t}q_{\gamma}+e^{\Lambda_{\gamma}(t)}|\nabla q_{\gamma}|^{2}\leq q_ {\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma}(t))_{+}.\]
Fix any two points \(x_{1},x_{0}\in\mathbb{R}^{d}\). We now introduce a family of paths \(x_{\sigma}\) in the spirit of the path optimization argument introduced in [10]. For each \(\sigma\) in the unit ball \(B_{1}\) let \(x_{\sigma}:[0,t_{1}-t_{0}]\to\mathbb{R}^{d}\) be a path such that \(x_{\sigma}(0)=x_{1}\) and \(x_{\sigma}(t_{1}-t_{0})=x_{0}\). Consider
\[\frac{d}{dt}\big{[}q_{\gamma}(x_{\sigma}(t),t)-\frac{1}{4}\int_{0}^{t}e^{-\Lambda_{\gamma}(s)}|x_{\sigma}^{\prime}(s)|^{2}\,ds\big{]}=\partial_{t}q_{\gamma}(x_{\sigma}(t),t)+\nabla q_{\gamma}(x_{\sigma}(t),t)\cdot x_{\sigma}^{\prime}(t)-\frac{e^{-\Lambda_{\gamma}(t)}}{4}|x_{\sigma}^{\prime}(t)|^{2}\leq q_{\gamma}(x_{\sigma}(t),t)\big(u_{\gamma}(x_{\sigma}(t),t_{1}-t)-\bar{\lambda}_{\gamma}(t)\big)_{+}. \tag{3.14}\]
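Here we used the differential inequality for \(q_{\gamma}\) together with Young's inequality, \(\nabla q_{\gamma}\cdot x_{\sigma}^{\prime}\leq e^{\Lambda_{\gamma}(t)}|\nabla q_{\gamma}|^{2}+\frac{e^{-\Lambda_{\gamma}(t)}}{4}|x_{\sigma}^{\prime}|^{2}\).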
Thus,
\[q_{\gamma}(x_{0},t_{1}-t_{0})\leq q_{\gamma}(x_{1},0)+\frac{1}{4}\int_{0}^{t_{1 }-t_{0}}e^{-\Lambda_{\gamma}(s)}|x_{\sigma}^{\prime}(s)|^{2}+q_{\gamma}(x_{ \sigma}(s),s)(u_{\gamma}(x_{\sigma}(s),t_{1}-s)-\bar{\lambda}_{\gamma}(s))_{+ }\,ds,\]
It then follows that
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{1}{4}\int_{0}^{t_{1}-t_{0}}e^{-\Lambda_{ \gamma}(s)}\Big{(}|x_{\sigma}^{\prime}(s)|^{2}+\varphi_{\gamma}(s,x_{\sigma}(s ))(u_{\gamma}(t_{1}-s,x_{\sigma}(s))-\bar{\lambda}_{\gamma}(s))_{+}\Big{)}\,ds. \tag{3.15}\]
We now assume that \(x_{\sigma}\) has the form
\[x_{\sigma}(s)=\sigma\xi(s)+x_{0}+z(s)(x_{1}-x_{0}),\]
where \(\xi:[0,t_{1}-t_{0}]\to[0,1]\) satisfies \(\xi(0)=\xi(t_{1}-t_{0})=0\) and \(z:[0,t_{1}-t_{0}]\to[0,1]\) is a decreasing function such that \(z(0)=1\) and \(z(t_{1}-t_{0})=0\), so that \(x_{\sigma}(0)=x_{1}\) and \(x_{\sigma}(t_{1}-t_{0})=x_{0}\). For notational simplicity, we will write \(\alpha_{\gamma}=\varphi_{\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma})_{+}\). Averaging (3.15) over \(B_{1}\) we see that
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{1}{4|B_{1}|}\int_{0}^{t_{1}-t_{0}}\int_{B_{1} }e^{-\Lambda_{\gamma}(s)}\Big{(}|\sigma|^{2}|\xi^{\prime}(s)|^{2}+|x_{1}-x_{0} |^{2}z^{\prime}(s)^{2}+\alpha_{\gamma}(s,x_{\sigma}(s))\Big{)}\,d\sigma\,ds.\]
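Here we expanded \(|x_{\sigma}^{\prime}(s)|^{2}=|\sigma|^{2}|\xi^{\prime}(s)|^{2}+2\xi^{\prime}(s)z^{\prime}(s)\,\sigma\cdot(x_{1}-x_{0})+|x_{1}-x_{0}|^{2}|z^{\prime}(s)|^{2}\) and used that the cross term vanishes upon averaging, since \(\int_{B_{1}}\sigma\,d\sigma=0\).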
The optimality condition for \(z\) implies that \((z^{\prime}(s)e^{-\Lambda_{\gamma}(s)})^{\prime}=0\), therefore \(z^{\prime}(s)=-\frac{e^{\Lambda_{\gamma}(s)}}{\int_{0}^{t_{1}-t_{0}}e^{\Lambda_{\gamma}(a)}\,da}\). Thus, making this choice we see that
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{ \Lambda_{\gamma}(s)}\,ds}+\frac{1}{4|B_{1}|}\int_{0}^{t_{1}-t_{0}}\int_{B_{1} }e^{-\Lambda_{\gamma}(s)}\Big{(}|\sigma|^{2}|\xi^{\prime}(s)|^{2}+\alpha_{ \gamma}(s,x_{\sigma}(s))\Big{)}\,d\sigma\,ds.\]
Changing variables \(y=x_{\sigma}\), it follows that
\[\frac{1}{|B_{1}|}\int_{B_{1}}\alpha_{\gamma}(s,x_{\sigma}(s))d\sigma=\frac{ \xi(s)^{-d}}{|B_{1}|}\int_{B_{\xi(s)}(x_{0}+z(s)(x_{1}-x_{0}))}\alpha_{\gamma} (s,y)dy\]
where \(B_{\xi(s)}(x_{0}+z(s)(x_{1}-x_{0}))\) is the ball of radius \(\xi(s)\) centered at \(x_{0}+z(s)(x_{1}-x_{0})\).
Using Holder's inequality with exponent \(2d\), it follows that the above quantity is bounded above by \(\xi(s)^{-1/2}\|\alpha_{\gamma}\|_{L^{2d}(\{s\}\times\mathbb{R}^{d})}.\) Hence, after dropping the good term \(e^{-\Lambda_{\gamma}(s)}\) in the last integral we see that
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{ \Lambda_{\gamma}(s)}\,ds}+\frac{1}{4}\int_{0}^{t_{1}-t_{0}}\left(|\xi^{\prime }(s)|^{2}+\xi^{-1/2}(s)\|\alpha_{\gamma}(s,\cdot)\|_{L^{2d}(\mathbb{R}^{d})} \right)ds.\]
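(In the Hölder step we used \(\int_{B_{\xi(s)}}\alpha_{\gamma}\,dy\leq\|\alpha_{\gamma}\|_{L^{2d}(\{s\}\times\mathbb{R}^{d})}|B_{\xi(s)}|^{1-\frac{1}{2d}}\) together with the identity \(\xi(s)^{-d}|B_{\xi(s)}|^{1-\frac{1}{2d}}=|B_{1}|^{1-\frac{1}{2d}}\xi(s)^{-1/2}\).)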
Fix some \(a>0\) and set
\[\xi(s):=\begin{cases}as^{3/4}&\text{if }\,0\leq s<(t_{1}-t_{0})/2,\\ a(t_{1}-t_{0}-s)^{3/4}&\text{if }\,(t_{1}-t_{0})/2\leq s\leq t_{1}-t_{0}.\end{cases}\]
Using Holder's inequality with exponent \(2\) on \(\int_{0}^{t_{1}-t_{0}}\xi^{-1/2}(s)\|\alpha_{\gamma}(s,\cdot)\|_{L^{2d}(\mathbb{R}^{d})}\,ds\), we see that
\[\|\xi^{\prime}\|_{L^{2}([0,t_{1}-t_{0}])}^{2}\leq C(t_{1}-t_{0})^{1/2}a^{2},\quad\|\xi^{-1/2}\|_{L^{2}([0,t_{1}-t_{0}])}\leq Ca^{-1/2}(t_{1}-t_{0})^{1/8}\]
thus,
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{ \Lambda_{\gamma}(s)}\,ds}+C((t_{1}-t_{0})^{1/8}a^{-1/2}\|\alpha_{\gamma}\|_{L^ {2}([0,t_{1}-t_{0}];L^{2d}(\mathbb{R}^{d}))}+(t_{1}-t_{0})^{1/2}a^{2})\]
Optimizing over \(a>0\), we obtain
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{ \Lambda_{\gamma}(s)}\,ds}+C(t_{1}-t_{0})^{\frac{3}{10}}\|\alpha_{\gamma}\|_{L ^{2}([0,t_{1}-t_{0}];L^{2d}(\mathbb{R}^{d}))}^{4/5} \tag{3.16}\]
for a potentially different constant \(C>0\).
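For the reader's convenience, the optimization used here is the elementary fact that for \(A,B>0\),
\[\inf_{a>0}\big(Aa^{-1/2}+Ba^{2}\big)=5\cdot 4^{-4/5}A^{4/5}B^{1/5},\]
attained at \(a=(A/4B)^{2/5}\).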
Finally, it remains to estimate \(\|\alpha_{\gamma}\|_{L^{2}([0,t_{1}-t_{0}];L^{2d}(\mathbb{R}^{d}))}\). Recalling that \(\alpha_{\gamma}=\varphi_{\gamma}(u_{\gamma}-\bar{\lambda}_{\gamma})_{+}\), we may write
\[\|\alpha_{\gamma}\|_{L^{2d}(\{s\}\times\mathbb{R}^{d})}^{2d}\leq\|\varphi_{\gamma}\|_{L^{\infty}([0,t_{1}-t_{0}]\times\mathbb{R}^{d})}^{2d}\int_{0}^{\infty}2dv^{2d-1}|\{x\in\mathbb{R}^{d}:u_{\gamma,+}(t_{1}-s,x)>v+\bar{\lambda}_{\gamma}(s)\}|\,dv\]
By Chebyshev's inequality, for any strictly increasing function \(f:\mathbb{R}\to\mathbb{R}\),
\[\leq\|p_{\gamma}\|_{L^{\infty}([t_{0},t_{1}]\times\mathbb{R}^{d})}^{2d}\int_{0}^{\infty}2dv^{2d-1}\frac{\|f(u_{\gamma,+})-f(0)\|_{L^{1}(\{t_{1}-s\}\times\mathbb{R}^{d})}}{f(\bar{\lambda}_{\gamma}(s)+v)-f(0)}\,dv\]
If we choose \(f(a)=\exp(ba)-1\), then we see that
\[\|\alpha_{\gamma}\|_{L^{2d}(\{s\}\times\mathbb{R}^{d})}\leq Ce^{-b\bar{\lambda}_{\gamma}(s)}\|\exp(bu_{\gamma,+})-1\|_{L^{1}(\{t_{1}-s\}\times\mathbb{R}^{d})}=Ce^{-\frac{5}{4}\lambda(s)},\]
where one should note carefully that \(\bar{\lambda}_{\gamma}\) has now been replaced by an explicit multiple of \(\lambda\) in the last right-hand term. Thus, we have the estimate
\[\|\alpha_{\gamma}\|_{L^{2}([0,t_{1}-t_{0}];L^{2d}(\mathbb{R}^{d}))}^{2}\lesssim\int_{0}^{t_{1}-t_{0}}e^{-\frac{5}{2}\lambda(s)}\,ds\leq(t_{1}-t_{0})e^{-\frac{5}{2}\lambda(t_{1}-t_{0})},\]
hence,
\[\|\alpha_{\gamma}\|_{L^{2}([0,t_{1}-t_{0}];L^{2d}(\mathbb{R}^{d}))}^{4/5}\lesssim(t_{1}-t_{0})^{2/5}e^{-\lambda(t_{1}-t_{0})}.\]
Combining our work, we now have
\[\varphi_{\gamma}(x_{0},t_{1}-t_{0})e^{-\Lambda_{\gamma}(t_{1}-t_{0})}\leq \varphi_{\gamma}(x_{1},0)+\frac{|x_{1}-x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{ \Lambda_{\gamma}(s)}\,ds}+C(t_{1}-t_{0})^{7/10}e^{-\lambda(t_{1}-t_{0})}, \tag{3.17}\]
for some possibly new constant \(C\). The result follows after replacing \(\varphi_{\gamma}\) with \(p_{\gamma}\) and multiplying both sides by \(e^{\Lambda_{\gamma}(t_{1}-t_{0})}\).
Before we can show that the Hopf-Lax formula also holds for the limiting pressure, we first need a Lemma that gives us a pointwise well-defined representative of our weak solution \(p\). The argument is a simple adaptation of a result from [10].
**Lemma 3.6**.: _Suppose that \((\rho,p,n)\) is a weak solution to (1.1-1.2). \(p\) can be redefined on a set of measure zero so that_
\[p(x,t)=\lim_{r\to 0}\frac{1}{r^{2}|B_{r}|}\int_{B_{r}(x)}\int_{t}^{t+r^{2}}p(y,s )\,ds\,dy \tag{3.18}\]
_for all \((x,t)\). With this definition, \(p\) is spacetime upper semicontinuous and for all \(x\in\mathbb{R}^{d}\) the mapping \(t\mapsto p(x,t)\) is continuous from the right._
Proof.: From our control on \(u_{\gamma}\) and the relation \(\Delta p_{\gamma}=-n_{\gamma}-\frac{1}{\gamma}u_{\gamma}\) it follows that after taking limits, we have
\[\Delta p\geq-n\geq-n_{0}\]
in the sense of spacetime distributions. Therefore, for any \(\epsilon>0\),
\[\Delta\Big{(}\frac{1}{\epsilon^{2}}\int_{0}^{\epsilon^{2}}p(\cdot,t+s)\,ds \Big{)}\geq-n_{0}\]
in the sense of space distributions. Hence, the mean value property for Laplace's equation implies that for all \(x\in\mathbb{R}^{d}\) and \(t>0\) the function
\[\phi(r,\epsilon):=\frac{1}{\epsilon^{2}|B_{r}|}\int_{B_{r}(x)}\int_{0}^{ \epsilon^{2}}p(y,t+s)+\frac{n_{0}}{2d}|y-x|^{2}\,ds\,dy\]
is non-decreasing with respect to \(r\). We also note that for \(0\leq r^{\prime}\leq r\) we have
\[\phi(r,r)-\phi(r,r^{\prime})=\frac{1}{r^{2}|B_{r}|}\int_{B_{r}(x)}\int_{0}^{r^ {2}}p(y,t+s)-p(y,t+(\frac{r^{\prime}}{r})^{2}s)\,ds\,dy\]
\[\geq-\frac{1}{r^{2}|B_{r}|}\int_{B_{r}(x)}\int_{0}^{r^{2}}\int_{(\frac{r^{\prime}}{r})^{2}s}^{s}[\partial_{t}p(y,t+a)]_{-}\,da\,ds\,dy.\]
From Corollary 3.4, it follows that \([\partial_{t}p]_{-}\) is bounded in \(L^{q}(\mathbb{R}^{d}\times[0,\tau])\) for any \(q\in[1,\infty)\). Therefore,
\[\phi(r,r)-\phi(r,r^{\prime})\geq-|B_{r}|^{-1/q}(r^{2}-r^{\prime 2})^{1-1/q}\|[ \partial_{t}p]_{-}\|_{L^{q}([0,\tau]\times\mathbb{R}^{d})}\geq-Cr^{1-(d+1)/q} (r-r^{\prime})^{1-1/q}\]
for some constant \(C>0\).
By choosing \(q>d+1\), we can conclude that there exists a Holder continuous function \(g\) such that \(r\mapsto\phi(r,r)+g(r)\) is nondecreasing and \(g(0)=0\). As a result, \(\lim_{r\to 0^{+}}\phi(r,r)\) must exist for all \((x,t)\). Hence, (3.18) is well defined everywhere. The Lebesgue differentiation theorem also implies that our redefinition only changes \(p\) on a set of measure zero.
Finally, to see that \(p\) is upper semicontinuous, we note that \(\lim_{r\to 0}\phi(r,r)=\lim_{r\to 0}\phi(r,r)+g(r)=\inf_{r>0}\phi(r,r)+g(r)\). Thus, we may write
\[p(x,t)=\inf_{r>0}g(r)+\frac{1}{r^{2}|B_{r}|}\int_{B_{r}(x)}\int_{t}^{t+r^{2}}p(y,s)+\frac{n_{0}}{2d}|y-x|^{2}\,ds\,dy.\]
The infimum over a family of functions always produces an upper semicontinuous function, hence, \(p\) is upper semicontinuous.
At last we obtain the main result of this Section, the Hopf-Lax formula for our limit pressure \(p\).
**Proposition 3.7**.: _Given any points \((x_{1},t_{1}),(x_{0},t_{0})\) with \(t_{0}<t_{1}\) and a decreasing function \(\lambda\in L^{1}([0,t_{1}-t_{0}])\), there exists a constant \(C=C(t_{1},d)\) such that_
\[p(x_{0},t_{0})\leq e^{\Lambda(t_{1}-t_{0})}\Big{(}p(x_{1},t_{1})+\frac{|x_{1}- x_{0}|^{2}}{4\int_{0}^{t_{1}-t_{0}}e^{\Lambda(s)}\,ds}+C(t_{1}-t_{0})^{7/10}e^{- \lambda(t_{1}-t_{0})}\Big{)} \tag{3.19}\]
where \(b\) is the constant from Proposition 3.3 and
\[\Lambda(t):=\frac{5}{4b}\int_{0}^{t}\lambda(s)\,ds+\frac{t}{b}\log(1+\frac{C} {t})\]
Proof.: Using the formula from Lemma 3.6, we have
\[p(x_{0},t_{0})=\lim_{r\to 0^{+}}\frac{1}{r^{2}|B_{r}|}\int_{B_{r}(x_{0})}\int_{t_{0}}^{t_{0}+r^{2}}p(y,s)\,ds\,dy.\]
Choose a point \(x_{2}\in\mathbb{R}^{d}\) and a time \(t_{2}>t_{0}\) such that \(p_{\gamma}(x_{2},t_{2})\) converges to \(p(x_{2},t_{2})\) along some subsequence \(\gamma_{k}\). Using the \(L_{t}^{2}H_{x}^{1}\) strong convergence of \(p_{\gamma}\) to \(p\) and then applying Lemma 3.5, we have
\[p(x_{0},t_{0})=\lim_{r\to 0^{+}}\lim_{k\to\infty}\frac{1}{r^{2}|B_{r}|} \int_{B_{r}(x_{0})}\int_{t_{0}}^{t_{0}+r^{2}}p_{\gamma_{k}}(y,s)\,ds\,dy\leq\] \[\lim_{r\to 0^{+}}\lim_{k\to\infty}\frac{1}{r^{2}|B_{r}|}\int_{B_{r} (x_{0})}\int_{t_{0}}^{t_{0}+r^{2}}e^{\Lambda_{\gamma_{k}}(t_{2}-s)}\Big{(}p_{ \gamma_{k}}(x_{2},t_{2})+\frac{|x_{2}-x_{0}|^{2}}{4\int_{0}^{t_{2}-s}e^{ \Lambda_{\gamma_{k}}(a)}\,da}+C(t_{2}-t_{0})^{7/10}e^{-\lambda(t_{2}-s)}\Big{)} \,dy\,ds.\]
Recall that
\[\Lambda_{\gamma}(t):=\frac{5}{4b}\int_{0}^{t}\lambda(a)\,da+\frac{1}{b}\int_{0 }^{t}\log(1+\|\mathrm{exp}(bu_{\gamma,+})-1\|_{L^{1}(\{t_{1}-a\}\times\mathbb{ R}^{d})})\,da.\]
Applying Jensen's inequality, we have the bound
\[\Lambda_{\gamma}(t)\leq\frac{5}{4b}\int_{0}^{t}\lambda(a)\,da+\frac{t}{b}\log (1+\frac{1}{t}\|\mathrm{exp}(bu_{\gamma,+})-1\|_{L^{1}([t_{1}-t,t_{1}]\times \mathbb{R}^{d})}).\]
Hence, we can find a potentially new constant \(C=C(\tau,d)>0\) such that
\[\Lambda_{\gamma}(t)\leq\frac{5}{4b}\int_{0}^{t}\lambda(a)\,da+\frac{t}{b}\log (1+\frac{C}{t})=\Lambda(t).\]
for all \(\gamma\). Therefore,
\[p(x_{0},t_{0})\leq e^{\Lambda(t_{2}-t_{0})}\Big{(}p(x_{2},t_{2})+\frac{|x_{2}- x_{0}|^{2}}{4\int_{0}^{t_{2}-t_{0}}e^{\Lambda(s)}\,ds}+C(t_{2}-t_{0})^{7/10}e^{- \lambda(t_{2}-t_{0})}\Big{)}.\]
Since \(p_{\gamma}\) converges pointwise almost everywhere to \(p\) along appropriate subsequences, it follows that
\[p(x_{0},t_{0})\leq e^{\Lambda(t_{2}-t_{0})}\Big{(}p(x_{2},t_{2})+\frac{|x_{2}- x_{0}|^{2}}{4\int_{0}^{t_{2}-t_{0}}e^{\Lambda(a)}\,da}+C(t_{2}-t_{0})^{7/10}e^{- \lambda(t_{2}-t_{0})}\Big{)}.\]
for a dense set of \((x_{2},t_{2})\) with \(t_{2}>t_{0}\). The result now follows from the upper semicontinuity of \(p\).
## 4. Holder Continuity of the Hitting Time
We are now going to construct a radial supersolution which will give an upper bound on the tumor's rate of expansion, and thus a lower bound on the arrival times. Given a point of interest \(x_{0}\in\mathbb{R}^{d}\), our supersolution will be defined on the time-dependent annulus
\[A(t):=\{t\}\times\{x:r(t)\leq|x-x_{0}|\leq mr(t)\},\quad A=\bigcup_{t\in[0, \epsilon]}A(t),\]
for some \(m>1\) and a function \(r(t)\geq 0\) that we will define shortly. Given some starting time \(t_{0}\), for each \(t,r\geq 0\) we define
\[\bar{p}(t,r):=\sup_{x\in B_{r}(x_{0})}p(t+t_{0},x),\]
where the sup is well defined since \(p\) is upper semicontinuous in space.
Now we will construct our supersolution \(\psi(t,x)\) by solving
\[\begin{cases}-\Delta\psi(t,x)=\bar{n}_{0}&\text{if }r(t)<|x-x_{0}|<mr(t),\\ \psi(t,x)=0&\text{if }|x-x_{0}|\leq r(t),\\ \psi(t,x)=\bar{p}(t,|x-x_{0}|)&\text{if }|x-x_{0}|\geq mr(t).\end{cases}\]
On \(A(t)\), the equation admits the explicit radial solution
\[\psi(t,x)=h(t)\Gamma_{d}(|x-x_{0}|)-\frac{\bar{n}_{0}}{2d}|x-x_{0}|^{2}+g(t), \tag{4.1}\]
where \(\Gamma_{d}\) is the fundamental solution of the Laplace equation in dimension \(d\), i.e. \(\Gamma_{d}^{\prime}(r)=r^{1-d}\),
\[h(t):=\frac{\bar{p}(t,mr(t))+(2d)^{-1}\bar{n}_{0}(m^{2}-1)r(t)^{2}}{\Gamma_{d}(mr(t))-\Gamma_{d}(r(t))}, \tag{4.2}\]
and
\[g(t):=\frac{\bar{n}_{0}}{2d}r(t)^{2}-h(t)\Gamma_{d}(r(t)). \tag{4.3}\]
Finally, we define \(r(t)\) by choosing some initial data \(r(0)\) and then solving the ODE
\[r^{\prime}(t)=-|\nabla\psi(t,y)| \tag{4.4}\]
where the right hand side is evaluated at any point \(y\) such that \(|y-x_{0}|=r(t)\).
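By (4.1), the right-hand side can be written explicitly: \(r^{\prime}(t)=-\big|h(t)\Gamma_{d}^{\prime}(r(t))-\frac{\bar{n}_{0}}{d}r(t)\big|\).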
We now show that \(\psi\) is indeed a supersolution as long as comparison holds at initial time. Due to the lack of regularity for the pressure variable, we establish comparison using the time integrated versions of \(\psi\) and \(p\). This creates an annoying issue where it is difficult to establish that the boundary data stays ordered as the annulus moves. To avoid this problem, we establish comparison by first going through a sequence of supersolutions \(\psi_{k}\), where the \(\psi_{k}\) are defined on modified annuli whose outer radii are taken to be piecewise constant in time.
**Lemma 4.1**.: _Let \(\mu(t,x)\) be the characteristic function of the set \(\{x\in\mathbb{R}^{d}:|x-x_{0}|\geq r(t)\}\). If \(\rho(t_{0},x)\leq\mu(0,x)\) for almost every \(x\in\mathbb{R}^{d}\), then \(p(t_{0}+t,x)\leq\psi(t,x)\) for almost every \(x\in\mathbb{R}^{d}\) and almost every time \(t\geq 0\)._
Proof.: As we noted above, we will first prove the comparison for a modified sequence of supersolutions \(\psi_{k}\). The \(\psi_{k}\) will be defined in precisely the same way as \(\psi\), except that we will modify the construction of the moving annulus. Hence, given radii \(r_{k}(t)<R_{k}(t)\), we define \(\psi_{k}\) by solving
\[\begin{cases}-\Delta\psi_{k}(t,x)=\bar{n}_{0}&\text{if }r_{k}(t)<|x-x_{0}|<R_{k}(t), \\ \psi_{k}(t,x)=0&\text{if }|x-x_{0}|\leq r_{k}(t),\\ \psi_{k}(t,x)=\bar{p}(t,|x-x_{0}|)&\text{if }|x-x_{0}|\geq R_{k}(t).\end{cases}\]
\(r_{k}(t)\) will be defined as before via the ODE \(r_{k}^{\prime}(t)=-|\nabla\psi_{k}(t,y)|\) where \(y\) is any point satisfying \(|y-x_{0}|=r_{k}(t)\). We then define \(R_{k}\) by setting
\[R_{k}(t):=mr_{k}(t_{k,j}),\quad\text{if }t\in[t_{k,j},t_{k,j+1}),\]
where we inductively define the points \(t_{k,j}\) by setting \(t_{k,0}=0\) and then taking
\[t_{k,j+1}:=\inf\{t\geq t_{k,j}:r_{k}(t)<(1-\frac{1}{k+1})r_{k}(t_{k,j})\}.\]
As before, on the annulus \(r_{k}(t)\leq|x-x_{0}|\leq R_{k}(t)\), the \(\psi_{k}\) will admit the explicit radial solutions
\[\psi_{k}(t,x)=h_{k}(t)\Gamma_{d}(|x-x_{0}|)-\frac{\bar{n}_{0}}{2d}|x-x_{0}|^{2}+g_{k}(t), \tag{4.5}\]
where
\[h_{k}(t):=\frac{\bar{p}(t,R_{k}(t))+(2d)^{-1}\bar{n}_{0}(R_{k}(t)^{2}-r_{k}(t)^{2})}{\Gamma_{d}(R_{k}(t))-\Gamma_{d}(r_{k}(t))}, \tag{4.6}\]
and
\[g_{k}(t):=\frac{\bar{n}_{0}}{2d}r_{k}(t)^{2}-h_{k}(t)\Gamma_{d}(r_{k}(t)). \tag{4.7}\]
Let \(\Psi_{k}(t,x)=\int_{0}^{t}\psi_{k}(s,x)\,ds\). Since \(\psi_{k}\) is clearly Lipschitz in space on \(|x-x_{0}|\leq R_{k}(t)\), it follows that \(\Psi_{k}\) is Lipschitz in space on \(|x-x_{0}|\leq R_{k}(t)\) and
\[\nabla\Psi_{k}(t,x)=\int_{0}^{t}\nabla\psi_{k}(s,x)\,ds\]
almost everywhere on \(|x-x_{0}|\leq R_{k}(t)\). Define \(\tilde{t}_{k}(r)\) to be the inverse function of \(r_{k}(t)\). From the definition of \(\psi_{k}\), it follows that
\[\nabla\Psi_{k}(t,x)=\int_{\min(t,\tilde{t}_{k}(|x-x_{0}|))}^{t}\nabla\psi_{k}( s,x)\,ds.\]
Now if \(x\) is a point such that \(|x-x_{0}|<R_{k}(t)\) and \(|x-x_{0}|<r(0)\), then for each fixed \(s\in(\tilde{t}(|x-x_{0}|),t]\), there exists a neighborhood of \(x\) such that \(\nabla\psi_{k}(s,x)\) is differentiable and \(-\Delta\psi_{k}(s,x)=\bar{n}_{0}\). Thus, it follows that
\[-\Delta\Psi_{k}(t,x)=\operatorname{sgn}_{+}\big{(}t-\tilde{t}(|x-x_{0}|) \big{)}\tilde{t}^{\prime}(|x-x_{0}|)|\nabla\psi_{k}(\tilde{t}(|x-x_{0}|),x)|+ \int_{\min(t,\tilde{t}_{k}(|x-x_{0}|))}^{t}\bar{n}_{0}\,ds.\]
Since \(r_{k}^{\prime}(\tilde{t}_{k}(|x-x_{0}|))=-|\nabla\psi(\tilde{t}(|x-x_{0}|),x)|\) and \(\tilde{t}_{k}(r)\) is the inverse of \(r_{k}(t)\), we see that
\[-\Delta\Psi_{k}(t,x)=-\operatorname{sgn}_{+}\big{(}t-\tilde{t}_{k}(|x-x_{0}|)\big{)}+(t-\tilde{t}_{k}(|x-x_{0}|))_{+}\bar{n}_{0}.\]
On the other hand, if \(x\) is a point such that \(r(0)<|x-x_{0}|<R_{k}(t)\), then
\[-\Delta\Psi_{k}(t,x)=t\bar{n}_{0}.\]
Let \(\mu_{k}(t,x)\) be the characteristic function of the set \(\{(t,x):|x-x_{0}|\geq r_{k}(t)\}\) and note that \(\mu_{k}(t,x)=\operatorname{sgn}_{+}(t-\tilde{t}_{k}(|x-x_{0}|))\) and \(\int_{0}^{t}\mu_{k}(s,x)\,ds=(t-\tilde{t}_{k}(|x-x_{0}|))_{+}.\) Combining our work from above, we can conclude that for almost every \(x\) satisfying \(|x-x_{0}|<R_{k}(t)\) we have
\[-\Delta\Psi_{k}(t,x)=\mu_{k}(0,x)-\mu_{k}(t,x)+\int_{0}^{t}\mu_{k}(s,x)\bar{n}_{0}\,ds,\]
and for almost all \(x\in\mathbb{R}^{d}\) we have \(\Psi_{k}(t,x)(1-\mu_{k}(t,x))=0\), as well as \(\psi_{k}(1-\mu_{k}(t,x))=0\).
Now let us define the time shifted variables \(\tilde{w}(t,x):=\int_{0}^{t}p(s+t_{0},x)\,ds\), \(\tilde{\rho}(t,x):=\rho(t_{0}+t,x)\), and \(\tilde{n}(t,x):=n(t_{0}+t,x)\). It then follows that \(-\Delta\tilde{w}(t,x)=\tilde{\rho}(0,x)-\tilde{\rho}(t,x)+\int_{0}^{t}\tilde{\rho}(s,x)\tilde{n}(s,x)\,ds\) and \((1-\tilde{\rho}(t,x))\tilde{w}(t,x)=0\) almost everywhere. For any time \(t\in[0,t_{k,1})\), the definition of \(\psi_{k}\) guarantees that \(\Psi_{k}(t,x)\geq\tilde{w}(t,x)\) for all \(x\) satisfying \(|x-x_{0}|=R_{k}(t)=mr(0)\). Hence, for any \(t\in[0,t_{k,1})\) and any increasing \(C^{1}\) function \(\eta:\mathbb{R}\to\mathbb{R}\) such that \(\eta(a)=0\) if \(a\leq 0\), we have
\[\int_{\{|x-x_{0}|\leq R_{k}(0)\}}(\tilde{\rho}-\mu_{k})\eta(\tilde{w}-\Psi_{k}) +\eta^{\prime}(\tilde{w}-\Psi_{k})|\nabla(\tilde{w}-\Psi_{k})|^{2}\leq\int_{\{| x-x_{0}|\leq R_{k}(0)\}}\int_{0}^{t}\eta(\tilde{w}-\Psi_{k})(\tilde{\rho}\tilde{n}-\mu_{k} \bar{n}_{0})\]
Letting \(\eta\) approach \(\operatorname{sgn}_{+}\) and using the fact that \(\operatorname{sgn}_{+}(\tilde{w}-\Psi_{k})=\operatorname{sgn}_{+}(\tilde{\rho}- \mu_{k})\), we can conclude that
\[\int_{\{|x-x_{0}|\leq R_{k}(0)\}}(\tilde{\rho}-\mu_{k})_{+}\leq\int_{\{|x-x_{0}| \leq R_{k}(0)\}}\bar{n}_{0}\int_{0}^{t}(\tilde{\rho}-\mu_{k})_{+}.\]
Hence, Gronwall's inequality now implies that \(\tilde{\rho}(t,x)\leq\mu_{k}(t,x)\) for all \(t\in[0,t_{k,1})\) and almost all \(x\in\mathbb{R}^{d}\) (recall it is immediate that \(\tilde{\rho}\leq\mu_{k}\) on \(|x-x_{0}|\geq R_{k}(0)\) from the definition of \(\mu_{k}\)). The masses of the differences \(\mu_{k}(t,x)-\mu_{k}(0,x)\) and \(\tilde{\rho}(t,x)-\tilde{\rho}(0,x)\) are continuous functions of time, therefore, the ordering \(\tilde{\rho}\leq\mu_{k}\) must hold at time \(t_{k,1}\). This allows us to run the above argument on \([t_{k,1},t_{k,2})\). Iterating, we conclude that the ordering \(\tilde{\rho}\leq\mu_{k}\) must hold for all times \(t\) when \(r_{k}(t)>0\).
Now we wish to argue that \(\liminf_{k\to\infty}r_{k}(t)\geq r(t)\). Let
\[t_{*}=\inf\{t>0:\liminf_{k\to\infty}r_{k}(t)<r(t)\},\]
and note that \(\liminf_{k\to\infty}r_{k}(t_{*})=r(t_{*})\). Using the explicit formulas (4.1) and (4.5), as well as the upper semicontinuity of \(r\mapsto\bar{p}(t,r)\), it follows that
\[\liminf_{k\to\infty}r_{k}^{\prime}(t_{*})\geq r^{\prime}(t_{*})\]
whenever \(r(t_{*})>0\). Hence, \(r(t)\leq\liminf_{k\to\infty}r_{k}(t)\) for all times where \(r(t)>0\). This implies that \(\tilde{\rho}(t,x)\leq\mu(t,x)\) for all \(t\) and almost all \(x\).
Finally, we note that the ordering \(\tilde{\rho}(t,x)\leq\mu(t,x)\) implies that for almost every time \(t\)
\[(p-\psi)_{+}(\Delta p+n)=0,\quad(p-\psi)_{+}(\Delta\psi+\bar{n}_{0})=0.\]
distributionally. Thus, for any \(T>0\)
\[\int_{Q_{T}}|\nabla(p-\psi)_{+}|^{2}=\int_{Q_{T}}(p-\psi)_{+}(n-\bar{n}_{0})\]
which, since \(n\leq\bar{n}_{0}\), has nonpositive right-hand side; this is only possible if \((p-\psi)_{+}=0\) almost everywhere.
We can now use this barrier supersolution to get bounds on the Holder continuity of the hitting time. The key is to use our Hopf-Lax estimate from Proposition 3.7 to ensure that the supersolution arrives at the point of interest at the correct time.
**Theorem 4.2**.: \(T\) _is locally Holder continuous on the set \(\{x\in\mathbb{R}^{d}:0<T(x)<\infty\}\). In particular, for any \(x_{1}\in\mathbb{R}^{d}\) such that \(T(x_{1})\in(0,\infty)\), we have_
\[\sup_{y\in B_{R}(x_{1})}T(x_{1})-T(y)\lesssim R^{\alpha_{d}} \tag{4.8}\]
_for all \(R>0\) sufficiently small, where_
\[\alpha_{d}:=\begin{cases}\frac{2}{e}&\text{if }d=2,\\ 2(\frac{2}{d})^{\frac{d}{d-2}}&\text{if }d>2.\end{cases} \tag{4.9}\]
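For orientation, (4.9) gives \(\alpha_{2}=2/e\approx 0.74\), \(\alpha_{3}=16/27\), \(\alpha_{4}=1/2\), and \(\alpha_{d}\sim 4/d\) as \(d\to\infty\).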
Proof.: Let \(\epsilon>0\) be a small value that we will choose later. Let
\[\delta=\delta(\epsilon):=\inf\{R>0:\sup_{y\in B_{R}(x_{1})}T(x_{1})-T(y)\geq \epsilon\}.\]
Since \(T\) is continuous at \(x_{1}\), it follows that \(\lim_{\epsilon\to 0}\delta(\epsilon)=0\). Let \(t_{0}=T(x_{1})-\epsilon\) and \(t_{1}=T(x_{1})\). Thanks to the supersolution constructed above, centered at \(x_{1}\) with initial radius \(r(0)=\delta\) (which is admissible, as the patch has not reached \(B_{\delta}(x_{1})\) at time \(t_{0}\)), we know that
\[\inf_{y\in B_{r(t)}(x_{1})}T(y)\geq t_{0}+t,\]
which implies
\[\sup_{y\in B_{r(t)}(x_{1})}T(x_{1})-T(y)\leq(t_{1}-t_{0}-t).\]
Hence, if we can provide lower bounds on \(r(t)\) in terms of \(t\), we can get a Holder estimate for \(T\) at \(x_{1}\). In particular, a bound of the form \((t_{1}-t_{0}-t)^{1/\alpha}\lesssim r(t)\) will imply that \(\sup_{y\in B_{R}(x_{1})}T(x_{1})-T(y)\lesssim R^{\alpha}\).
To bound \(r(t)\) from below, we must consider the ODE (4.4), which can be bounded from below as
\[r^{\prime}(t)\geq-|h(t)||\Gamma_{d}^{\prime}(r(t))|-\frac{\bar{n}_{0}}{d}r(t).\]
Noting that in any dimension there exists a function \(\xi_{d}(m)\) such that \(\frac{|\Gamma^{\prime}_{d}(r(t))|}{|\Gamma_{d}(mr(t))-\Gamma_{d}(r(t))|}=r(t)^{-1}\xi_{d}(m)\), it follows from the structure of \(h\) and the ODE that there exists some constant \(K>0\) such that
\[r^{\prime}(t)+Kr(t)\geq-\frac{\bar{p}(t,mr(t))\xi_{d}(m)}{r(t)}. \tag{4.10}\]
Now we want to estimate \(\bar{p}(t,mr(t))\). To do so, we will apply the Hopf-Lax bound of Proposition 3.7, choosing to evaluate at \((x_{1},t_{1})\) and leaving the choice of \(\lambda\in L^{1}([0,t_{1}-t_{0}])\) until later. With these choices, we see that
\[\begin{array}{rcl}\bar{p}(t,mr(t))&=&\sup_{x\in B_{mr(t)}(x_{1})}p(t+t_{0},x) \\ &\leq&\sup_{x\in B_{mr(t)}(x_{1})}H(t)|x-x_{1}|^{2}+F(t)=m^{2}r^{2}H(t)+F(t), \end{array}\]
where we have defined
\[H(t):=e^{\Lambda(t_{1}-t_{0}-t)}(4\int_{0}^{t_{1}-t_{0}-t}e^{\Lambda(s)}\,ds )^{-1},\quad F(t):=C(t_{1}-t_{0}-t)^{7/10}e^{-\lambda(t_{1}-t_{0}-t)+\Lambda(t _{1}-t_{0}-t)} \tag{4.11}\]
for notational convenience.
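In applying (3.19) we also used that \(p(x_{1},t_{1})=0\): since \(t_{1}=T(x_{1})\), the point \(x_{1}\) lies on the free boundary \(\partial\Omega_{t_{1}}\), where the pressure vanishes.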
Returning to equation (4.10) and applying the upper bound on \(\bar{p}\) obtained above, we have
\[r^{\prime}(t)+r(t)(K+m^{2}\xi_{d}(m)H(t))\geq-\xi_{d}(m)\frac{F(t)}{r(t)}\]
Multiplying both sides by \(2r(t)\) and defining \(z(t)=r(t)^{2}\), we get
\[z^{\prime}(t)+z(t)(2K+2m^{2}\xi_{d}(m)H(t))\geq-\xi_{d}(m)F(t). \tag{4.12}\]
Now we choose \(m\) by optimizing \(m^{2}\xi_{d}(m)\). Define
\[\xi_{d}:=\inf_{m>1}\frac{m^{2}}{2}\xi_{d}(m).\]
One can then check that \(\xi_{d}=(\frac{d}{2})^{\frac{d}{d-2}}\) and \(\operatorname{argmin}\frac{m^{2}}{2}\xi_{d}(m)=(\frac{d}{2})^{\frac{1}{d-2}}\) (where these should be understood in a limiting sense when \(d=2\)). Thus, we have
\[z^{\prime}(t)+z(t)(2K+4\xi_{d}H(t))\geq-dF(t). \tag{4.13}\]
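For the reader's convenience: with the normalization \(\Gamma_{d}^{\prime}(r)=r^{1-d}\), for \(d>2\) one finds \(\xi_{d}(m)=\frac{d-2}{1-m^{2-d}}\), and minimizing \(\frac{m^{2}}{2}\xi_{d}(m)\) gives \(m^{d-2}=\frac{d}{2}\), whence \(\xi_{d}=(\frac{d}{2})^{\frac{d}{d-2}}\). Note also that at the minimizer \(\xi_{d}(m)=2\xi_{d}/m^{2}=d\), which is the constant appearing in (4.13).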
Let \(\bar{H}(t)=\int_{0}^{t}4H(s)\,ds\). Multiplying both sides of (4.13) by \(\exp(2Kt+\xi_{d}\bar{H}(t))\) and integrating in time, we can conclude that
\[z(t)e^{2Kt+\xi_{d}\bar{H}(t)}\geq z(0)-d\int_{0}^{t}F(s)e^{2Ks+\xi_{d}\bar{H}(s )}\,ds. \tag{4.14}\]
Now we need to provide upper bounds on \(\exp(\xi_{d}\bar{H}(t))\). To do so, we will need to make a choice for \(\lambda\). Fix some \(\theta>0\) and set
\[\lambda(s)=\theta+s^{-1/2}.\]
We then have
\[\Lambda(t)=\frac{5}{4b}(\theta t+2t^{1/2})+\frac{t}{b}\log(1+\frac{C}{t}).\]
Using the above estimates, we see that
\[\begin{array}{l}4H(t)\leq\frac{\exp\left(\frac{5}{4b}(\theta(t_{1}-t_{0}-t) +2(t_{1}-t_{0}-t)^{1/2})+(t_{1}-t_{0}-t)\log(1+C/(t_{1}-t_{0}-t))\right)}{\int _{0}^{t_{1}-t_{0}-t}e^{\frac{5}{4b}\theta s}\,ds}=\\ \frac{\frac{5\theta}{4b}\exp\left(2(t_{1}-t_{0}-t)^{1/2}+(t_{1}-t_{0}-t)\log( 1+C/(t_{1}-t_{0}-t))\right)}{1-e^{-\frac{5}{4b}\theta(t_{1}-t_{0}-t)}}.\end{array}\]
Since \((t_{1}-t_{0}-t)\leq(t_{1}-t_{0})=\epsilon\), we can assume that \(\epsilon\) is sufficiently small that
\[4H(t)\leq\frac{5\theta}{4b(1-e^{-\frac{5}{4b}\theta(t_{1}-t_{0}-t)})}+\frac{2 0\theta(t_{1}-t_{0}-t)^{1/2}}{4b(1-e^{-\frac{5}{4b}\theta(t_{1}-t_{0}-t)})}\]
Hence, for some possibly new constant \(C>0\) independent of \(\epsilon\) and \(\theta\) we get
\[\bar{H}(t)\leq\log\left(\frac{e^{\frac{5\theta}{4b}\epsilon}-1}{e^{\frac{5 \theta}{4b}(t_{1}-t_{0}-t)}-1}\right)+C(1+\epsilon^{3/2}\theta).\]
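The logarithmic term arises from the exact integration of the first part of the bound on \(4H\): \(\int_{0}^{t}\frac{ae^{a(t_{1}-t_{0}-s)}}{e^{a(t_{1}-t_{0}-s)}-1}\,ds=\log\Big(\frac{e^{a\epsilon}-1}{e^{a(t_{1}-t_{0}-t)}-1}\Big)\) with \(a=\frac{5\theta}{4b}\) and \(\epsilon=t_{1}-t_{0}\).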
Thus,
\[\exp(\xi_{d}\bar{H}(t))\lesssim\big{(}\frac{e^{\frac{5\theta}{4b}\epsilon}-1}{e^{\frac{5\theta}{4b}(t_{1}-t_{0}-t)}-1}\big{)}^{\xi_{d}}\exp(C\epsilon^{3/2}\theta)\]
Now we move to estimating \(F(s)\exp(2Ks+\xi_{d}\bar{H}(s))\). From our choice of \(\lambda\), it is clear that \(\lambda(t)\geq 2\Lambda(t)\) for all \(t\) sufficiently small. Thus, it follows that
\[F(s)\lesssim(t_{1}-t_{0}-s)^{7/10}e^{-\lambda(t_{1}-t_{0}-s)/2},\]
hence, we have the bound
\[F(s)\exp(2Ks+\xi_{d}\bar{H}(s))\lesssim\big{(}\frac{e^{\frac{5\theta}{4b}\epsilon}-1 }{e^{\frac{5\theta}{4b}(t_{1}-t_{0}-s)}-1}\big{)}^{\xi_{d}}(t_{1}-t_{0}-s)^{7/10} \exp\Big{(}\frac{1}{2}\big{(}4Ks+(2C\epsilon^{3/2}-1)\theta-(t_{1}-t_{0}-s)^{ -1/2}\big{)}\Big{)}\]
Therefore, once \(\epsilon\) is small enough that \(2C\epsilon^{3/2}+\frac{5}{4b}\epsilon<1\) we can choose \(\theta\) large enough that
\[z(0)-d\int_{0}^{\epsilon}F(s)e^{2Ks+\xi_{d}\bar{H}(s)}\,ds\geq z(0)/2,\]
and from there we can conclude that
\[z(0)\lesssim z(t)e^{2Kt+\xi_{d}\bar{H}(t)}\]
for all \(t\in[0,\epsilon]\). This implies that
\[z(0)(\theta(t_{1}-t_{0}-t))^{\xi_{d}}\lesssim z(t)=r(t)^{2}.\]
Hence, \(r(0)(t_{1}-t_{0}-t)^{\xi_{d}/2}\lesssim r(t).\) The result now follows from the fact that \(\alpha_{d}=2/\xi_{d}\).
## 5. Results from Obstacle Problem Theory
In this section, we use techniques from the theory of the obstacle problem to study the local behavior of the interface. The main technique here is the quadratic blowup, which classifies free boundary points into regular points, where the zero set is asymptotically a half-space, and singular points, where the zero set is asymptotically lower dimensional. With sufficiently regular source term, the blowup limit approximates the solution at a uniform scale, and we can use this to extract information on the local geometry of the positive set. The regularity of the source term in the equation satisfied by \(w\) is governed by the regularity of the nutrient and the regularity of the hitting time. Since the nutrient enjoys parabolic regularity as in Lemma 2.2, Holder continuity of the hitting time leads to Holder continuous source, which is enough to control the blowup limit at both types of free boundary points.
Thus, using that \(T\in C^{0,\alpha}_{\rm loc}(\mathcal{O})\) for the \(\alpha\in(0,1)\) from Theorem 4.2, we show that the regular points form an open set of full measure in the spacetime interface, on which \(T\) improves to locally Lipschitz and the spatial interface evolves as a locally \(C^{1,1-}\) graph. The scale and bounds for which this regularity is achieved can be quantified in terms of the Holder seminorm of \(T\) and the scale at which the zero set achieves sufficiently large density near the regular point. We also show Holder regularity of the unit normal to the interface in spacetime, using the spatial regularity of the interface and the monotonicity of its expansion. Under the stronger assumption that no singular points occur for some time interval, this lets us improve \(T\) to \(C^{1,1/2-}_{\rm loc}\) on the corresponding region.
As for singular points, we show that they form a relatively closed set in \(\mathcal{O}\) contained in a \(C^{1}\) manifold of dimension \(d-1\). This improves on the standard obstacle problem result that the singular points at a fixed time are contained in a \(C^{1}\) manifold of dimension \(d-1\), and implies that the worst case, where the singular points have positive \((d-1)\)-dimensional Hausdorff measure for some time, occurs for at most countably many times. Under the additional assumption that \(T\) is Lipschitz up to the singular set, we show a stronger generic regularity result, which gives that for a.e. time the singular points have \((d-2)\)-dimensional Hausdorff measure zero. In dimension \(2\), this would imply that the set of times with singular points, a relatively closed subset of \((0,\infty)\), has zero measure.
We note that we are not currently able to prove that \(T\) is Lipschitz up to singular points. It is not clear to what extent the obstacle problem can be leveraged to understand the geometry of the patch at times just before a singular point occurs, in order to prove nondegeneracy of the pressure. Nondegeneracy at later times is also uncertain, but appears more tractable since the blowup is available. For example, suppose
one knew, for a singular point \(x_{0}\) and all sufficiently small \(r\), that the set \(\{w(\cdot,T(x_{0}))=0\}\cap B_{r}(x_{0})\) is contained in a strip of width \(Cr^{1+\alpha}\), for some \(C,\alpha\) depending on the source term. Assuming such a strip condition, then the Hopf lemma could be applied to establish nondegeneracy of \(p\) near \(x_{0}\) at times \(t\geq T(x_{0})\). This strip condition has been proven by [10] for singular points in the \((d-1)\)-dimensional stratum, albeit using methods which require much stronger regularity than \(C^{0,\alpha}\) source. A related result on the rate of convergence of the quadratic blowup at singular points in dimension 2 has been proven by [11]. As far as we are aware, it is not currently known whether or in what sense this strip condition may hold for the obstacle problem with \(C^{0,\alpha}\) source.
Nevertheless, we expect that \(T\) is indeed Lipschitz, as it seems unlikely that the pressure would become degenerate, and do so only at instances of merging or topological change. It is clear that we cannot hope for better than Lipschitz, since \(T\) cannot be differentiable when two pieces of the boundary collide while traveling at different speeds. We also discuss examples in Remark 5.7 which show that Lipschitz continuity of \(T\) is sharp at regular points without the additional assumption giving global control over singular points.
For the obstacle problem, the \(C^{1,\alpha}\) regularity of the free boundary at regular points and the \(C^{1}\) manifold covering singular points are well-known for Holder source (see appendix for more details). Thus, the main challenge lies in analyzing the time-indexed family of obstacle problems satisfied by \(w(\cdot,t)\) to control these properties in time. We note that such parameterized families have now been studied extensively for the constant source obstacle problem with varying fixed boundary data, mainly with the goal of understanding generic behavior of singular points ([14], [15]). Our problem differs in that we must contend with a varying low regularity source and no fixed boundary data, which rules out many of the techniques typically used. In particular, the results of [15], including that the singular set has \((d-4)\)-Hausdorff measure zero, do not appear to be in reach with even Lipschitz source. Our approach draws from arguments in [14] to establish the \(C^{1}\) manifold property for the singular set. However, whereas comparison arguments with the fixed boundary data allow Monneau to prove directly that the hitting time is Lipschitz, we must work harder to get lower regularity for \(T\). Finally, the analysis of the regular set for the time-parameterized family, to our knowledge, is new. The main facts we make use of are the Holder continuity of \(T\), the spacetime continuity of \(w\), the \(L^{1}\) time-continuity of \(\rho\), and the monotonicity of \(\rho\) and \(w\) in time.
We remark that Proposition 2.12 shows that the interface strictly expands, and in space-time is exactly the graph of \(T\). Therefore, regularity improvements to \(T\) correspond exactly to regularity of the space-time interface as a \(d\)-dimensional manifold. As a result, we will generally not consider the space-time perspective directly, preferring to work with the subsets of \(\mathbb{R}^{d}\) traced out by the moving interface.
Now, we proceed to study the local situation at the free boundary. As we noted above, the new regularity of \(T\) from Theorem 4.2 feeds back into the obstacle problem satisfied by \(w\) through the dependence of \(\eta\), as defined in (1.8), on \(T\). We state this precisely below:
**Lemma 5.1**.: _Up to \(C^{1,1-}\), \(\eta(\cdot,t)\) has the same spatial regularity as \(T\) on \(\overline{\{w(\cdot,t)>0\}}\). In particular, for any \(\tau>0\), we have \(\eta\in L^{\infty}_{t}C^{0,\alpha}_{x}([0,\tau];\mathbb{R}^{d})\)._
Proof.: The first part follows immediately from the \(L^{\infty}_{t}C^{1,1-}\) regularity of \(n\), from Lemma 2.2. The second part follows from the \(C^{0,\alpha}\) regularity of \(T\).
The exact regularity of \(\eta\) is relevant for determining the spatial regularity of the free boundary near regular points. However, for most results in this section, we only require Holder continuity to give uniqueness of the quadratic blowup limit introduced in Lemma 2.10, and to give spatial equicontinuity of \(\eta(\cdot,t)\). The uniqueness of the blowup limit and its subsequent characterization is best expressed as the following dichotomy, originally due to Caffarelli:
**Lemma 5.2**.: _Let \(u\) be a solution of the obstacle problem \(\Delta u=f\chi_{\{u>0\}}\) in \(\mathbb{R}^{d}\) with \(f\) positive and \(C^{0,\alpha}\) near 0. If \(0\in\partial\{u>0\}\), then one of the following holds:_
1. \(\{u=0\}\) _has density_ \(\frac{1}{2}\) _at 0, and the quadratic rescalings_ \(r^{-2}u(rx)\) _converge in_ \(C^{1,1-}(B_{1})\) _to_ \(\frac{f(0)}{2}(x\cdot e)_{+}^{2}\) _for some unit vector_ \(e\)_;_
2. \(\{u=0\}\) _has density 0 at 0, and the quadratic rescalings_ \(r^{-2}u(rx)\) _converge in_ \(C^{1,1-}(B_{1})\) _to_ \(\frac{f(0)}{2}x\cdot D^{2}u(0)x\)_, where_ \(D^{2}u(0)\) _exists in the classical sense and is a positive semidefinite matrix with trace 1._
_Points of the first type are called regular points, and points of the second type are called singular points._
This dichotomy was proven in [10] for the constant source obstacle problem, with the note that minor modifications could extend the proof to the Holder continuous case. An energetic criterion for the dichotomy appears in [11]. Careful proofs for the uniqueness of the blowup limit in the Holder continuous case are given in [1] for regular points and [12] for singular points.
The dichotomy applies to the free boundary of \(w(\cdot,t)\) for each \(t\). We let \(R_{t}\) denote the regular points of \(\partial\Omega_{t}\), and \(\Sigma_{t}\) denote the singular points of \(\partial\Omega_{t}\), for the obstacle problem solved by \(w(\cdot,t)\) at each time. Subsequently, we take
\[R:=\bigcup_{t>0}R_{t}\text{ and }\Sigma:=\bigcup_{t>0}\Sigma_{t},\]
so that
\[\mathcal{O}=\{0<T(x)<\infty\}=R\cup\Sigma.\]
Let us also mention that \(\{R_{t}\}_{t>0}\) is a foliation of \(R\), and so is \(\{\Sigma_{t}\}_{t>0}\) for \(\Sigma\), due to Proposition 2.12. We further subdivide singular points into strata by the dimensionality of the zero set; specifically, for \(0\leq k\leq d-1\) we denote
\[\Sigma_{t}^{k}:=\{x\in\partial\Omega_{t}:\dim\ker D^{2}w(x,t)=k\}\text{ and }\Sigma^{k}:=\bigcup_{t>0}\Sigma_{t}^{k}.\]
For the obstacle problem, regular points are relatively open in the free boundary ([1], Corollary 4.8), and thus singular points form a closed set. This topological control is lost in the union over all times, so a first step is to reestablish that control for our \(R\) and \(\Sigma\). For this, we use a lemma due to Blank, which allows us to identify regular points by finite-scale behavior.
**Lemma 5.3** ([1] Theorem 4.5).: _Let \(u\geq 0\) solve \(\Delta u=f\chi_{\{u>0\}}\) in \(B_{1}(0)\), with \(0\in\partial\{u>0\}\) and \(0<f\leq 1\). Then there exist universal parameters \(\lambda_{0},r_{0},\tau\in(0,1)\) such that if \(\lambda_{0}<f\) in \(B_{1}\) and_
\[\frac{|\{x:u(x)=0\}\cap B_{r}(0)|}{|B_{r}|}\geq\frac{1}{8}\text{ for some }r<r_{0},\]
_then_
\[\frac{|\{x:u(x)=0\}\cap B_{s}(0)|}{|B_{s}|}\geq\frac{3}{8}\text{ for all }s<\tau r.\]
In particular, if the hypothesis of the lemma holds, then \(\{u=0\}\) has positive density at 0, so 0 is a regular point. This lemma is applicable even when \(f\) is less regular than Holder, and results in a modified regular-singular dichotomy in that case. Essentially, one may take away that nonuniqueness of the blowup limit in the low regularity setting can occur due to infinite rotation, but not due to any sort of mixing of regular and singular point behavior at different scales. Since the following result only relies on the previous lemma and continuity of \(T\), it also holds when \(T\) is less regular than Holder.
**Proposition 5.4**.: \(R\) _is open. Thus, \(\Sigma\) is relatively closed in \(\mathcal{O}\)._
Proof.: Let \(\lambda_{0},r_{0},\tau\in(0,1)\) be the parameters given by Lemma 5.3. Suppose \(x_{0}\in R\), so that \(\Omega_{T(x_{0})}\) has density \(\frac{1}{2}\) at \(x_{0}\), and thus there exists \(r>0\) such that
\[\frac{|\{x:w(x,T(x_{0}))=0\}\cap B_{r}(x_{0})|}{|B_{r}|}\geq\frac{1}{8}\]
Then the result of the lemma is that for all \(s<\tau r\),
\[\frac{|\{x:T(x)\geq T(x_{0})\}\cap B_{s}(x_{0})|}{|B_{s}|}\geq\frac{3}{8}\]
Fix \(s_{0}=\tau r/2\). By Lemma 2.2, \(|\Omega_{t}|\) is continuous in \(t\), so we can choose \(\delta>0\) such that
\[\frac{|\{x:T(x)>T(x_{0})+\delta\}\cap B_{s_{0}}(x_{0})|}{|B_{s_{0}}|}\geq\frac{5 }{16}\]
Let \(s_{1}<s_{0}\) be such that if \(|x-x_{0}|<s_{1}\), then \(|T(x)-T(x_{0})|<\delta\). Then for \(x_{1}\in B_{s_{1}}(x_{0})\), we compute
\[\frac{|\{x:T(x)>T(x_{1})\}\cap B_{s_{0}+|x_{1}-x_{0}|}(x_{1})|}{|B _{s_{0}+|x_{1}-x_{0}|}|} \geq\frac{|\{x:T(x)>T(x_{0})+\delta\}\cap B_{s_{0}}(x_{0})|}{|B _{s_{0}+|x_{1}-x_{0}|}|}\] \[\geq\left(\frac{5}{16}\right)\left(\frac{s_{0}}{s_{0}+s_{1}} \right)^{d}\]
Thus, if we take \(s_{1}\) sufficiently small, this last quantity is greater than \(\frac{1}{8}\), and all points in \(B_{s_{1}}(x_{0})\) are regular.
### Regular points
We now turn toward understanding the behavior of the interface near regular points. Standard obstacle problem theory ([10], [11]) gives that for \(C^{0,\alpha}\) source, the interface is locally \(C^{1,\alpha}\) at regular points, with the scale at which the regularity is achieved depending on the scale at which the zero set is sufficiently large. We discuss the dependence of this regularity in greater detail in the appendix. In particular, for this problem we have:
**Proposition 5.5**.: \(R\) _can be covered by open neighborhoods \(V\), each with the property that there exist constants \(C,r>0\) such that for each \(x\in V\), \(B_{r}(x)\cap\Omega_{T(x)}\) is the intersection of \(B_{r}(x)\) with the lower graph of a \(C^{1,\alpha}\) function in some coordinate system (depending on \(x\)) with seminorm bounded by \(C\)._
Proof.: By Lemma 6.9, we need only show that if the zero set reaches density sufficiently close to \(\frac{1}{2}\) at scale \(r\) near \(x\), then it does so at the same scale at all points near \(x\). This is essentially immediate from the fact that \(\Omega_{t}\) expands monotonically in \(t\), with the measure \(|\Omega_{t}|\) Lipschitz as a function of \(t\) by Lemma 2.2.
The dependence of the coordinate system in Proposition 5.5 is only a minor inconvenience, and we will eventually remove it in Proposition 5.12. To better understand this dependence, we introduce \(\nu(x)\), defined for \(x\in R\) as the outward unit normal to \(\Omega_{T(x)}\) at \(x\). We have spatial regularity of \(\nu\) from the obstacle problem; namely, \(\nu\in C^{0,\alpha}_{\mathrm{loc}}(R_{t})\) for each \(t\). Our goal will be to improve this to regularity of \(\nu\) on \(R\).
The key ingredients will be the regularity of \(\Omega_{t}\) near regular points, and the strictly monotonic expansion of the \(\Omega_{t}\). The essential idea will be that if the tangent planes to \(\Omega_{T(x)}\) at \(x\) and to \(\Omega_{T(y)}\) at \(y\) intersect for some points \(x,y\) with different hitting times, they must intersect well away from \(x\) and \(y\) or else we will be able to use the regularity of the interfaces to show that \(\partial\Omega_{T(x)}\) and \(\partial\Omega_{T(y)}\) intersect, which contradicts monotonicity. This then gives control over the angle at which the tangent planes may intersect in terms of the distance between \(x\) and \(y\).
**Proposition 5.6**.: _The outward unit normal vector \(\nu\) to \(\Omega_{T(x)}\) at \(x\) satisfies \(\nu\in C^{0,\alpha/(1+\alpha)}_{\mathrm{loc}}(R)\)._
Proof.: By Proposition 5.5, we may cover \(R\) with neighborhoods \(V\) such that for each \(x\in V\) uniformly, \(\partial\Omega_{T(x)}\cap V\) is the lower graph in some coordinate system of a function \(f_{T(x)}\), with the \(f_{t}\) uniformly bounded in \(C^{1,\alpha}\). We will restrict to such a \(V\) for the remainder of the proof.
Then as a preliminary step, we can observe continuity of \(\nu\) from the regularity and monotonicity of the interface by a purely geometrical argument. Namely, a \(C^{1,\alpha}\) domain enjoys a uniform interior and exterior cone condition, where the angle of the cone improves toward \(\pi\) as we allow its height to approach \(0\); specifically, the cone in \(B_{r}\) can be taken with angle \(2\arccos(Cr^{\alpha})\), when the \(C^{1,\alpha}\) seminorm is \(C\). Thus, if \(\nu\) were discontinuous at some \(x\in R\), we could use compactness to find a sequence \((y_{n})\) converging to \(x\) with \(T(y_{n})\) either increasing or decreasing to \(T(x)\) and \(\nu(y_{n})\) converging to some unit vector distinct from \(\nu(x)\). Then for \(n\) sufficiently large, at a sufficiently small scale, the interior cone at \(x\) will intersect with the exterior cone of a \(y_{n}\), or vice versa, and we draw a contradiction with the monotonic expansion of the \(\Omega_{t}\) depending on whether the \(T(y_{n})\) are decreasing or increasing.
Then, we have checked that \(x\to\nu(x)\) is continuous. Now, to obtain a quantitative local continuity estimate in view of the cone regularity we described above, we may restrict attention to \(x,y\in V\) with
\(\frac{1}{2}<\nu(x)\cdot\nu(y)<1\). Moreover, since the case \(T(x)=T(y)\) is managed by the spatial regularity of the interface, we may assume that \(T(x)>T(y)\). For notation, we let \(r=\nu(x)\cdot\nu(y)\) and use \(P_{x},P_{y}\) to refer to the tangent planes to \(\Omega_{T(x)}\) at \(x\) and \(\Omega_{T(y)}\) at \(y\) respectively.
Let \(v:=\frac{\nu(y)-|r|\nu(x)}{\sqrt{1-r^{2}}}\) be the projection of \(\nu(y)\) into \(\nu(x)^{\perp}\), scaled to unit norm. Considering the point \(x-hv\) for \(h>0\), we compute that its \(\nu(y)\) component is \(x\cdot\nu(y)-h\sqrt{1-r^{2}}\). Thus \(x-hv\) reaches \(P_{y}\) precisely when \(h=\frac{(x-y)\cdot\nu(y)}{\sqrt{1-r^{2}}}\), and in general we have
\[d(x-hv,P_{y})\geq h\sqrt{1-r^{2}}-\delta\quad\text{ where }\delta:=|x-y| \tag{5.1}\]
Now, we apply the \(C^{1,\alpha}\) regularity of the interface in \(V\). For all \(h\) sufficiently small, this regularity implies that \(\partial\Omega_{T(x)}\) in \(B_{h}(x)\) is contained in a \(Ch^{1+\alpha}\)-neighborhood of \(P_{x}\); in other words, \(\Omega_{T(x)}\) and its exterior contain the following halfspaces:
\[\{z\in B_{h}(x):(z-x-Ch^{1+\alpha}\nu(x))\cdot\nu(x)\leq 0\}\subset\Omega_{T(x)} \cap B_{h}(x) \tag{5.2}\]
\[\{z\in B_{h}(x):(z-x+Ch^{1+\alpha}\nu(x))\cdot\nu(x)\geq 0\}\subset(\mathbb{R}^{d} \setminus\Omega_{T(x)})\cap B_{h}(x) \tag{5.3}\]
In particular, since \(x-hv\in P_{x}\), it follows that there is a point \(\tilde{x}\in\partial\Omega_{T(x)}\) with
\[|\tilde{x}-(x-hv)|<Ch^{1+\alpha}. \tag{5.4}\]
Let \(y_{1}\) be the nearest point in \(P_{y}\) to \(\tilde{x}\).
The point \(\tilde{x}\) cannot lie too far below \(y_{1}\), or else it falls into \(\Omega_{T(y)}\), contradicting that \(T(x)>T(y)\). Specifically, as in (5.2), we can apply the \(C^{1,\alpha}\) regularity to get that for all sufficiently small \(r\), \(\Omega_{T(y)}\cap B_{r}(y)\) contains the halfspace \(\{z:(z-y-Cr^{1+\alpha}\nu(y))\cdot\nu(y)\leq 0\}\cap B_{r}(y)\). Since \(\tilde{x}\notin\overline{\Omega_{T(y)}}\), \(\tilde{x}\) must not be contained in that halfspace, and we have
\[(\tilde{x}-y_{1})\cdot\nu(y)>-C|y_{1}-y|^{1+\alpha} \tag{5.5}\]
The left side here is the signed distance of \(\tilde{x}\) to \(P_{y}\). From (5.1), the signed distance of \(x-hv\) to \(P_{y}\) is bounded above by \(-(h\sqrt{1-r^{2}}-\delta)\), so using (5.4), we conclude that the left side above is bounded above by \(Ch^{1+\alpha}-(h\sqrt{1-r^{2}}-\delta)\). On the other hand, we have
\[|\tilde{x}-y|\leq|\tilde{x}-(x-hv)|+|(x-hv)-x|+|x-y|\leq C_{0}h^{1+\alpha}+h+\delta\]
When \(\delta<h<1\), this is \(O(h)\), and so \(|\tilde{x}-y|^{1+\alpha}\leq Ch^{1+\alpha}\) for some \(C\). Thus, the inequality (5.5) becomes
\[Ch^{1+\alpha}-(h\sqrt{1-r^{2}}-\delta)>-Ch^{1+\alpha}\]
Rearranging and absorbing constants, this means
\[h\sqrt{1-r^{2}}-\delta\leq Ch^{1+\alpha}\]
Figure 1. The points \(x\), \(x-hv\), \(\tilde{x}\), and \(y_{1}\), together with the tangent planes \(P_{x}\) and \(P_{y}\).
so that
\[|\nu(x)-\nu(y)|\leq 2\sqrt{1-r^{2}}\leq C\left(h^{\alpha}+\delta h^{-1}\right)\]
Optimizing \(h\), we get
\[|\nu(x)-\nu(y)|\leq C\delta^{\frac{\alpha}{1+\alpha}}\]
where the constant depends only on the uniform bound for the \(C^{1,\alpha}\) seminorms of the graphs, and on \(\alpha\). Hence we conclude.
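For concreteness, the optimization in the last step can be realized by taking \(h=\delta^{\frac{1}{1+\alpha}}\), which satisfies \(\delta<h<1\) whenever \(\delta<1\):

\[Ch^{\alpha}+\delta h^{-1}=C\delta^{\frac{\alpha}{1+\alpha}}+\delta^{1-\frac{1}{1+\alpha}}=(C+1)\,\delta^{\frac{\alpha}{1+\alpha}}.\]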
As a result of the regularity of \(\nu\), we can improve Proposition 5.5 to also have the coordinate system chosen locally uniformly. In other words, near regular points, one can fix a local coordinate system in which the free boundary evolves as a \(C^{1,\alpha}\) graph over some time interval.
We now turn toward applying the improved geometry of the patch at regular points to the pressure. Elliptic regularity for \(C^{1,\alpha}\) domains implies that the pressure \(p(\cdot,t)\) has a well-defined gradient on \(R_{t}\), and the Hopf lemma for \(C^{1,\alpha}\) domains implies that \(\nabla p(\cdot,t)\) is nonvanishing on \(R_{t}\). However, there is an important limitation here: \(x\mapsto\nabla p(x,T(x))\) is not necessarily continuous on \(R\), complicating our analysis. This is illustrated with the following example:
_Remark 5.7_.: Singular points can exert a nonlocal effect on the pressure gradient.
For example, if we consider a pressure supported on a strip of width \(h\) with zero boundary conditions and constant Laplacian \(-1\), then we can see that \(|\nabla p|=\frac{h}{2}\) on the boundary, since the solution to the one-dimensional problem with \(p(0)=p(h)=0\) is \(p(x)=\frac{1}{2}x(h-x)\). In particular, it follows that if we have a patch which consists of two strips, and those strips merge along a hyperplane at some time, then \(\nabla p\) has a jump discontinuity in time at every regular point at the time those strips merge.
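The one-dimensional computation above is also easy to confirm numerically; the following is a minimal finite-difference sketch (the width \(h=0.3\) and the grid size are arbitrary illustrative choices, not taken from the text):

```python
import numpy as np

# Solve p'' = -1 on (0, h) with p(0) = p(h) = 0 and compare the wall
# gradient with the exact value h/2 coming from p(x) = x(h - x)/2.
h = 0.3
N = 400
x = np.linspace(0.0, h, N + 1)
dx = x[1] - x[0]

# Standard second-order tridiagonal Laplacian on the interior nodes.
A = (np.diag(-2.0 * np.ones(N - 1))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1)) / dx**2
p = np.zeros(N + 1)
p[1:-1] = np.linalg.solve(A, -np.ones(N - 1))

grad_wall = (p[1] - p[0]) / dx  # one-sided difference at the wall
print(grad_wall, h / 2)         # both approximately 0.15
```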
Similar examples can be considered for singular points in each stratum \(\Sigma^{k}\) by examining a cylindrical patch with a cylindrical hole as the radius of the cylinder shrinks to \(0\). Here, by cylinder we mean the product of \(\mathbb{R}^{k}\) and a \((d-k)\)-dimensional ball, for \(0\leq k\leq d-1\).
We note that the discontinuity in the gradient in Remark 5.7 is a jump in magnitude, not in direction. Indeed, the zero boundary condition implies that \(\frac{\nabla p(x,T(x))}{|\nabla p(x,T(x))|}=-\nu(x)\) on \(R\), so the regularity of \(\nu\) from Proposition 5.6 rules out such discontinuities.
This example also proves to be an obstacle to higher regularity of \(T\); we will later see in Lemma 5.18 that the derivative of \(T\) closely depends on \(\nabla p(x,T(x))\) when it exists. As a result, we can hope for \(T\) to be at best Lipschitz on \(R\).
The key in establishing this regularity will be to obtain a quantitative estimate from the Hopf lemma, to get a locally uniform lower bound for \(|\nabla p(x,T(x))|\) on \(R\). For this, we require an a priori estimate for the growth of the solution, in addition to control over the geometry. We can obtain this growth estimate from the strict superharmonicity of \(p(\cdot,t)\), and so we have the following statement:
**Lemma 5.8**.: _Let \(r_{0},c_{0},C_{0}>0\) and \(\alpha\in(0,1)\). Suppose \(u\) is a positive \(C^{2}\) solution to \(\Delta u\leq-c_{0}<0\) on \(B_{r_{0}}(0)\cap\{x:x_{d}>C_{0}|x^{\prime}|^{1+\alpha}\}\) with \(u(0)=0\), where we write \(x=(x^{\prime},x_{d})\). Then there exist \(\varepsilon,\delta>0\) depending only on \(d,\alpha,r_{0},c_{0},C_{0}\) such that for all \(h\in(0,\delta)\), we have \(u(he_{d})\geq\varepsilon h\)._
Proof.: First, we let \(C_{1}=C_{0}+1\), to give additional separation from the boundary when we are away from \(0\), and let \(U_{r}=B_{r}(0)\cap\{x:x_{d}>C_{1}|x^{\prime}|^{1+\alpha}\}\) for \(0<r<r_{0}\). For a given \(r\), we will decompose \(\partial U_{r}\) as \(\Gamma_{1}\cup\Gamma_{2}\), where \(\Gamma_{2}:=\partial B_{r}(0)\cap\{x:x_{d}\geq C_{1}|x^{\prime}|^{1+\alpha}\}\) is a spherical cap, and \(\Gamma_{1}=B_{r}(0)\cap\{x:x_{d}=C_{1}|x^{\prime}|^{1+\alpha}\}\).
Figure 2. The region \(U_{r}\) (shaded).
The proof of the Hopf lemma proceeds by perturbing \(u\) by a function \(v\) constructed specially for the domain and applying the comparison to the result. In particular, if for some \(v,\varepsilon>0,\delta>0\) we have
\[\begin{cases}\Delta(u-\varepsilon v)\leq 0\text{ on }U_{\delta}\\ u-\varepsilon v\geq 0\text{ on }\partial U_{\delta}\\ \partial_{d}v(0)=1\end{cases}\]
then the comparison principle implies that \(u-\varepsilon v\geq 0\) on \(U_{\delta}\), and the control over the derivative of \(v\) at \(0\) implies the result.
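To spell out the last step: the particular \(v\) chosen below satisfies \(v(0)=0\) and \(\partial_{d}v(0)=1\), so granting such a \(v\), the comparison gives, for all \(h\in(0,\delta)\) small,

\[u(he_{d})\geq\varepsilon v(he_{d})=\varepsilon\left(h+o(h)\right)\geq\frac{\varepsilon}{2}h,\]

which is the claimed bound after renaming \(\varepsilon\).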
Borrowing from the proof of the Hopf lemma for \(C^{1,\alpha}\) domains in [10], we let
\[v(x)=x_{d}+\frac{2C_{1}}{\alpha}(\alpha+d-1)x_{d}^{1+\alpha}-2C_{1}|x|^{1+\alpha}\]
We note by direct computation that \(\partial_{d}v(0)=1\) and
\[\Delta v(x)=2C_{1}(1+\alpha)(\alpha+d-1)\left(x_{d}^{\alpha-1}-|x|^{\alpha-1}\right)\]
In particular, \(\Delta v(x)\geq 0\) on \(U_{r}\) for any \(r\).
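For the reader's convenience, this computation uses the elementary identities (valid for \(x_{d}>0\) and \(x\neq 0\), respectively)

\[\Delta\,x_{d}^{1+\alpha}=\alpha(1+\alpha)\,x_{d}^{\alpha-1},\qquad\Delta\,|x|^{1+\alpha}=(1+\alpha)(\alpha+d-1)\,|x|^{\alpha-1},\]

and the sign follows since \(x_{d}\leq|x|\) and \(\alpha-1<0\) on \(U_{r}\).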
Thus, we reduce to verifying the boundary condition, which requires using the particular behavior of \(v\) on \(\Gamma_{1}\), and choosing \(\delta\) and \(\varepsilon\) appropriately to control \(v\) on \(\Gamma_{1}\) and \(\Gamma_{2}\). Following this plan, we first check the boundary inequality on \(\Gamma_{1}\), defined above, where we have \(x_{d}=C_{1}|x^{\prime}|^{1+\alpha}\). Along a curve \(x(t)=(te^{\prime},C_{1}t^{1+\alpha})\) for a unit vector \(e^{\prime}\in\mathbb{R}^{d-1}\), we have
\[v(x(t))=C_{1}t^{1+\alpha}+\frac{2C_{1}^{2+\alpha}}{\alpha}(\alpha+d-1)t^{(1+\alpha)^{2}}-2C_{1}(t^{2}+C_{1}^{2}t^{2(1+\alpha)})^{(1+\alpha)/2}\]
For small \(t>0\), we can drop the higher order terms and see that \(v\) grows like \(C_{1}t^{1+\alpha}-2C_{1}t^{1+\alpha}=-C_{1}t^{1+\alpha}\). In particular, there exists \(\delta_{0}=\delta_{0}(d,\alpha,C_{0})\) such that if \(\delta\leq\delta_{0}\), then \(v(x)\leq 0\) on \(B_{\delta}(0)\cap\{x:x_{d}=C_{1}|x^{\prime}|^{1+\alpha}\}\). We will set \(\delta=\min(\delta_{0},r_{0}/2)\) for the rest of the proof.
Next, we handle the boundary inequality on \(\Gamma_{2}\), the spherical cap defined by \(\Gamma_{2}=\partial B_{\delta}(0)\cap\{x:x_{d}\geq C_{1}|x^{\prime}|^{1+\alpha}\}\). By compactness, \(\Gamma_{2}\) has positive distance to the graph \(\{x:x_{d}=C_{0}|x^{\prime}|^{1+\alpha}\}\). Let \(D=D(d,\alpha,\delta,C_{0})\) denote this distance. After shrinking \(D\) to at most \(\frac{r_{0}}{2}\) if necessary, we get that \(u\) is defined on a ball of radius \(D\) at each point of \(\Gamma_{2}\). For each \(z\in\Gamma_{2}\), the paraboloid \(\frac{c_{0}}{2d}(D^{2}-|x-z|^{2})\) on \(B_{D}(z)\), with zero boundary data and Laplacian \(-c_{0}\), is a subsolution to \(u\), so we conclude that \(u\geq\frac{c_{0}}{2d}D^{2}\) on \(\Gamma_{2}\). Let \(M=\max(1,\max_{\Gamma_{2}}v)\), and then we can take
\[\varepsilon=\frac{c_{0}D^{2}}{2dM}\]
to get \(u-\varepsilon v\geq 0\) on \(\Gamma_{2}\), with \(\varepsilon\) depending on all of the parameters in the statement of the lemma.
On \(\Gamma_{1}\), we have \(u\geq 0\) and \(v\leq 0\), so \(u-\varepsilon v\geq 0\). On \(\Gamma_{2}\), we chose \(\varepsilon\) so that \(u-\varepsilon v\geq 0\). Thus, \(u-\varepsilon v\geq 0\) on \(\partial U_{\delta}\), so we conclude.
With the Hopf lemma estimate and the regularity of the boundary near regular points, we can conclude that \(p(\cdot,T(x))\) has linear growth at \(x\), locally uniformly on \(R\).
**Proposition 5.9**.: \(R\) _can be covered with neighborhoods \(V\) with the following property: there exist parameters \(C,c,r_{0}>0\) such that for any \(x\in V\),_
\[c\leq r^{-1}\sup_{B_{r}(x)}p(\cdot,T(x))\leq C\quad\text{for all }r\in(0,r_{0})\]
_In particular, \(|\nabla p(x,T(x))|\sim 1\) on \(V\), for implicit constants depending on \(V\)._
Proof.: To obtain the lower bound, we apply Lemma 5.8. Thus, we must show that the parameters \(r_{0},c_{0},C_{0}\) from the statement of the lemma can be chosen locally uniformly on \(R\). By Lemma 2.4, we can choose \(c_{0}>0\) locally uniformly in time so that \(\Delta p=-n\leq-c_{0}<0\), using the assumption that \(n_{0}\) is bounded away from \(0\). By Proposition 5.5, for any \(x_{0}\in R\), we can find a neighborhood \(V\) of \(x_{0}\) in which we have a uniform \(C_{0}\) so that each \(R_{t}\) which intersects the neighborhood does so as a graph with \(C^{1,\alpha}\) seminorm controlled by \(C_{0}\). By taking \(r_{0}\) so that \(B_{2r_{0}}(x_{0})\subset V\), we can use \(r_{0},C_{0}\) as the parameters for all points in \(B_{r_{0}}(x_{0})\).
To obtain the upper bound, we first use Proposition 5.5. This gives us a neighborhood \(V\) and a parameter \(r_{1}>0\) for which \(B_{r_{1}}(x)\cap\partial\Omega_{T(x)}\) is a \(C^{1,\alpha}\) graph, with uniform control over the \(C^{1,\alpha}\) seminorm. In particular, this regularity implies that there is an \(r_{2}>0\) such that for all \(x\in V\) and all \(r\in(0,r_{2}]\), we have \(x-r\nu(x)\in\Omega_{T(x)}\). Then the mean value theorem along the segment \(s\mapsto x-s\nu(x)\), together with \(p(x,T(x))=0\), implies that there exists some \(r\in(0,r_{2})\) for which
\[\nabla p(x-r\nu(x),T(x))\cdot(-\nu(x))=\frac{p(x-r_{2}\nu(x),T(x))}{r_{2}}\]
We recall from the proof of Lemma 2.3 that \(p\in L^{\infty}(\mathbb{R}^{d}\times[0,\tau])\), for any \(\tau\in(0,\infty)\), using that the patch has bounded support and comparing to a sufficiently large paraboloid supersolution. Thus, we have
\[|\nabla p(x-r\nu(x),T(x))\cdot\nu(x)|\leq Cr_{2}^{-1}\]
Using the regularity of the boundary, the \(L^{\infty}\) bound on the pressure, and the \(L^{\infty}\) bound on the nutrient from Lemma 2.2, we can invoke boundary Schauder estimates to have \(p(\cdot,T(x))\) uniformly \(C^{1,\alpha}\) on \(B_{r_{1}}(x)\cap\Omega_{T(x)}\) for each \(x\in V\). Thus, we can transfer our bound to the boundary:
\[|\nabla p(x,T(x))|=|\nabla p(x,T(x))\cdot\nu(x)|\leq|(\nabla p(x,T(x))-\nabla p(x-r\nu(x),T(x)))\cdot\nu(x)|+|\nabla p(x-r\nu(x),T(x))\cdot\nu(x)|\leq Cr^{\alpha}+Cr_{2}^{-1}\]
This also gives the upper bound on the linear growth of \(p(\cdot,T(x))\) near \(x\), so we conclude.
Having established nondegeneracy of the pressure, we get improved regularity of \(T\) on \(R\).
**Corollary 5.10**.: \(T\in C^{0,1}_{\rm loc}(R)\)_. In other words, \(T\) attains its optimal regularity on \(R\) in light of Remark 5.7, barring additional assumptions on \(\Sigma\)._
Proof.: This follows from the boundary regularity from Proposition 5.5 and the linear nondegeneracy of the pressure from Proposition 5.9, via a method similar to the comparison arguments of Section 4. Using these properties, we can construct a radial subsolution initially supported on an annulus in \(\Omega_{t}\) which expands at a constant rate, near any \(x_{0}\in R\). From this argument, we get
\[(T(x)-T(x_{0}))_{+}\leq C|x-x_{0}| \tag{5.6}\]
for \(x_{0}\in R\) and \(x\) sufficiently close to \(x_{0}\). Since the boundary regularity and linear growth rate are uniform for \(x_{0}\) restricted to a compact \(K\subset R\), the one-sided bound (5.6) holds uniformly for \(x,x_{0}\in K\), and so we conclude that \(T\) is locally Lipschitz on \(R\).
Finally, we turn to refining the statement of Proposition 5.5. The first step will be to use the linear growth of the pressure to control the boundary in the Hausdorff metric.
**Lemma 5.11**.: _For any \(x\in R\), there exist parameters \(r,\delta>0\) such that for \(t_{1},t_{2}\in(T(x)-\delta,T(x)+\delta)\), we have_
\[\sup_{y_{1}\in R_{t_{1}}\cap B_{r}(x)}\inf_{y_{2}\in R_{t_{2}}\cap B_{r}(x)}|y _{1}-y_{2}|\sim|t_{1}-t_{2}|\]
_In other words, \(D(R_{t_{1}}\cap B_{r}(x),R_{t_{2}}\cap B_{r}(x))\sim|t_{1}-t_{2}|\), where \(D\) denotes Hausdorff distance. Here, all parameters and implicit constants depend on \(x\)._
Proof.: Let \(B\) be a ball compactly contained in \(R\), and let \(y_{1},y_{2}\in B\) with \(T(y_{1})<T(y_{2})\). Since \(T\in C^{0,1}_{\rm loc}(R)\) by Corollary 5.10, we have \(|T(y_{1})-T(y_{2})|\leq C(B)|y_{1}-y_{2}|\). We get the reverse bound by an analogous argument, using the boundary regularity of Proposition 5.5 and the upper bound from Proposition 5.9 on the linear growth of \(p(\cdot,t_{1})\) away from \(y_{1}\). That is, following the approach of Section 4, we can construct a supersolution supported outside a ball in the exterior of \(\Omega_{T(y_{1})}\) near \(y_{1}\), such that the supersolution expands at a constant rate. From that argument, we get that there exists a point \(\tilde{y}_{2}\in R_{t_{2}}\) with \(|y_{1}-\tilde{y}_{2}|\leq C(B)|T(y_{1})-T(y_{2})|\). Letting \(t_{1}=T(y_{1}),t_{2}=T(y_{2})\), and restricting to the case where \(|t_{1}-t_{2}|\) is sufficiently small to guarantee that the \(\tilde{y}_{2}\) from before is in \(B\), we have shown that \(D(R_{t_{1}}\cap B,R_{t_{2}}\cap B)\sim|t_{1}-t_{2}|\) for implicit constants depending on \(B\), from which we can obtain the original statement.
Using the previous result, we can now state our final improved form of Proposition 5.5.
**Proposition 5.12**.: \(R\) _is covered by neighborhoods \(V\) with the following property: there exists \(r>0\), a coordinate system \((x^{\prime},x_{n})\), and a locally defined function \(f(x^{\prime},t)\) such that for each \(x\in V\), \(\Omega_{T(x)}\cap B_{r}(x)\) is the lower graph \(\{y\in B_{r}(x):y_{n}\leq f(y^{\prime},T(x))\}\). Moreover, \(f\) is uniformly \(C^{1,1}\) in space and \(C^{0,1}\) in time._
Proof.: First, by Proposition 5.6, we note that the coordinate system in Proposition 5.5 can be chosen locally uniformly, so that we obtain \(r>0\) and a family \(f(x^{\prime},t)\) which is uniformly \(C^{1,1}\) in space, using the Lipschitz regularity of \(T\).
Thus, it remains only to check that the regularity in time follows from our control over the Hausdorff distance. Fix \(x^{\prime}\) and \(t_{1},t_{2}\), and write \(x_{1}=(x^{\prime},f(x^{\prime},t_{1}))\), \(x_{2}=(x^{\prime},f(x^{\prime},t_{2}))\). Then for \(\varepsilon=|f(x^{\prime},t_{1})-f(x^{\prime},t_{2})|=|x_{1}-x_{2}|\), \(B_{\varepsilon}(x_{1})\) contains the point \(\tilde{x}\in\partial\Omega_{T(x_{2})}\) which minimizes the distance to \(x_{1}\). By Lemma 5.11, after possibly shrinking our neighborhood, \(|x_{1}-\tilde{x}|\leq C|t_{1}-t_{2}|\). In particular, \(|\tilde{x}^{\prime}-x^{\prime}|\leq C|t_{1}-t_{2}|\), so \(|f(\tilde{x}^{\prime},t_{2})-f(x^{\prime},t_{2})|\leq C|t_{1}-t_{2}|\), for some larger \(C\) given by the \(C^{1}\) spatial regularity of \(f\). Then
\[|f(x^{\prime},t_{1})-f(x^{\prime},t_{2})|\leq|f(x^{\prime},t_{1})-f(\tilde{x}^ {\prime},t_{2})|+|f(\tilde{x}^{\prime},t_{2})-f(x^{\prime},t_{2})|\leq C|t_{1} -t_{2}|\]
which completes the proof.
### Singular points
Now we proceed to the analysis of the singular set, with the goal of controlling the dimension of the singular points. First, we will show that the blowup profile at singular points varies continuously along the spacetime interface. The main tool will be a uniform approximation result that we prove in the appendix, which will allow us to make use of the uniform-in-time spatial continuity of \(\eta\).
**Proposition 5.13**.: \(x\mapsto D^{2}w(x,T(x))\) _is continuous on \(\Sigma\)._
Proof.: First, by Lemma 2.3 and Lemma 5.1, we have that \(w\) is locally spacetime Lipschitz and \(\eta\) is \(C^{0,\alpha}\) in space locally uniformly in time. We also have that \(T\) is locally \(C^{0,\alpha}\) on \(\mathcal{O}\). Thus, for some \(C>0\), we can restrict to a spacetime neighborhood of the interface where all of these norms are bounded by \(C\) (in the case of \(T\), in the sense of the neighborhood's projection into space).
Let \(\varepsilon>0\). By Lemma 6.13, there exists a scale depending only on the modulus of continuity of \(1-\eta\) in (1.8), such that the quadratic blowup uniformly approximates \(w\) near singular points at that scale; concretely, there is a \(\delta=\delta(\varepsilon/3,C)>0\) such that if \(x_{0}\) is a singular point in our neighborhood with blowup \(q_{0}(x)=\frac{1}{2}x\cdot D^{2}w(x_{0},T(x_{0}))x\) (recentered at \(0\)), then
\[\|\delta^{-2}w(x_{0}+\delta x,T(x_{0}))-q_{0}\|_{C^{1}(B_{1})}<\frac{ \varepsilon}{3}\]
In particular, if \(x_{1}\) is another singular point in the same neighborhood with blowup \(q_{1}\), then we have
\[\|q_{0}-q_{1}\|_{L^{\infty}(B_{1})}\leq\frac{2\varepsilon}{3}+\delta^{-2}\|w(x _{0}+\delta x,T(x_{0}))-w(x_{1}+\delta x,T(x_{1}))\|_{L^{\infty}(B_{1})} \tag{5.7}\]
From the regularity of \(T\) and \(w\) on our chosen neighborhood, it follows that
\[\|q_{0}-q_{1}\|_{L^{\infty}(B_{1})}\leq\frac{2\varepsilon}{3}+C_{1}\delta^{-2} |x_{1}-x_{0}|^{\alpha} \tag{5.8}\]
for some \(C_{1}\) depending only on \(C\). Then it is clear that for \(|x_{1}-x_{0}|\) sufficiently small, we have \(\|q_{0}-q_{1}\|_{L^{\infty}(B_{1})}<\varepsilon\). By equivalence of norms on \(\mathbb{R}^{d\times d}\), this gives \(|D^{2}w(x_{0},T(x_{0}))-D^{2}w(x_{1},T(x_{1}))|=O(\varepsilon)\), and we conclude.
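For completeness, the norm equivalence used in the last step is explicit for quadratic polynomials: writing \(A=D^{2}w(x_{0},T(x_{0}))-D^{2}w(x_{1},T(x_{1}))\), which is symmetric, we have

\[\|q_{0}-q_{1}\|_{L^{\infty}(B_{1})}=\sup_{|x|\leq 1}\tfrac{1}{2}\,|x\cdot Ax|=\tfrac{1}{2}\max_{i}|\lambda_{i}(A)|=\tfrac{1}{2}\,\|A\|_{2},\]

so the \(L^{\infty}(B_{1})\) estimate controls the difference of the Hessians up to a factor of \(2\).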
We remark that \(D^{2}w(x,T(x))\) also exists in a one-sided sense for \(x\in R\), with \(D^{2}w(x,T(x))=\nu(x)\nu(x)^{T}\) for \(\nu\) as in Proposition 5.6. Thus, in light of that proposition, \(D^{2}w(x,T(x))\) is continuous on \(R\). However, due to the jump in rank, there is no possibility of continuity from \(R\) to \(\Sigma^{k}\) when \(k<d-1\).
Using the continuous dependence of the blowup, we can subsequently apply a Whitney extension argument to obtain that singular points are contained in \(C^{1}\) manifolds. Following the approach in [4], we introduce the following lemma:
**Lemma 5.14** (Whitney's extension theorem).: _Let \(K\subset\mathbb{R}^{d}\) be compact, and suppose we have a function \(f:K\to\mathbb{R}\) and a family of degree \(m\) polynomials \(p_{x}\) indexed over \(K\). If_
1. \(p_{x_{0}}(x_{0})=f(x_{0})\) _for each_ \(x_{0}\in K\)__
2. \(|D^{k}p_{x_{0}}(x_{1})-D^{k}p_{x_{1}}(x_{1})|=o(|x_{0}-x_{1}|^{m-k})\) _for_ \(x_{0},x_{1}\in K\) _and_ \(0\leq k\leq m\)_._
_Then \(f\) extends to a \(C^{m}\) function on \(\mathbb{R}^{d}\) such that \(f(x)=p_{x_{0}}(x)+o(|x-x_{0}|^{m})\) for all \(x_{0}\in K\)._
**Proposition 5.15**.: _Near a point in \(\Sigma^{k}\), \(\Sigma\) is locally contained in a \(C^{1}\) manifold of dimension \(k\). In particular, \(\Sigma\) is contained in countably many \(C^{1}\) submanifolds of dimension \(d-1\)._
Proof.: We fix a compact \(K\subset\Sigma\) for the proof. We will apply Lemma 5.14 to extend the zero function on \(K\), with second order Taylor polynomial \(q_{x_{0}}\) at \(x_{0}\in K\) given by the quadratic blowup of \(w\) at each point. Specifically, this results in
\[q_{x_{0}}(x)=\frac{1}{2}(x-x_{0})\cdot D^{2}w(x_{0},T(x_{0}))(x-x_{0})\text{ where }x_{0}\in K,x\in\mathbb{R}^{d}\]
The extension will give us a \(C^{2}\) function \(f\) on \(\mathbb{R}^{d}\) such that \(\nabla f\equiv 0\) on \(K\), after which the implicit function theorem will imply that in a neighborhood of \(x_{0}\in K\), the set \(\{\nabla f=0\}\) is contained in a \(C^{1}\) manifold of dimension \(\dim\ker D^{2}f(x_{0})=\dim\ker D^{2}w(x_{0},T(x_{0}))\).
Thus, we proceed to verify the assumptions of the lemma. First, we note that by Lemma 5.1, \(\eta\) is uniformly \(C^{0,\alpha}\) in space in a spacetime neighborhood of the interface as it passes through \(K\) in space. It follows by our uniform approximation result for the quadratic blowup at singular points, Lemma 6.13, that there is a modulus of continuity \(\sigma\) such that
\[\|r^{-2}w(x_{0}+r(x-x_{0}),T(x_{0}))-\frac{1}{2}(x-x_{0})\cdot D^{2}w(x_{0},T( x_{0}))(x-x_{0})\|_{C^{1}_{x}(B_{1})}\leq\sigma(r) \tag{5.9}\]
for any \(x_{0}\in K\). Then, if we apply this estimate to \(x_{0},x_{1}\in K\) with \(T(x_{0})\leq T(x_{1})\), we get
\[|q_{x_{0}}(x_{1})-q_{x_{1}}(x_{1})|=|\frac{1}{2}(x_{1}-x_{0})\cdot D^{2}w(x_{0 },T(x_{0}))(x_{1}-x_{0})|\leq|x_{0}-x_{1}|^{2}\sigma(|x_{0}-x_{1}|)\]
directly from (5.9) and the fact that \(w(x_{1},T(x_{0}))=0\). On the other hand, if \(T(x_{0})>T(x_{1})\), we have that \(|q_{x_{1}}(x_{0})|\leq|x_{0}-x_{1}|^{2}\sigma(|x_{0}-x_{1}|)\) from the above, and
\[|q_{x_{1}}(x_{0})-q_{x_{0}}(x_{1})|=|\frac{1}{2}(x_{1}-x_{0})\cdot(D^{2}w(x_{ 1},T(x_{1}))-D^{2}w(x_{0},T(x_{0})))(x_{1}-x_{0})|\leq o(|x_{1}-x_{0}|^{2})\]
by the continuity of the Hessian from Proposition 5.13. This completes the \(k=0\) case of (ii) in Lemma 5.14.
The verification of the \(k=1\) case of (ii) is similar, using the derivative bound from (5.9). In general, we have \(\nabla q_{x_{0}}(x)=D^{2}w(x_{0},T(x_{0}))(x-x_{0})\). Then, when \(T(x_{0})\leq T(x_{1})\), we get
\[|\nabla q_{x_{0}}(x_{1})-\nabla q_{x_{1}}(x_{1})|=|D^{2}w(x_{0},T(x_{0}))(x_{1 }-x_{0})|\leq|x_{1}-x_{0}|\sigma(|x_{1}-x_{0}|)\]
directly from (5.9) and the fact that \(\nabla w(x_{1},T(x_{0}))=0\). On the other hand, if \(T(x_{0})>T(x_{1})\), we have that
\[|\nabla q_{x_{1}}(x_{0})+\nabla q_{x_{0}}(x_{1})|=|(D^{2}w(x_{0},T(x_{0}))-D^{2}w(x_{1},T(x_{1})))(x_{1}-x_{0})|\leq o(|x_{1}-x_{0}|)\]
again by Proposition 5.13, giving that
\[|\nabla q_{x_{0}}(x_{1})-\nabla q_{x_{1}}(x_{1})|=|\nabla q_{x_{0}}(x_{1})| \leq|\nabla q_{x_{0}}(x_{1})+\nabla q_{x_{1}}(x_{0})|+|\nabla q_{x_{1}}(x_{0} )|\leq o(|x_{1}-x_{0}|)\]
Finally, the \(k=2\) case of (ii) in Lemma 5.14 is exactly continuity of the Hessian, from Proposition 5.13. Thus, the conditions of the lemma are satisfied, which completes the proof.
We stress that this result is for \(\Sigma\), and not for the corresponding subset \(\operatorname{Graph}_{T}(\Sigma)\) of the spacetime interface, where we use the notation introduced in (2.10). Since \(T\) is only known to be Holder continuous on \(\Sigma\), we obtain the weaker result that \(\operatorname{Graph}_{T}(\Sigma)\) is locally contained in \(C^{0,\alpha}\) manifolds of dimension \(d-1\). Nevertheless, we are able to apply this to establish control in Hausdorff dimension over the interface. The Hausdorff dimension of the spatial interface has previously been studied in [20] for a similar problem, where the obstacle problem satisfied by \(w\) was applied to conclude that \(\partial\Omega_{t}\) has locally finite \((d-1)\)-dimensional Hausdorff measure for each \(t\). Using the Lipschitz regularity of \(T\) near regular points and our spatial control over singular points, we can study the spacetime interface \(\operatorname{Graph}_{T}(\mathcal{O})\) for the first time and show that it has the expected Hausdorff dimension \(d\). We summarize the consequences of Proposition 5.15 with the following statements.
**Corollary 5.16**.: _We have, for \(\alpha\) as in Theorem 4.2:_
1. \(\partial\Omega_{t}\) _has finite_ \((d-1)\)_-dimensional Hausdorff measure._
2. \(\Sigma\) _has locally finite_ \((d-1)\)_-dimensional Hausdorff measure. In particular,_ \(\Sigma_{t}\) _has zero_ \((d-1)\)_-dimensional Hausdorff measure for all but countably many_ \(t\) _in_ \((0,\infty)\)_, and for a.e._ \(t\in(0,\infty)\)_,_ \(\Sigma_{t}\) _has Hausdorff dimension at most_ \(d-1-\alpha\)_._
3. \(\operatorname{Graph}_{T}(\mathcal{O})\) _has Hausdorff dimension_ \(d\)_, and decomposes as_ \(\operatorname{Graph}_{T}(R)\cup\operatorname{Graph}_{T}(\Sigma)\)_, where the first set is relatively open with locally finite_ \(d\)_-dimensional Hausdorff measure, and the second set has locally finite_ \((d-\alpha)\)_-dimensional Hausdorff measure._
Proof.: We use the fact that \(\rho\in L^{\infty}_{t}([0,\tau];BV_{x}(\mathbb{R}^{d}))\), and thus \(\Omega_{t}\) is a set of finite perimeter. In particular, we can consider the reduced boundary \(\partial^{*}\Omega_{t}\), which has finite \((d-1)\)-dimensional Hausdorff measure and contains \(R_{t}\), by the local regularity of the boundary at those points. On the other hand, \(\Sigma_{t}\) is locally contained in a \(C^{1}\) manifold of dimension \(d-1\). In fact, since \(\Sigma_{t}\) is compact, \(\Sigma_{t}\) is contained in a bounded \(C^{1}\) manifold, which then also has finite \((d-1)\)-dimensional measure, so we have
\[\mathcal{H}^{d-1}(\partial\Omega_{t})\leq\mathcal{H}^{d-1}(\partial^{*}\Omega _{t})+\mathcal{H}^{d-1}(\Sigma_{t})<\infty\]
Since \(\Sigma\) is locally contained in \(C^{1}\) manifolds of dimension \(d-1\), the \((d-1)\)-dimensional Hausdorff measure on \(\Sigma\) is locally finite. In particular, it is also \(\sigma\)-finite, which implies that there cannot be uncountably many \(t\) for which \(\Sigma_{t}\) has positive \((d-1)\) measure. The improvement to \((d-1-\alpha)\) dimension at a.e. time follows from a geometric measure theory lemma of [10]. Since \(\Sigma\) has dimension \(d-1\) and \(T\in C^{0,\alpha}_{\operatorname{loc}}(\mathcal{O})\), Corollary 7.8 of that paper directly gives the result.
For the final statement, we use the general result that the graph of a \(C^{0,\alpha}\) function on a set of Hausdorff dimension \(s\), for \(\alpha\in(0,1]\) and \(s\geq 0\), has Hausdorff dimension at most \(s+1-\alpha\). In particular, since \(T\) is locally Lipschitz on \(R\), which is open in \(\mathbb{R}^{d}\), and locally \(C^{0,\alpha}\) on \(\Sigma\), which is Hausdorff dimension at most \(d-1\), we get the local control in Hausdorff measure for the graphs of those sets. Then we can write \(\operatorname{Graph}_{T}(\mathcal{O})\) as a countable union of sets of Hausdorff dimension at most \(d\), so we conclude.
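The covering computation behind this general result is short (a sketch): if a set \(S\) with \(\dim_{\mathcal{H}}S\leq s\) is covered by \(N(r)\lesssim r^{-s-\varepsilon}\) balls of radius \(r\), then over each such ball a \(C^{0,\alpha}\) function varies by at most \(Cr^{\alpha}\), so its graph there is covered by \(O(r^{\alpha-1})\) balls of radius comparable to \(r\). Hence the graph is covered by

\[N(r)\cdot O(r^{\alpha-1})\lesssim r^{-(s+1-\alpha)-\varepsilon}\]

balls of radius comparable to \(r\), giving Hausdorff dimension at most \(s+1-\alpha\).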
We remark that a natural open question is whether the space-time interface has locally finite \(d\)-dimensional Hausdorff measure near singular points. This would follow, for example, if \(T\) were known to be uniformly Lipschitz near \(\Sigma\).
### Speculative results
We finish our treatment of the singular set by noting that stronger generic control over \(\Sigma_{t}\) is possible with slightly stronger regularity than currently known: namely, when \(T\) is Lipschitz.
**Proposition 5.17**.: _If \(T\in C^{0,1}_{\operatorname{loc}}(\mathcal{O})\), then \(\Sigma_{t}\) has \((d-2)\)-Hausdorff measure 0 for a.e. \(t\in(0,\infty)\)._
Figure 3. A cylindrical patch with a nearly cylindrical hole. As the hole contracts, singular points are expected to occur near the axis (dotted). Due to variation in the diameter of the hole, however, singular points may occur at different times. Proposition 5.15 confirms that we have the expected spatial regularity for the set \(\Sigma\subset\mathbb{R}^{d}\) of points which are singular at any time.
Proof.: Here we follow an argument by [14], originally applied to a hitting time for the constant Laplacian obstacle problem with a time-varying condition on the fixed boundary. Since \(T\) is Lipschitz, Proposition 4.6 in [14] implies that for any compact subset \(K\) of \(\Sigma\),
\[\limsup_{x,y\in K,|x-y|\to 0}\frac{|T(x)-T(y)|}{|x-y|}=0\]
Then from the coarea formula, we have
\[\int_{\Sigma^{d-1}}|\nabla T_{|_{\Sigma^{d-1}}}|\,d\mathcal{H}^{d-1}=\int_{0}^{\infty}\mathcal{H}^{d-2}(T_{|_{\Sigma^{d-1}}}^{-1}(t))\,dt=\int_{0}^{\infty}\mathcal{H}^{d-2}(\Sigma_{t}^{d-1})\,dt\]
Then the integrand on the left is \(0\), so \(\Sigma_{t}^{d-1}\) has \((d-2)\)-Hausdorff measure \(0\) for a.e. \(t\). On the other hand, \(\Sigma_{t}^{k}\) has \((d-2)\)-Hausdorff measure \(0\) for \(k<d-2\) and all \(t\), while \(\Sigma_{t}^{d-2}\) has positive \((d-2)\)-Hausdorff measure for at most countably many \(t\), so the result follows.
We finish our treatment of the regular set by investigating the regularity improvement possible under the assumption that no singular points occur at some time. As suggested by Remark 5.7, an assumption of this form is required to go beyond the regularity established in Corollary 5.10. The idea here will be to apply the regularity of \(\nu\) from Proposition 5.6 in conjunction with global Schauder estimates to prove time regularity of \(p\) and higher spatial regularity of \(T\).
As a preliminary step, we show the relationship between \(\nabla p\) and \(\nabla T\).
**Lemma 5.18**.: _Suppose that for some open \(U\subset R\), we have that \(\nabla p\) is continuous in spacetime on \((U\times(t_{0},t_{1}))\cap\overline{\{(x,t):\rho(x,t)=1\}}\) for some \(t_{0},t_{1}\) with \(\inf T(U)<t_{0}<t_{1}<\sup T(U)\). Then \(T\) is continuously differentiable on \(U\cap T^{-1}((t_{0},t_{1}))\) with \(\nabla T(x)=-\frac{\nabla p(x,T(x))}{|\nabla p(x,T(x))|^{2}}\)._
Proof.: Let \(e\) be a vector with positive component in the inward normal direction to \(\Omega_{T(x)}\) at \(x\); that is, with \(e\cdot\nu(x)<0\). Then, we have
\[\nabla w(x+he,T(x))=\operatorname{sgn}_{+}(T(x)-T(x+he))\int_{T(x+he)}^{T(x)}\nabla p(x+he,t)\,dt\]
If we divide both sides by \(h\) and let \(h\to 0\), then the left side converges to \((\nu(x)\cdot e)\nu(x)\), from the quadratic blowup. If the right side is nonzero, we rewrite it as
\[\frac{T(x)-T(x+he)}{h}\nabla p(x,T(x))+\frac{1}{h}\int_{T(x+he)}^{T(x)}\left(\nabla p(x+he,t)-\nabla p(x,T(x))\right)dt\]
Using the Lipschitz continuity of \(T\) from Corollary 5.10 and the spacetime continuity of \(\nabla p\), the second term vanishes as \(h\to 0\). As we have already seen, \(\nabla p\) cannot vanish on the interface due to the Hopf lemma, so in the limit, we get
\[(\nu(x)\cdot e)\nu(x)=-\partial_{e}T(x)\nabla p(x,T(x))\]
Here, \(\partial_{e}T(x)\) refers to the one-sided derivative of \(T\) at \(x\) in direction \(e\). Since \(\nabla p(x,T(x))\) has the same direction as \(-\nu(x)\), we get that \(\partial_{e}T(x)=\frac{\nu(x)\cdot e}{|\nabla p(x,T(x))|}\).
Then, it is an elementary result that a continuous function on \(\mathbb{R}\) with continuous left derivative is differentiable. Applying it here, we get that \(T\) has all two-sided directional derivatives, and we can read from the formula that we must have
\[\nabla T(x)=\frac{\nu(x)}{|\nabla p(x,T(x))|}=-\frac{\nabla p(x,T(x))}{|\nabla p(x,T(x))|^{2}}\]
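As a quick consistency check of the sign: taking \(e=-\nu(x)\), which points into \(\Omega_{T(x)}\), the formula gives

\[\partial_{-\nu(x)}T(x)=\frac{\nu(x)\cdot(-\nu(x))}{|\nabla p(x,T(x))|}=-\frac{1}{|\nabla p(x,T(x))|}<0,\]

matching the fact that points inside the patch were reached strictly earlier.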
**Proposition 5.19**.: _If for some interval \((t_{0},t_{1})\), we have that \(\Sigma_{t}\) is empty for every \(t\in(t_{0},t_{1})\), then \(T\in C_{\mathrm{loc}}^{1,1/2-\varepsilon}(\Omega_{t_{1}}\setminus\overline{ \Omega_{t_{0}}})\) for every \(\varepsilon\in(0,\frac{1}{2})\)._
Proof.: From the previous lemma, we need to show spacetime continuity of \(\nabla p\) to establish differentiability of \(T\). We do this using the \(C^{1,\alpha}\) global Schauder estimates, which applied to a function \(u\) on a \(C^{1,\alpha}\) domain \(\Omega\), give that
\[\|u\|_{C^{1,\alpha}(\overline{\Omega})}\leq C(\|\Delta u\|_{L^{\infty}(\Omega)} +\|u\|_{C^{1,\alpha}(\partial\Omega)})\]
for some \(C\) which depends only on \(\alpha\) and \(\Omega\). In our case, \(C\) will actually be locally uniform in \(t\) for \(\Omega_{t}\), since the global Schauder estimates are proved by patching interior and boundary estimates, and we can cover the boundary with finitely many balls in which it evolves as a uniformly \(C^{1,1}\) graph for some time interval.
Then, specifically, we will apply the Schauder estimate to \(p(t)-p(s)\) on \(\Omega_{s}\), for \(t>s\), to bound \(\|\nabla p(t)-\nabla p(s)\|_{L^{\infty}(\Omega_{s})}\) in terms of \(|t-s|\). Since \(n\in C^{0,1-}_{t}L^{\infty}_{x}\) by Lemma 2.2, the work will be in controlling \(p(t)\) on \(\partial\Omega_{s}\). Intuitively, since \(p(t)\) and its tangential derivative vanish on \(\partial\Omega_{t}\), we expect that if the free boundary has not rotated too much between times \(s\) and \(t\), then these should be close to \(0\) on \(\partial\Omega_{s}\). We make this quantitative using Proposition 5.6.
First, since we have a locally uniform in time bound on \(\nabla p\), we get by the radial supersolution that \(D(\partial\Omega_{s},\partial\Omega_{t})\leq C|t-s|\) for some locally uniform in time constant, where \(D\) denotes Hausdorff distance as in Definition 6.5. Then, since \(p(t)\) vanishes on \(\partial\Omega_{t}\), we integrate along shortest-distance paths and use the gradient bound again to conclude that \(\|p(t)-p(s)\|_{L^{\infty}(\partial\Omega_{s})}=\|p(t)\|_{L^{\infty}(\partial \Omega_{s})}\leq C|t-s|\) for some uniform \(C\).
Next, we bound the tangential part of \(\nabla p(t)\) on \(\partial\Omega_{s}\). Recalling our definition of \(\nu(x)\) as the outward unit normal to \(\Omega_{T(x)}\) at \(x\), we denote the projection onto the tangential part as \(P^{\perp}_{\nu(x)}\). Then for \(x\in\partial\Omega_{s}\), we let \(\tilde{x}\in\partial\Omega_{t}\) be the distance minimizer so that \(|x-\tilde{x}|\leq C|t-s|\), and we have
\[|P^{\perp}_{\nu(x)}\nabla p(x)|\leq|(P^{\perp}_{\nu(\tilde{x})}-P^{\perp}_{ \nu(x)})\nabla p(\tilde{x})|+|P^{\perp}_{\nu(x)}(\nabla p(\tilde{x})-\nabla p( x))|\]
where all pressures are at time \(t\), and we use that the tangential derivative of \(p(t)\) on \(\partial\Omega_{t}\) vanishes. The first term is controlled by the continuity of \(\nu\) and our uniform bound on the pressure gradient, so by Proposition 5.6, it contributes \(C|t-s|^{1/2}\). The second term is controlled by the \(C^{1,\alpha}\) regularity of \(p\), so it contributes \(C|t-s|^{\alpha}\). Thus, choosing \(\alpha>\frac{1}{2}\), we have \(\|p(t)-p(s)\|_{C^{1}(\partial\Omega_{s})}\leq C|t-s|^{1/2}\).
Finally, we improve this to Holder by interpolation. Specifically, since \(p(\tau)\) is \(C^{1,\alpha}\) on \(\Omega_{\tau}\), uniformly in \(\tau\), for any \(\alpha\in(0,1)\), we have
\[\frac{|\nabla p(t,x)-\nabla p(t,y)|}{|x-y|^{\alpha}}\leq C(\alpha)\]
We want the Holder seminorm of \(\nabla p(t)\) on \(\partial\Omega_{s}\) to be small, so at small scales, we rearrange this to
\[\frac{|\nabla p(t,x)-\nabla p(t,y)|}{|x-y|^{\beta}}\leq C(\alpha)|x-y|^{\alpha-\beta}\]
for \(\beta<\alpha\) to be chosen. For large scales, we use the \(L^{\infty}\) bound for the tangential part of \(\nabla p(t)\) on \(\partial\Omega_{s}\), which gives \(\frac{C|t-s|^{1/2}}{|x-y|^{\beta}}\) on the right hand side. Optimizing, the critical scale is \(|x-y|\sim|t-s|^{1/2\alpha}\), and the \(C^{1,\beta}\) seminorm will scale as \(C(\alpha)|t-s|^{\frac{\alpha-\beta}{2\alpha}}\). In particular, this shows that by choosing \(\alpha\) close to \(1\) and \(\beta\) close to \(0\), we can get arbitrarily close to \(\frac{1}{2}\), so for each \(\varepsilon>0\), we have some \(\beta>0\) and some \(C\) such that
\[\|p(t)-p(s)\|_{C^{1,\beta}(\partial\Omega_{s})}<C|t-s|^{1/2-\varepsilon}\]
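Explicitly, the two regimes balance at the critical scale:

\[C(\alpha)|x-y|^{\alpha-\beta}=\frac{C|t-s|^{1/2}}{|x-y|^{\beta}}\iff|x-y|\sim|t-s|^{\frac{1}{2\alpha}},\]

and at that scale both bounds equal \(C|t-s|^{\frac{\alpha-\beta}{2\alpha}}\); letting \(\alpha\to 1\) and \(\beta\to 0\) sends the exponent \(\frac{\alpha-\beta}{2\alpha}\) to \(\frac{1}{2}\).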
Then, combining this with the regularity of the nutrient from Lemma 2.2, we get \(\|p(t)-p(s)\|_{C^{1,\beta}(\overline{\Omega}_{s})}\leq C|t-s|^{1/2-\varepsilon}\) from the boundary Schauder estimate. We conclude that for points \((x,t)\) and \((y,s)\) in the region with \(t>s\), we have
\[|\nabla p(x,t)-\nabla p(y,s)|\leq|\nabla p(x,t)-\nabla p(y,t)|+|\nabla p(y,t)- \nabla p(y,s)|\leq C|x-y|^{\alpha}+C|t-s|^{1/2-\varepsilon} \tag{5.10}\]
for any fixed \(\alpha\in(0,1)\) and \(\varepsilon>0\), which proves the spacetime continuity of \(\nabla p\).
Then, by the lemma, \(\nabla T(x)=-\frac{\nabla p(x,T(x))}{|\nabla p(x,T(x))|^{2}}\), where we have \(|\nabla p(x,T(x))|\) locally uniformly bounded away from \(0\) by the Hopf lemma. In particular, it follows from (5.10) and the Lipschitz continuity of \(T\) that
\[|\nabla p(x,T(x))-\nabla p(y,T(y))|\leq C|x-y|^{1/2-\varepsilon}\]
so we obtain that \(\nabla T\in C^{0,1/2-\varepsilon}_{\mathrm{loc}}\) directly from its formula.
This implies that the free boundary has \(C^{2,1/2-\varepsilon}\) regularity at the relevant times. Subsequently, the regularity of \(\nu\) in Proposition 5.6 can be improved using second order approximations to the free boundary, leading to a minor improvement in the Holder exponent.
## 6. Appendix: Obstacle problem with \(C^{0,\alpha}\) source
In this section, we collect several known facts about obstacle problems with Holder continuous data. Many results for the model obstacle problem with constant source carry over to the Holder continuous case with minor modifications, as noted in [10] and [12]. We will cite several results from [11], [13], and [14], which offer careful treatments of this topic.
Let us consider solutions \(u\) to the obstacle problem
\[\Delta u=(1+f)\chi_{\{u>0\}} \tag{6.1}\]
on \(B_{1}\), where \(f\) is known a priori to vanish on the free boundary \(\Gamma(u)=\partial\{u>0\}\). When \(f\) is sufficiently regular, this equation has similar local behavior to the model case where \(f\equiv 0\); in particular, we make the assumption \(f\in C^{0,\alpha}(B_{1})\) for some \(\alpha\in(0,1)\). Regarding notation, we write \(\Omega(u)=\{u>0\}\), \(\Lambda(u)=\{u=0\}\), and in many cases we will refer to a tuple \((u,f)\) as the solution to (6.1). We take \(\lambda=\inf_{B_{1}}(1+f)\), \(\mu=\sup_{B_{1}}(1+f)\), and we will have the standing assumption that \(\lambda>\frac{1}{2}\), which holds if \(0\in\Gamma(u)\) and \([f]_{C^{0,\alpha}(B_{1})}\) is sufficiently small.
On the free boundary, we have \(u=0,\nabla u=0\), so we should expect \(u\) to have quadratic growth away from the free boundary in the positive set. Of course, \(u\) is not regular enough to admit a second order Taylor expansion due to the jump in the second derivatives along the free boundary, but nevertheless, we recover several results to the same effect:
**Lemma 6.1** (Quadratic nondegeneracy, [11] Thm. 2.1).: _If \(0\in\overline{\Omega(u)}\), then for all \(r<1\),_
\[\sup_{B_{r}}u\geq\frac{\lambda}{2d}r^{2}\]
**Lemma 6.2** (Quadratic bound, [11] Thm. 2.4).: _If \(0\in\Gamma(u)\), then for all \(r<\frac{1}{2}\),_
\[\sup_{B_{r}}u\leq C(d)\mu r^{2}\]
**Lemma 6.3** (Regularity up to the free boundary, [11] Thm. 2.3).: _If \(0\in\Gamma(u)\), then \(\|u\|_{C^{1,\beta}(B_{1})}\leq C(d,\beta)\mu\) for all \(\beta\in(0,1)\)._
We adopt the following notation for the quadratic rescalings:
\[u_{r}(x)=r^{-2}u(rx) \tag{6.2}\]
\[u_{0}(x)=\lim_{r\to 0^{+}}u_{r}(x),\text{ provided this limit exists} \tag{6.3}\]
The combined results of Lemmas 6.1, 6.2, and 6.3 can be used to derive Lemma 2.10: the \(C^{1,\beta}\) compactness of the quadratic blowup sequence \((u_{r})\). As we discuss in the main paper with Lemma 5.2, this compactness improves to convergence of the blowup sequence when \(f\in C^{0,\alpha}\), and we classify points as regular or singular based on the blowup profile. In general, regular points can be identified at finite scales, using criteria such as Lemma 5.3.
The quadratic blowup proves to be a key tool in understanding local behavior of the free boundary. For regular points, we use comparison and stability results to show flatness of the free boundary, which leads to regularity of the boundary. For singular points, we use monotonicity formulas and compactness results to show that they can be locally contained in \(C^{1}\) manifolds. Since the treatment of these cases diverges considerably, we will split them into the next two sections.
### Regular points
In this section, we review several results from [11] which connect the regularity of the free boundary at regular points and the scale at which this regularity is achieved with the regularity of the source term and the scale at which the zero set becomes large. The regularity of the free boundary can be summarized as follows:
**Lemma 6.4** ([4] Thm. 7.2).: _If \(f\in C^{0,\alpha}\) with \(\alpha\in(0,1]\), then in a neighborhood of a regular point, the free boundary is a \(C^{1,\alpha}\) graph._
In light of this result, we allow the case \(\alpha=1\) for the rest of this subsection.
For the main paper, we require a quantified version of this lemma, in order to apply it uniformly to the family of obstacle problems \(w(\cdot,t)\). Thus, we will retrace Blank's approach in this section while keeping track of its dependencies.
**Definition 6.5**.: Let \(S\subset\mathbb{R}^{d}\) be a compact set. We define the modulus of flatness,
\[\theta(r)=\sup_{0<\rho\leq r}\sup_{x\in S}\inf_{L}\frac{D(L\cap B_{\rho}(x),S \cap B_{\rho}(x))}{\rho}\]
where the inner infimum is over all hyperplanes \(L\) containing \(x\), and \(D\) denotes Hausdorff distance:
\[D(A,B)=\max(\sup_{x\in A}\operatorname{dist}(x,B),\sup_{y\in B}\operatorname{ dist}(y,A))\]
We say that \(S\) is \(\delta\)-Reifenberg flat if there exists \(R\) such that \(\theta(r)\leq 2\delta\) for all \(r<R\), and Reifenberg vanishing if \(\theta(r)\to 0\) as \(r\to 0\).
**Lemma 6.6** ([4] Theorem 6.7).: _Let \(S\) be a compact Reifenberg vanishing set with modulus of flatness \(\theta\) satisfying \(\int_{0}^{1}\frac{\theta(r)}{r}\,dr<\infty\). Then there exist constants \(C_{0},C_{1}\) such that if \(\int_{0}^{\rho}\frac{\theta(r)}{r}\,dr<C_{0}\), then there exists a coordinate system in which \(S\cap B_{\rho/2}\) is the graph of a \(C^{1}\) function \(g\), such that \(\nabla g\) is continuous with modulus of continuity \(C_{1}\int_{0}^{r}\frac{\theta(s)}{s}\,ds\)._
**Lemma 6.7** ([4] Theorem 7.1).: _Suppose that \(u\) solves the obstacle problem \(\Delta u=f\chi_{\{u>0\}}\) in \(B_{1}\) with \(\lambda\leq f\leq\mu\) and \(f\) Dini continuous with modulus \(\sigma\). If \(0\) is a regular point and the free boundary is \(\delta\)-Reifenberg flat in \(B_{3/4}\) for some sufficiently small \(\delta\), then the modulus of flatness of the free boundary inside \(B_{1/2}\) is controlled by \(C\sigma(r)\)._
In particular, these two results imply the \(C^{1,\alpha}\) regularity of the free boundary near regular points when \(f\in C^{0,\alpha}\) with \(f(0)=1\), at a scale depending on \([f]_{C^{0,\alpha}(B_{1})}\) and the scale at which the \(\delta\)-Reifenberg flatness is achieved. For this, we have another result from Blank:
**Lemma 6.8** ([4] Theorem 6.4).: _Let \(\varepsilon\in(0,\frac{1}{4})\), and suppose we have \(f\) with \(\lambda\leq f\leq\mu\) and \(u\) a solution to the obstacle problem \(\Delta u=f\chi_{\{u>0\}}\) in \(B_{1}\). If \(\mu-\lambda\) is sufficiently small, then there exist constants \(r_{0},\tau,\delta\in(0,1)\) depending on \(d,\mu,\lambda,\varepsilon\) for which the following holds:_
_If for some \(t\leq r_{0}\), we have_
\[\frac{|B_{t}\cap\{u=0\}|}{|B_{t}|}>\varepsilon,\]
_then \(\overline{B_{\tau t}}\cap\partial\{u>0\}\) is \(\delta\)-Reifenberg flat._
_Moreover, as \(\mu-\lambda\to 0\), \(\delta\to 0\). In particular, if \(f\) is continuous, \(\delta\) can be taken to be arbitrarily small (with all parameters now also depending on the modulus of continuity of \(f\))._
To be more precise, suppose \(u\) solves the obstacle problem \(\Delta u=f\chi_{\{u>0\}}\) in \(B_{1}\) with \(f\) taking values in \([\lambda,\mu]\), and write \(u_{c}\) for the solution to the obstacle problem \(\Delta u_{c}=c\chi_{\{u_{c}>0\}}\) such that \(u_{c}|_{\partial B_{1}}=u\). Then \(\{u_{\lambda}=0\}\subset\{u=0\}\subset\{u_{\mu}=0\}\). Moreover, if \(0\) is a regular point for \(u\), then there exists \(c\in[\lambda,\mu]\) such that \(0\) is a regular point for \(u_{c}\). Then Blank's uniform stability theorem for regular points ([4] Theorem 5.4) gives that in \(B_{1/2}\), there is a universal \(C\) such that for any \(c^{\prime}\), we have \(d(FB(u_{c}),FB(u_{c^{\prime}}))\leq C|c-c^{\prime}|\) where \(FB(v)\) denotes the free boundary of \(v\). Then we get flatness of \(FB(u)\) by trapping it between \(FB(u_{\lambda})\) and \(FB(u_{\mu})\) and using the stability and \(C^{1,\alpha}\) regularity of the constant-source free boundaries. As we zoom in, the \(C^{1,\alpha}\) seminorm goes to \(0\), and if \(f\) is continuous, \(|\mu-\lambda|\to 0\), so we can get \(\delta\)-Reifenberg flatness with arbitrarily small \(\delta\). In particular, the \(C^{1,\alpha}\) seminorm is uniformly bounded, depending only on the scale at which the density of the zero set is sufficiently large, and the rate at which \(|\mu-\lambda|\to 0\) depends only on the modulus of continuity of \(f\). Thus, we can replace the hypothesis of \(\delta\)-Reifenberg flatness in Lemma 6.7, to conclude:
**Lemma 6.9**.: _Suppose that \((u,f)\) solve (6.1) with \(f\in C^{0,\alpha}\) for \(\alpha\in(0,1]\). Then there exist \(r_{0},\varepsilon_{0}\) such that if \(0\) is a free boundary point and \(\frac{|B_{t}\cap\{u=0\}|}{|B_{t}|}>\varepsilon_{0}\) for some \(t\leq r_{0}\), then there exists \(r=r(t,[f]_{C^{0,\alpha}})\) such that the free boundary is a \(C^{1,\alpha}\) graph in \(B_{r}\), with \(C^{1,\alpha}\) seminorm controlled by \([f]_{C^{0,\alpha}(B_{1})}\) and \(r\)._
### Singular points
In this section, we use monotonicity formulas to study the continuity of the blowup limit at singular points and the rate of convergence for the blowup sequence. First, we have the following result:
**Lemma 6.10** ([12], Thm 5).: _Restricted to the singular set, \(D^{2}u\) is continuous with a logarithmic modulus of continuity. In particular, in a neighborhood of a singular point, the singular set is contained in a \(C^{1,\log}\) manifold of dimension \(\dim\ker D^{2}u\)._
The result of [12] is obtained using an epiperimetric inequality to control the Weiss monotonicity formula, introduced in [13]. For simplicity, we will instead consider the related Monneau monotonicity formula, at the cost of losing the explicit logarithmic modulus of continuity. The following result is part of the proof of [10] Theorem 1.9.
**Lemma 6.11**.: _Define_
\[\Xi_{u}^{q}(r)=r^{-(d+3)}\int_{\partial B_{r}}(u-q)^{2}=\int_{\partial B_{1}}( u_{r}-q)^{2} \tag{6.4}\]
_where \(u\) solves (6.1) with \(0\) as a singular point, and \(q\) is a quadratic form \(q(x)=\frac{1}{2}x\cdot Qx\) with \(Q\geq 0\), \(\operatorname{tr}Q=1\). Then_
\[\frac{d}{dr}\Xi_{u}^{q}(r)\geq-Cr^{\alpha-1}\]
_where \(C=C([f]_{C^{0,\alpha}})\). In particular, the limit \(\Xi_{u}^{q}(0+)\) exists._
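Integrating the derivative bound from \(0\) to \(\delta\) gives the form used below in the proof of Lemma 6.13:

\[\Xi_{u}^{q}(0+)\leq\Xi_{u}^{q}(\delta)+\frac{C}{\alpha}\,\delta^{\alpha}\qquad\text{for all }\delta\in(0,1).\]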
As a corollary, we can show that near a singular point, \(u\) approximates a global solution at a uniform scale. This extends Lemma 13 of [11] to the \(C^{0,\alpha}\) source case.
We break the proof into two steps. First, we will prove the following slightly weaker claim:
**Lemma 6.12**.: _Let \(u\) solve (6.1) with 0 as a singular point. Then for every \(\varepsilon>0\), there exists \(\delta=\delta(\varepsilon,[f]_{C^{\alpha}})\) such that \(\|u_{\delta}-q\|_{C^{1}(B_{1})}<\varepsilon\), for some \(q\) of the form \(q(x)=\frac{1}{2}x\cdot Qx\) with \(Q\) a positive semidefinite matrix of trace 1. Here, we use the notation defined in (6.2)._
Proof.: Suppose for contradiction that we can find a sequence \(v^{k}\) solving \(\Delta v^{k}=(1+f^{k})\chi_{\{v^{k}>0\}}\) on \(B_{1}\) such that the result fails along the sequence \(v^{k}_{1/k}\). That is, for each \(k\), and every \(q\) of the form in the statement of the lemma,
\[\|v^{k}_{1/k}-q\|_{C^{1}(B_{1})}>\varepsilon \tag{6.5}\]
Then the sequence \(v^{k}_{1/k}\), defined on the expanding balls \(B_{k}\), converges along a subsequence on all compact sets in \(C^{1}\), to some \(v^{\infty}\). Since the \(f^{k}\) are uniformly \(C^{\alpha}\), the sequence \(f^{k}(\frac{x}{k})\) converges locally uniformly to 0. It follows that \(v^{\infty}\) is a nonnegative solution to \(\Delta v^{\infty}=\chi_{\{v^{\infty}>0\}}\) on \(\mathbb{R}^{d}\) with \(v^{\infty}(0)=0\).
Next, we show that the zero set of \(v^{\infty}\) has empty interior. Suppose otherwise, so that we have some \(B_{r}(x)\subset B_{1}\) such that \(v^{\infty}\equiv 0\) on \(B_{r}(x)\). This implies that \(v^{k}_{1/k}\) is \(o(1)\) on \(\partial B_{r}(x)\) as \(k\to\infty\), and an application of the nondegeneracy bound, Lemma 6.1, yields that \(v^{k}_{1/k}\equiv 0\) on \(B_{r/2}(x)\) for \(k\) sufficiently large. But then the zero set of \(v^{k}\) occupies a fixed fraction of a ball around the free boundary point \(0\), so for \(k\) sufficiently large, Lemma 6.9 would make the free boundary of \(v^{k}\) a \(C^{1,\alpha}\) graph near \(0\); in particular, \(0\) would be a regular point of \(v^{k}\), contradicting our assumption that it is singular.
Then, since the zero set of \(v^{\infty}\) has empty interior, all points in the zero set are singular free boundary points, and \(v^{\infty}\) is twice differentiable at those points. In particular, we get that \(v^{\infty}\) solves \(\Delta v^{\infty}=1\). As noted in the remarks after Lemma 13 of [11], this, along with the quadratic growth estimate, Lemma 6.2, implies that \(v^{\infty}\) is a quadratic polynomial of the type in the statement above. Thus, taking \(q=v^{\infty}\), we get a contradiction to (6.5) for \(k\) sufficiently large, which completes the proof.
**Lemma 6.13**.: _For every \(\varepsilon>0\), there exists \(\delta=\delta(\varepsilon,[f]_{C^{0,\alpha}(B_{1})})\) such that \(\|u_{\delta}-u_{0}\|_{C^{1}(B_{1})}<\varepsilon\), where we use the notation of (6.2). Equivalently, \(\|u-u_{0}\|_{L^{\infty}(B_{\delta})}=o(\delta^{2})\) and \(\|\nabla u-\nabla u_{0}\|_{L^{\infty}(B_{\delta})}=o(\delta)\) as \(\delta\to 0\), where the little \(o\) depends only on \([f]_{C^{0,\alpha}(B_{1})}\)._
Proof.: Let \(q\) be the quadratic form given by the previous lemma. We apply Monneau's monotonicity formula, \(\Xi_{u}^{q}\), defined in (6.4). The derivative bound from Lemma 6.11 gives
\[\Xi_{u}^{q}(0)-\Xi_{u}^{q}(\delta)\leq C\delta^{\alpha}\]
We have \(\|u_{\delta}-q\|_{C^{1}(B_{1})}<\varepsilon\) from the lemma, so it follows that \(\|u_{\delta}-q\|_{L^{2}(\partial B_{1})}<C\varepsilon\) for some dimensional constant. Then \(\|u_{0}-q\|_{L^{2}(\partial B_{1})}^{2}\leq C\varepsilon^{2}+C\delta^{\alpha}\). But now we recall that \(u_{0},q\) are both quadratic forms, and so by equivalence of norms on \(\mathbb{R}^{d\times d}\), there exists a dimensional constant for which \(\|u_{0}-q\|_{C^{1}(B_{1})}\leq C\|u_{0}-q\|_{L^{2}(\partial B_{1})}\). Combining our estimates for \(\|u_{0}-q\|_{C^{1}(B_{1})}\) and \(\|u_{\delta}-q\|_{C^{1}(B_{1})}\), we conclude.
|
2310.20304 | Advancing Fluid Dynamics Stability Analysis: Construction of Lyapunov Functions via the Generalized Kinetic Energy Approach | The energy method, also known as the Reynolds-Orr equation, is widely utilized in predicting the unconditional stability threshold of shear flows owing to the zero contribution of nonlinear terms to the time derivative of perturbation kinetic energy. However, it often underestimates the critical Reynolds numbers compared to experimental measurements. On the other hand, linear stability analysis tends to yield impractically high limits due to the occurrence of subcritical transitions. A novel methodology is introduced to enhance and validate the generalized kinetic energy formulation, aiming to provide a more accurate estimation of transition. This method considers the influence of nonlinear terms in calculating the threshold amplitude. The efficacy of this approach is showcased through the utilization of basic low-order turbulence models and the Poiseuille flow as illustrative examples. Through the proposed technique, the objective is to bridge the disparity between theoretically predicted critical Reynolds numbers and experimental observations, thus providing a more precise evaluation of shear flow stability. This research contributes to the advancement of stability analysis methods, offering practical implications for diverse fluid flow scenarios. | Péter Tamás Nagy | 2023-10-31T09:17:38Z | http://arxiv.org/abs/2310.20304v1 | # Advancing Fluid Dynamics Stability Analysis: Construction of Lyapunov Functions via the Generalized Kinetic Energy Approach
###### Abstract
The energy method, also known as the Reynolds-Orr equation, is widely utilized in predicting the unconditional stability threshold of shear flows owing to the zero contribution of nonlinear terms to the time derivative of perturbation kinetic energy. However, it often underestimates the critical Reynolds numbers compared to experimental measurements. On the other hand, linear stability analysis tends to yield impractically high limits due to the occurrence of subcritical transitions.
A novel methodology is introduced to enhance and validate the generalized kinetic energy formulation, aiming to provide a more accurate estimation of transition. This method considers the influence of nonlinear terms in calculating the threshold amplitude. The efficacy of this approach is showcased through the utilization of basic low-order turbulence models and the Poiseuille flow as illustrative examples.
Through the proposed technique, the objective is to bridge the disparity between theoretically predicted critical Reynolds numbers and experimental observations, thus providing a more precise evaluation of shear flow stability. This research contributes to the advancement of stability analysis methods, offering practical implications for diverse fluid flow scenarios.
## 1 Introduction
Up to a specific Reynolds number, it is widely acknowledged that most fluid dynamic systems are unconditionally stable (Reynolds 1895; Orr 1907). However, beyond this threshold, the behavior of the fluid remains an open question. In the 19th century, Lord Kelvin (F.R.S. 1887) suggested that the stability threshold amplitude decreases as viscosity approaches zero: "... the steady motion is stable for any viscosity, however small; and that the practical unsteadiness pointed out by Stokes forty-four years ago and so admirably investigated experimentally five or six years ago by Osborne Reynolds, is to be explained by limits of stability becoming narrower and narrower the smaller is the viscosity." Unfortunately, determining this permissible perturbation level of the laminar state has proven to be a challenging problem. The only exception is the well-known linear stability limit, beyond which the laminar state's region of attraction vanishes. While calculating this limit is computationally intensive for general geometries, it is feasible. However, for many practical applications, this limit is excessively high, if not infinite.
The initial solutions for the unconditional stability limit of plane Poiseuille flow were derived by Reynolds (1895) and Orr (1907). They aimed to minimize the Reynolds number at which the kinetic energy of the disturbance does not grow. This optimization (via the Euler-Lagrange equation) led to a general eigenvalue problem, where the Reynolds number acted as the eigenvalue. Below the critical value, any perturbation decays exponentially. Initially, solutions were obtained for the two-dimensional problem owing to the complexity of the three-dimensional one. However, the computed value, approximately \(Re=88\), based on the Reynolds number defined by the maximum velocity and half the channel gap, was an order of magnitude smaller than the experimentally observed value. Later, Joseph & Carmi (1969) tackled the three-dimensional problem and revealed that the kinetic energy of spanwise oscillating perturbations could grow at a significantly smaller Reynolds number, specifically 49.55. Additionally, they demonstrated that the most unstable perturbations of two-dimensional base flows were those oscillating exclusively in the spanwise direction, rather than in the streamwise one. More recently, another proof of the same statement was published by Xiong & Chen (2019).
Recently, Falsaperla _et al._ (2019) challenged this established understanding, demonstrating that by redefining the energy norm, purely streamwise oscillating waves emerge as the most critical. Their findings were in excellent agreement with experiments conducted by Prigent _et al._ (2003). Moreover, their results aligned with the work of Moffatt (1990), who established the stability of flow perturbed by spanwise oscillating waves. This latter statement was verified by numerical experiments (Lundbladh _et al._, 1994; Reddy _et al._, 1998) in which the evolution of perturbed flows was simulated numerically. They found that additional noise was needed in the initial perturbation in the case of purely streamwise or spanwise oscillating flows. A further generalization of the kinetic energy was recently investigated by Nagy & Kulcsar (2023), who introduced multipliers in the definition of kinetic energy for all velocity components. Addressing the three-dimensional domain, they predicted a critical Reynolds number roughly 25% larger for both Couette and Poiseuille flows. Their analysis indicated that critical perturbations manifest as tilted waves in both flow configurations. However, it is worth noting that their study neglected a non-linear term in the pressure calculation, limiting its validity to a specific perturbation amplitude; this limit, however, was not determined. The present research is a continuation of their idea. The definition of kinetic energy is further generalized, and the developed method can predict the threshold amplitude. The definition of this generalized kinetic energy is equivalent to that of Nerli _et al._ (2007), who redefined the norm by a perturbation and found a relatively accurate threshold amplitude in the case of low-dimensional models of shear flows.
An alternative approach to enhance the Reynolds-Orr method involves the utilization of enstrophy. Synge (1938) explored this method, and more recently, Fraternale _et al._ (2018) applied it, predicting a significantly larger critical Reynolds number of \(Re_{\rm crit}=155\) for the two-dimensional case. Notably, this value is approximately double the energy limit for the same configuration. Unfortunately, the non-linear term in the vorticity equation cannot be eliminated in the case of three-dimensional flows. Furthermore, Nagy (2022) showed that for three-dimensional systems the predicted critical Reynolds number is smaller than that obtained from the original Reynolds-Orr equation, even if the non-linear terms are neglected.
Another way of improving the original energy method involves constraining the potential perturbation field rather than altering the definition itself. Originally, such a constraint was that the velocity field must satisfy the continuity equation, implying divergence-free velocity in the context of incompressible flow. Nagy _et al._ (2023) observed that the solution of the Reynolds-Orr equation fails to meet the compatibility condition essential for a smooth, physically realistic solution. They introduced this condition as a constraint into the problem; however, their ultimate finding was that while the solution of the Reynolds-Orr equation
does not meet the condition, there exist velocity fields close to the solution that do fulfill the compatibility condition. This implies that the condition subtly modifies the original result. Another form of restriction was applied in the receptivity problem of compressible boundary layers by Kamal _et al._ (2023). They limited the possible excitation fields to physically relevant cases and achieved excellent agreement with simulation results. However, the drawback of their approach lies in the subjective nature of selecting physically relevant perturbations, which can be highly dependent on the specific flow configuration.
In the aforementioned cases where stability was established, the non-linear terms of the Navier-Stokes equations were either eliminated or treated as zero. Yet, it is likely that further improvements can only be achieved by considering these terms. One promising approach is to regard the non-linear part as an excitation and establish a bound for it, thus obtaining conditional stability. This concept was explored in the context of Couette flow using the resolvent of the linear operator in the unstable half-plane by Kreiss _et al._ (1994). However, extending this solution method further appears to be challenging. Another, more comprehensive method that models the non-linear term as a bounded excitation of the linear system has been developed by two groups: Liu & Gayme (2020) and Kalur _et al._ (2021). They applied this technique, referred to as the quadratic constraint (QC) method, to simple turbulence models. Alternatively, a broader approach to constructing Lyapunov functions is the sum-of-squares method. In the realm of fluid dynamics, Goulart & Chernyshenko (2012) proposed the utilization of this technique to establish the global stability of fluid dynamic problems. They demonstrated its effectiveness on a ninth-order model of Couette flow. Fuentes _et al._ (2022) employed this optimization technique to create non-quadratic Lyapunov functions. They projected the velocity field onto the modes of the classic energy equation solutions and achieved a significantly higher Reynolds number limit using 13 modes. While this method holds promise in constructing Lyapunov functions, its computational demands increase rapidly as the number of dimensions grows (Liu & Gayme, 2020).
Recently, Pershin _et al._ (2020) introduced a probabilistic approach to assess the stability of Couette flow. Additionally, they proposed a control technique aimed at expanding the region of attraction of the laminar state.
A fundamentally different approach to address this problem involves calculating the minimal perturbation necessary to induce a non-laminar solution, often referred to as the minimal seed. This approach is similar to conditional stability calculations; however, in this methodology, optimization occurs on the unstable side of the boundary between the stable and unstable regions. Implicitly, the existence and realization of these minimal seeds demonstrate stability, as the flow must remain stable below the perturbation amplitude of the minimal seed.
The first attempts to find such a state began in the 1990s. In the initial approaches (Kreiss _et al._, 1994; Lundbladh _et al._, 1994; Reddy _et al._, 1998; Andersson _et al._, 1999), researchers introduced perturbations that were solutions of linear or energy stability analyses, or they optimized the growth of the linear system. The perturbation amplitude was minimized to establish the threshold level. With advancements in computational capacity, it became possible to optimize the perturbation of the full non-linear problem. Typically, the initial kinetic energy is minimized such that it leads to maximal kinetic energy after a certain time horizon. For the low-order flow models proposed by Waleffe (Waleffe, 1995, 1997), Cossu (2005) calculated the energy of these minimal seeds. Later, this method was applied to real flow configurations (Cossu, 2005; Duguet _et al._, 2013; Kerswell _et al._, 2014; Kerswell, 2018; Parente _et al._, 2022; Zhang & Tao, 2023). Non-linear optimizations revealed localized perturbation fields (Wu, 2023) with significantly lower kinetic energy than perturbations optimized by linear methods. Readers are referred to the cited papers for a more detailed discussion and specific results. Comparing these minimal seed results with threshold amplitude values from
stability analyses can be instrumental in estimating the methods' accuracy. If they closely align, it suggests a well-modeled boundary between the stable and unstable regions. However, if they differ significantly, it signals the need for further development in at least one of the methods.
In this paper, the classic energy method is presented for discretized fluid mechanical systems. Then, the generalized kinetic energy (GKE) method is introduced in Section 2. The method is first applied to simple equations of turbulence: the two-dimensional Trefethen TTRD' model (Baggett & Trefethen 1997) (Section 3.1) and the Waleffe 1995 (W95) model (Waleffe 1995) (Section 3.2). In the next step, the method is demonstrated for higher, yet still relatively low-order models of Poiseuille flow with 180 and 540 degrees of freedom (Section 3.3). These models are created using the Galerkin projection method, employing the Stokes eigenfunctions.
Finally, the findings and conclusions are summarized in Section 4.
## 2 Theory
### The original energy method
When employing Galerkin or Petrov-Galerkin projection on the perturbed Navier-Stokes equation, the perturbed fluid motion can be described by the following ordinary differential equation system (Nerli & Camarri 2006):
\[\frac{\mathrm{d}q_{i}}{\mathrm{d}t}=A_{i,j}\ q_{j}+Q_{i,j,k}\ q_{j}\ q_{k}, \tag{2.1}\]
where \(q_{i}(t)\) represents an \(n\)-element vector (\(i=1...n\)) describing the perturbation of the base flow over time \(t\). The coefficients \(A_{i,j}\) and \(Q_{i,j,k}\) are time-independent arrays characterizing the behavior of the perturbed flow, where \(i\), \(j\), and \(k\) are running variables ranging from \(1\) to \(n\) in the Einstein summation notation. For convenience, the last term in equation (2.1) can be rewritten as:
\[Q_{i,j,k}\ q_{j}\ q_{k}=N_{i,j}(q_{i})\ q_{j}, \tag{2.2}\]
where
\[N_{i,j}(q_{i})=Q_{i,j,k}\ q_{k}. \tag{2.3}\]
The investigated system is stable if the perturbations (\(q_{i}\)) tend to zero as \(t\rightarrow\infty\). In cases where the perturbation is assumed to be small (\(q_{i}\propto\epsilon\)), neglecting the non-linear (quadratic) terms in the equation allows for linear stability analysis. This involves examining the eigenvalues of the matrix \(A_{i,j}\). However, such an analysis is often insufficient in practical applications: \(A_{i,j}\) is non-normal, meaning that its eigenvectors are non-orthogonal. Even for small initial perturbations, the amplitudes can grow exceptionally large, and the non-linear terms cannot be neglected (Schmid 2007; Kerswell 2018).
An alternative method of stability analysis involves examining the derivative of the perturbation kinetic energy with respect to time. Assuming the kinetic energy of the perturbations is the inner product of the state vector:
\[e=q_{i}q_{i}, \tag{2.4}\]
its temporal derivative can be easily obtained from equation (2.1):
\[\frac{\mathrm{d}e}{\mathrm{d}t}=2\ q_{i}\frac{\mathrm{d}q_{i}}{\mathrm{d}t}, \tag{2.5}\] \[\frac{\mathrm{d}e}{\mathrm{d}t}=2\ A_{i,j}\ q_{i}\ q_{j}+2\ Q_{i,j,k}\ q_{i}\ q_{j}\ q_{k}. \tag{2.6}\]
According to the Reynolds-Orr identity (Orr, 1907; Schmid & Henningson, 2001) (utilizing the Gauss divergence theorem), the non-linear term does not influence the change in kinetic energy if the perturbations are confined by walls, are periodic, or decay to zero in the far field, which are reasonable assumptions in most cases.
\[2\,Q_{i,j,k}\,q_{i}\,q_{j}\,q_{k}=0. \tag{2.7}\]
From this point, matrices and vectors are denoted by bold letters to enhance readability. The Einstein summation notation is used when a three-dimensional array appears in an expression or the discussion.
_It is important to note that if the product \(\mathbf{q}^{T}\mathbf{q}\) is not equal to the kinetic energy, the Reynolds-Orr identity cannot be applied, and the non-linear term cannot be eliminated. Let us consider an ordinary differential equation system in which the variable is \(\tilde{\mathbf{q}}\) and the kinetic energy is calculated as:_
\[e=\tilde{\mathbf{q}}^{T}\mathbf{W}\tilde{\mathbf{q}}, \tag{2.8}\]
_where \(W_{i,j}\) is a real, positive definite matrix, typically expressing integration weights. The factorization \(\mathbf{W}=\mathbf{F}^{T}\mathbf{F}\) can be obtained using the Cholesky decomposition of \(\mathbf{W}\). Since_
\[e=\tilde{\mathbf{q}}^{T}\mathbf{F}^{T}\mathbf{F}\tilde{\mathbf{q}}, \tag{2.9}\]
_by defining \(\mathbf{q}=\mathbf{F}\tilde{\mathbf{q}}\), \(\mathbf{q}^{T}\mathbf{q}\) represents the kinetic energy. Through the transformation, a new \(A_{i,j}\) matrix and \(Q_{i,j,k}\) array can be obtained, enabling the application of the Reynolds-Orr identity to the quadratic term._
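As an illustration, the following minimal MATLAB sketch performs this change of variables for a hypothetical two-degree-of-freedom system with an assumed weight matrix, and checks that the inner product of the new state reproduces the kinetic energy:

```matlab
% Minimal sketch (hypothetical two-degree-of-freedom system): change of
% variables so that q'*q equals the kinetic energy e = qt'*W*qt.
W  = [2 0.3; 0.3 1];           % assumed positive definite weight matrix
F  = chol(W);                  % upper triangular factor, W = F'*F
At = [-0.1 1; 0 -0.2];         % assumed linear operator acting on qt
A  = F*At/F;                   % operator acting on the new variable q = F*qt
qt = [1; -0.5];                % example state
q  = F*qt;
e  = qt'*W*qt;                 % kinetic energy in the original variables
assert(abs(q'*q - e) < 1e-12)  % q'*q reproduces the kinetic energy
```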
The growth rate of the kinetic energy is
\[\mu_{e}=\frac{1}{e}\frac{\mathrm{d}e}{\mathrm{d}t}, \tag{2.10}\]
and using equations (2.6) and (2.7) the following expression can be derived:
\[\mu_{e}=\frac{2\,\mathbf{q}^{T}\mathbf{A}\mathbf{q}}{\mathbf{q}^{T}\mathbf{q}}. \tag{2.11}\]
The flow is considered Lyapunov stable if \(\mu_{e}<0\) for any state \(q_{i}\). This statement is equivalent to ensuring that the maximum over any possible state is negative:
\[\mu_{\mathrm{m},e}=\max_{\mathbf{q}}\,\mu_{e}\,(\mathbf{q})\,<0. \tag{2.12}\]
The numerator in (2.11) can be written as \(2\mathbf{q}^{T}\mathbf{A}\mathbf{q}=\mathbf{q}^{T}(\mathbf{A}+\mathbf{A}^{T})\mathbf{q}\). Moreover, the expression (2.11) represents the Rayleigh quotient of \(\mathbf{A}+\mathbf{A}^{T}\). Since \(\mathbf{A}+\mathbf{A}^{T}\) is a symmetric matrix, the largest Rayleigh quotient corresponds to the largest eigenvalue of \(\mathbf{A}+\mathbf{A}^{T}\), which is the maximum possible growth rate of kinetic energy. Therefore, the flow is Lyapunov stable if the largest eigenvalue of \(\mathbf{A}+\mathbf{A}^{T}\) is negative:
\[\lambda_{\mathrm{max}}\left(\mathbf{A}+\mathbf{A}^{T}\right)<0. \tag{2.13}\]
The critical state, which maximizes the growth rate of kinetic energy, is the corresponding eigenvector. Unfortunately, this condition is too strict for practical application. This analysis is referred to as the energy method or non-linear stability analysis, since the results are valid for the non-linear system: the non-linear terms were not assumed to be zero during the derivation but were eliminated exactly by the Reynolds-Orr identity.
In many fluid dynamic applications, the concern is not just whether the flow is stable or not, but what the limit is where the flow becomes unstable. It's important to note that viscosity or
the Reynolds number only affects a specific part of the linear terms (\(\mathbf{A}\)) because the Laplace operator in the Navier-Stokes equation is linear and does not directly influence the non-linear terms. Let us decompose the matrix \(\mathbf{A}\) into components dependent on the Reynolds number and those independent of it:
\[\mathbf{A}(Re)=\mathbf{A}_{U}+\frac{1}{Re}\mathbf{A}_{R}. \tag{2.14}\]
Considering that the Laplacian term can only dissipate kinetic energy, \(\mathbf{A}_{R}\) is a negative definite matrix. The smallest Reynolds number where \(\mu_{\mathrm{m},e}=0\) is equivalent to the smallest Reynolds number where \(\mu_{e}=0\). By substituting (2.14) into (2.11), setting the expression to zero, and subsequently expressing \(Re\) and calculating its minimum through variation, we arrive at the corresponding Euler-Lagrange equation:
\[\mathbf{A}_{R}+\mathbf{A}_{R}^{T}=\bar{Re}\left(-\mathbf{A}_{U}-\mathbf{A}_{U}^{T}\right). \tag{2.15}\]
This equation represents a general eigenvalue problem where the eigenvalue is the Reynolds number. The smallest eigenvalue, typically denoted as \(Re_{\mathrm{E}}\), is referred to as the global stability limit. If \(Re<Re_{\mathrm{E}}\), then \(\mu_{\mathrm{m},e}<0\), signifying unconditional stability in the flow.
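As a brief illustration, the eigenproblem (2.15) can be solved directly with MATLAB's generalized eigenvalue solver. The split below is a hypothetical two-dimensional example (it coincides with the TTRD' model introduced in Section 3.1, for which \(Re_{\mathrm{E}}=2\)):

```matlab
% Minimal sketch of solving the eigenproblem (2.15) for Re_E with a
% generalized eigenvalue solver; the split below is a hypothetical
% two-dimensional example (it coincides with the TTRD' model of
% Section 3.1, for which Re_E = 2).
A_U  = [0 1; 0 0];                       % Reynolds-number independent part
A_R  = -eye(2);                          % dissipative (Laplacian-like) part
ReV  = eig(A_R + A_R', -(A_U + A_U'));   % eigenvalues are Reynolds numbers
Re_E = min(ReV(ReV > 0));                % smallest positive one: Re_E = 2
```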
### The generalized energy method
The classical energy method often proves to be highly conservative, predicting Reynolds number limits below experimental observations. This issue arises because at high Reynolds numbers, the \(\mathbf{A}\) matrix becomes non-normal. In such cases, the eigenvectors are non-orthogonal, and even in a linearly stable system, energy can grow significantly (Schmid, 2007) although it does not necessarily lead to a turbulent state.
The key to improving this method lies in introducing a generalized kinetic energy formulation, a concept also proposed by Nerli _et al._ (2007). The transformation of the state vector \(\mathbf{q}\) by an invertible \(\mathbf{S}\) matrix is given by
\[\mathbf{q}=\mathbf{S}\,\mathbf{r}, \tag{2.16}\]
and the generalized kinetic energy is defined as
\[h=\mathbf{r}^{T}\mathbf{r}. \tag{2.17}\]
This definition of generalized kinetic energy is equivalent to the one proposed by Nerli _et al._ (2007). However, their approach involved redefining the norm using a perturbation matrix, while here, variables are transformed. Although the objective of determining the allowable perturbation level is similar, the construction of the new norm is different. Additionally, the solution technique for calculating the threshold amplitude (defined in equation (3) in Nerli _et al._ (2007)) was not detailed there, a critical aspect for large systems.
The differential equation (2.1) can be rewritten as:
\[\frac{\mathrm{d}\,S_{i,j}r_{j}}{\mathrm{d}t}=A_{i,j}\,S_{j,k}\,r_{k}+Q_{i,j,k}\,S_{j,l}\,r_{l}\,S_{k,m}\,r_{m} \tag{2.18}\]
and
\[\frac{\mathrm{d}r_{i}}{\mathrm{d}t}=S_{i,j}^{-1}A_{j,k}\,S_{k,l}\,r_{l}+S_{i,j}^{-1}Q_{j,k,l}\,S_{k,m}\,r_{m}\,S_{l,o}\,r_{o}. \tag{2.19}\]
To facilitate this transformation, let's define:
\[\tilde{A}_{i,j} =S_{i,l}^{-1}\,A_{l,k}\;S_{k,j}, \tag{2.20}\] \[\tilde{Q}_{i,j,k} =S_{i,m}^{-1}\,Q_{m,o,l}\,S_{o,j}\;S_{l,k}, \tag{2.21}\] \[\tilde{N}_{i,j}(r_{i}) =S_{i,o}^{-1}\,Q_{o,k,l}\,S_{k,m}\,r_{m}\;S_{l,j}. \tag{2.22}\]
These transformations result in an ordinary differential equation of the same form as (2.1), with \(A_{i,j}\) and \(Q_{i,j,k}\) replaced by \(\tilde{A}_{i,j}\) and \(\tilde{Q}_{i,j,k}\), respectively. It is worth noting that while transforming the full coefficient array \(Q\) might not be beneficial in practice due to computational expenses, the transformation of the state vectors is a computationally more efficient alternative.
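For illustration, the following sketch evaluates the right-hand side of (2.19) by transforming the state vectors instead of forming the full \(\tilde{Q}_{i,j,k}\) array; it assumes that \(A\), \(Q\), and an invertible \(\mathbf{S}\) are already available in the workspace:

```matlab
% Minimal sketch: evaluate the right-hand side of (2.19) by transforming
% the state vectors instead of forming the full transformed array; A, Q,
% and an invertible S are assumed to exist in the workspace.
n    = size(A,1);
quad = @(q) squeeze(sum(sum(Q .* reshape(q,[1 n 1]) ...
                              .* reshape(q,[1 1 n]), 2), 3)); % Q_{i,j,k} q_j q_k
r    = randn(n,1);                 % example transformed state
drdt = S \ (A*(S*r) + quad(S*r)); % = dr/dt according to (2.19)
```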
The growth rate of the generalized kinetic energy is defined as:
\[\mu_{h}=\frac{1}{h}\frac{\mathrm{d}h}{\mathrm{d}t}, \tag{2.23}\]
and can be calculated similarly to (2.6) as
\[\mu_{h}=\frac{2\,\tilde{A}_{i,j}\;r_{i}\;r_{j}+2\,\tilde{Q}_{i,j,k}\;r_{i}\;r_ {j}\;r_{k}}{r_{l}r_{l}}. \tag{2.24}\]
The flow is stable if \(\mu_{h}<0\) for any state \(r_{i}\).
The main difference lies in the quadratic term (\(\tilde{Q}_{i,j,k}\)) contributing to the growth rate of generalized kinetic energy (2.23), unlike in the case of the original kinetic energy.
Due to the presence of this term, conditional stability can be established, and it can be utilized to calculate the threshold amplitude.
It is convenient to rewrite the state vector as the product of its magnitude \(\gamma=\sqrt{r_{i}r_{i}}\) and a unit vector:
\[r_{i}=\gamma\tilde{r}_{i}. \tag{2.25}\]
After substitution into equation (2.24), the growth rate of the generalized kinetic energy can be expressed as:
\[\mu_{h}=2\;\tilde{A}_{i,j}\;\tilde{r}_{i}\;\tilde{r}_{j}+2\,\gamma\;\tilde{Q}_ {i,j,k}\;\tilde{r}_{i}\;\tilde{r}_{j}\;\tilde{r}_{k}. \tag{2.26}\]
This approach was also employed by Nerli _et al._ (2007). The value \(\gamma\) can be used to characterize the amplitude of the perturbation. Let us define the possible maximum growth rate at a given level of perturbation as:
\[\mu_{\max,h}(\mathbf{S},\gamma)=\max_{\tilde{r}}\;\mu_{h}(\tilde{\mathbf{r}}, \mathbf{S},\gamma). \tag{2.27}\]
If the growth rate of generalized energy remains smaller than zero up to a certain amplitude (\(\mu_{h}<0\) if \(\gamma<\gamma_{\mathrm{crit}}\)), the investigated system is conditionally stable (Bedrossian _et al._, 2017), and \(h\) is a Lyapunov function. Since \(\mu_{h}\leqslant\mu_{\max,h}\), the flow is stable, if \(\mu_{\max,h}\) is smaller than zero.
The crucial question is how to determine \(\gamma_{\mathrm{crit}}\). Firstly, it is essential to emphasize that the developed method is applicable to subcritical systems within the investigated range; they must be linearly stable. For a linearly unstable system, \(\mu_{\max,h}>0\) for any \(\mathbf{S}\). In the case of a linearly stable system, there exist transformation matrices for which the generalized energy growth rate (\(\mu_{h}\)) is negative, at least for infinitesimally small perturbations (\(\gamma\to\epsilon\), practically \(\gamma=0\)). As the amplitude of the perturbation (\(\gamma\)) increases, it can be assumed that the possible maximum growth rate increases continuously. At a certain value, the possible maximum growth rate becomes zero. This value of \(\gamma\) is the critical value. It is implicitly defined as:
\[\mu_{\max,h}(\mathbf{S},\gamma_{\mathrm{crit}})=0. \tag{2.28}\]
The corresponding unit state vector, defined by
\[\mu_{\max,h}(\mathbf{S},\gamma_{\mathrm{crit}})=\mu_{h}(\tilde{\mathbf{r}}_{\mathrm{crit}},\mathbf{S},\gamma_{\mathrm{crit}}), \tag{2.29}\]
can be utilized to obtain the critical state: \(\mathbf{r}_{\mathrm{crit}}=\gamma_{\mathrm{crit}}\tilde{\mathbf{r}}_{\mathrm{crit}}\). The maximal growth rate of the generalized energy (\(\mu_{\max,h}\)) as a function of the excitation magnitude \(\gamma\) is plotted in figure 1. At low \(\gamma\) values, the linear part of the dynamical system dominates, where the maximum growth rate is almost constant and equal to the largest Rayleigh quotient of the \(\tilde{A}_{i,j}+\tilde{A}_{j,i}\) matrix. For higher \(\gamma\) values, the non-linearity of the system influences the maximal growth, which tends towards a straight line. The slope of this line corresponds to the maximum of \(\{2\,\tilde{Q}_{i,j,k}\,\tilde{r}_{i}\,\tilde{r}_{j}\,\tilde{r}_{k}\}\) among the possible \(\tilde{r}_{i}\) states.
The investigated region can be envisioned as a multidimensional hypersphere in the \(\mathbf{r}\) state space around the origin. The radius of this sphere is \(\gamma\). If the radius is smaller than a critical value \(\gamma_{\mathrm{crit}}\), then \(\mu_{h}<0\), indicating that the norm of the solution vectors is decreasing and the trajectories move into the sphere, ultimately converging to the origin. At the critical radius, a trajectory becomes tangential to the sphere, and it may not reach the origin. The hypersphere with radius \(\gamma_{\mathrm{crit}}\) represents the stability region. Outside this sphere, the system can be, but is not necessarily, unstable. In the case of the two-dimensional problem, the stability region reduces to a circle and will be illustrated in Subsection 3.1 in figure 2(b).
The presented method offers the flexibility of varying and optimizing the transformation matrix. A common approach might be to maximize the stability region described by the value of \(\gamma_{\mathrm{crit}}\) in the state space of \(\mathbf{r}\) vectors. However, this optimization strategy is not advantageous, as multiplying \(\mathbf{S}\) by an arbitrary constant greater than one would inflate \(\gamma_{\mathrm{crit}}\). To address this issue, one option is to constrain the norm of the transformation matrix. However, a more beneficial and informative approach is to transform the stability region back to the original state space of \(\mathbf{q}\).
The linear transformation (scaling and rotation) of the hypersphere results in a hyperellipsoid in the original state space \(\mathbf{q}\). This hyperellipsoid defines the boundary of the region of attraction of the origin. Although the kinetic energy (\(e\)) can grow significantly inside this region, stability is guaranteed due to the exponential decay of the solution in a properly chosen solution norm (\(h\)). The largest radius of a hypersphere contained within the hyperellipsoid is equal to the smallest minor axis of the hyperellipsoid. The square of this radius (\(e_{\mathrm{min}}\)) represents the threshold kinetic energy below which the flow remains stable.
The region of attraction in both the original and transformed state spaces is illustrated in figure 2 in the case of a two-dimensional turbulence model. Owing to the similarity to the method of Nerli _et al._ (2007), which also utilizes a generalized kinetic energy, the region of attraction there was likewise a hyperellipsoid.
In addition, it is crucial to note that in this context, "min" pertains to the minimum squared radius of the region of attraction, not the minimal energy threshold leading to a turbulent state. The value \(e_{\mathrm{min}}\) can be mathematically expressed using equations (2.4) and (2.25) as follows:
\[e_{\mathrm{min}}(\mathbf{S})=\gamma_{\mathrm{crit}}^{2}(\mathbf{S})\min_{\tilde{\mathbf{r}}}\{\tilde{\mathbf{r}}^{T}\mathbf{S}^{T}\mathbf{S}\tilde{\mathbf{r}}\}. \tag{2.30}\]
The argument of the minimum function is the Rayleigh quotient of \(\mathbf{S}^{T}\mathbf{S}\), and the minimum value corresponds to the smallest eigenvalue of \(\mathbf{S}^{T}\mathbf{S}\), since \(\mathbf{S}^{T}\mathbf{S}\) is a symmetric matrix.
\[e_{\mathrm{min}}(\mathbf{S})=\gamma_{\mathrm{crit}}^{2}(\mathbf{S})\ \lambda_{\mathrm{min}}\left(\mathbf{S}^{T}\mathbf{S}\right). \tag{2.31}\]
The corresponding unit eigenvector \(\tilde{\mathbf{r}}_{\mathrm{min}}\) can be utilized to get the two locations \(\mathbf{q}_{\mathrm{min}}=\pm\gamma_{\mathrm{crit}}\,\mathbf{S}\,\tilde{\mathbf{r}}_{\mathrm{min}}=\pm\mathbf{S}\,\mathbf{r}_{\mathrm{min}}\) where the hypersphere touches the hyperellipsoid, as illustrated in figure 2(a).
If the kinetic energy of the perturbation is smaller than this critical value (\(e<e_{\rm min}\)), the flow is stable. The aim of the method is to maximize this limit, \(e_{\rm min}\). It is important to note that maximizing the norm of \(\boldsymbol{q}_{\rm crit}=\mathbf{S}\,\boldsymbol{r}_{\rm crit}\) instead would not be feasible: such an optimization would result in singular transformation matrices, causing the stability regions to resemble "nail"-like structures.
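A minimal sketch of evaluating (2.31) is given below; the optimized matrix Sopt and the corresponding gamma_crit are assumed to be available:

```matlab
% Minimal sketch of evaluating (2.31); an optimized matrix Sopt and the
% corresponding gamma_crit are assumed to be available.
[Vs, Ls]  = eig(Sopt'*Sopt);         % symmetric matrix, real spectrum
[lmin, k] = min(diag(Ls));
e_min = gamma_crit^2 * lmin;         % squared radius of the inscribed sphere
q_min = gamma_crit * Sopt * Vs(:,k); % touching point (the other is -q_min)
```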
### The usage of the generalized energy method
Two key questions remain unanswered. The first one concerns how the maximal growth rate (2.27) can be calculated, as it depends non-linearly on the state vector. Proving that a specific \(\boldsymbol{\tilde{r}}\) maximizes the expression (2.26) while satisfying the constraint of unity for the state vectors (\(\boldsymbol{\tilde{r}}\)) is a challenging task. This can be accomplished using sum-of-squares (SOS) methods, although they are computationally very expensive, as highlighted by Fuentes _et al._ (2022). Simultaneously, other general constrained optimization techniques have seen significant advancements in recent decades. Typically, these methods compute the minimum rather than the maximum; hence, the functions to be maximized are multiplied by minus one.
It is worth noting that the expression in (2.26) is analytical, allowing for the analytical and explicit derivation of the gradient and the Hessian matrix. This feature enhances the efficiency of the optimization process. Various methods, including Sequential Quadratic Programming (SQP), Active Set Algorithm, and Interior Point Algorithm (Nocedal & Wright 2006), were explored. These methods are implemented in MATLAB's _fmincon_ function. After considering factors such as calculation time, accuracy, and robustness of the methods, it was found that the SQP method proved to be optimal for small systems (\(n=4\)), while the Interior Point Algorithm performed best for larger systems (\(n>180\)).
During the optimization, multiple random seed vectors were generated to initialize the process. Interestingly, at low \(\gamma\) values (less than 0.1), 60-100% of the cases converged to the same maximum. Even at high perturbation magnitudes (\(\gamma\approx 10\)), the convergence rate remained above 40%, as observed in the case of the four-dimensional turbulence model by Waleffe (1995). This observation suggests that the optimization procedure successfully identifies the global maximum.
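The following minimal sketch illustrates this inner maximization (2.27) with _fmincon_; it assumes that the transformed arrays At and Qt and the perturbation level gamma exist in the workspace, and it omits the analytical gradients and Hessians used in the actual computations:

```matlab
% Minimal sketch of the inner maximization (2.27) with fmincon; the
% transformed arrays At (n x n), Qt (n x n x n) and the perturbation level
% gamma are assumed to exist, and the analytical gradients and Hessians
% used in the actual computations are omitted.
n    = size(At,1);
cub  = @(r) sum(Qt .* reshape(r,[n 1 1]) .* reshape(r,[1 n 1]) ...
                   .* reshape(r,[1 1 n]), 'all');  % Q_{i,j,k} r_i r_j r_k
muh  = @(r) 2*r'*At*r + 2*gamma*cub(r);            % growth rate (2.26)
obj  = @(r) -muh(r);                               % fmincon minimizes
unitc = @(r) deal([], r'*r - 1);                   % equality constraint |r| = 1
opts = optimoptions('fmincon','Algorithm','sqp','Display','off');
mu_max = -inf;
for s = 1:20                                       % several random seed vectors
    r0 = randn(n,1);  r0 = r0/norm(r0);
    [~, f] = fmincon(obj, r0, [],[],[],[],[],[], unitc, opts);
    mu_max = max(mu_max, -f);
end
```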
The second key question is how the optimal transformation matrix \(\mathbf{S}_{\rm opt}\) can be obtained.
One plausible approach involves considering the eigenvectors of \(\mathbf{A}\) as the initial choice for \(\mathbf{S}\). They diagonalize the linear part of the system. Under this transformation, the new state variables correspond to the coefficients of the eigenmodes. The generalized kinetic energy is represented as the sum of these coefficient squares, ensuring that the system achieves energetic stability at low perturbation level. Such a transformation solves the issue of non-normality, since the eigenvectors of the transformed system are orthogonal. However, it is worth noting that in certain scenarios, \(\mathbf{A}\) might not be diagonalizable. This occurs when the eigenvectors are not linearly independent, rendering the inverse of the transformation matrix non-existent.
Moreover, empirical attempts have revealed that this approach is suboptimal since it fails to maximize \(e_{\rm min}\), a critical criterion in the optimization process.
A potential approach for optimizing \(\mathbf{S}\) can be outlined as follows:
1. Solve equation (2.28) for \(\gamma_{\rm crit}\).
2. Calculate \(e_{\rm min}\) utilizing equation (2.31).
3. Update (\(\mathbf{S}\)) systematically and repeat steps 1 and 2 iteratively until \(e_{\rm min}\) (2.31) converges to its maximum.
This systematic process ensures a step-by-step refinement of \(\mathbf{S}\), allowing the optimization to progress toward the maximum value of \(e_{\rm min}\).
This method is indeed feasible; however, the absence of gradients poses a significant challenge, especially when dealing with a large number of unknowns (\(n^{2}\)), leading to
computationally expensive optimizations. While one potential approach involves implicit differentiation of the expression \(e_{\min}(S_{i,j})\), this method proves exceptionally challenging. Implicit differentiation necessitates solving a complex nonlinear equation system, contrasting with the straightforward calculation of an explicit expression. Consequently, in cases where the system comprises a limited number of degrees of freedom, optimization without analytical gradients remains possible. Nonetheless, for expansive systems, the absence of these gradients renders the optimization process unfeasible due to its computational intensity.
An alternative approach involves introducing \(\gamma\) as an additional optimization variable alongside the elements of the transformation matrix (\(\mathbf{S}\)). Simultaneously, a constraint is imposed, mandating the growth rate to be zero. The expression \(e_{\min}(\mathbf{S},\gamma_{\text{crit}})\) is optimized, constrained by equation (2.28). Although this method slightly increases the number of unknowns, it significantly enhances the efficiency of the optimization process. The reason lies in the explicit and efficient calculation of gradients, which becomes feasible due to this approach.
However, the previously mentioned numerical methods (SQP, Active Set, Interior Point) were not robust enough to handle this problem, likely due to its high sensitivity to the constraint. Initially, an attempt was made using the augmented Lagrangian method (Nocedal & Wright 2006), where another penalty term is added to mimic the Lagrange multiplier. This multiplier should be updated at each iteration to fulfill the constraint. However, solving the constraint equation (2.28) for \(\gamma_{\text{crit}}\) in each iteration significantly reduced the computational time due to faster convergence and required fewer iteration steps. Therefore, the Lagrange multiplier term became unnecessary and was abandoned, and the method simplified to the penalty method (Nocedal & Wright 2006). The resulting optimization problem in each iteration step is solved by the _fminunc_ function using the 'quasi-newton' method, and then equation (2.28) is solved for \(\gamma_{\text{crit}}\) to fulfill the constraint. This modified approach proved to be more effective and computationally efficient. Additional essential details regarding the optimization process, including the gradients and Hessian matrices of the functions, can be found in Appendix A.
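A minimal sketch of this outer loop in a simplified penalty form is given below; the helpers mu_max_h (the inner maximization (2.27), e.g. the _fmincon_ sketch above) and gamma_crit (solving (2.28) for a given \(\mathbf{S}\)) are hypothetical placeholders:

```matlab
% Minimal sketch of the outer loop in a simplified penalty form. The
% helpers mu_max_h(S,gamma), the inner maximization (2.27), and
% gamma_crit(S), solving (2.28), are hypothetical placeholders.
n    = size(A,1);
x0   = [reshape(eye(n),[],1); 1];       % unknowns: entries of S and gamma
pen  = 1e3;                             % penalty weight on constraint (2.28)
emin = @(S,g) g^2 * min(eig(S'*S));     % objective, equation (2.31)
obj  = @(x) -emin(reshape(x(1:n^2),n,n), x(end)) ...
            + pen * mu_max_h(reshape(x(1:n^2),n,n), x(end))^2;
opts = optimoptions('fminunc','Algorithm','quasi-newton','Display','off');
xopt = fminunc(obj, x0, opts);
Sopt = reshape(xopt(1:n^2), n, n);
gamma_c = gamma_crit(Sopt);             % re-solve (2.28) exactly at the end
```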
## 3 Application
### Trefethen's simple model
One of the simplest low-order representations of turbulent flows is the TTRD' model described by Baggett & Trefethen (1997). In this model, the linearized part (\(\mathbf{A}\)) becomes non-normal as the Reynolds number increases. Meanwhile, the non-linear part does not affect the growth rate of kinetic energy, as the corresponding matrix remains skew-symmetric.

Figure 1: The maximal growth rate of the generalized kinetic energy (\(\mu_{\max,h}\)) as a function of the perturbation magnitude \(\gamma\) in the case of the optimally transformed TTRD' model at \(Re=5\). The red curve represents one tenth of the growth rate of the original kinetic energy (\(\mu_{e}=0.6\)), which is independent of the perturbation level.
The TTRD' model is represented by the following equation:
\[\frac{\mathrm{d}\vec{q}}{\mathrm{d}t}=\begin{bmatrix}-\frac{1}{Re}&1\\ 0&-\frac{1}{Re}\end{bmatrix}\vec{q}+\begin{bmatrix}0&-q_{1}\\ q_{1}&0\end{bmatrix}\vec{q}. \tag{3.1}\]
In the original reference, the state variables are denoted as \(\vec{q}=[u,v]^{T}\). The non-linear part of equation (3.1) can be expressed as a non-linear array \(Q_{i,j,k}\) (2.1). The non-zero elements are
\[Q_{1,1,2} =-1 \tag{3.2}\] \[Q_{2,1,1} =1. \tag{3.3}\]
The model remains linearly stable for arbitrarily large Reynolds numbers since the eigenvalues (-1/\(Re\)) remain negative. However, as the Reynolds number increases, the eigenvectors become non-orthogonal. The unconditional stability limit is \(Re_{E}=2\), as determined by equation (2.15). Above this limit, the kinetic energy of the perturbation can grow, but it does not necessarily lead to a turbulent state.
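A minimal sketch assembling the TTRD' arrays and verifying numerically that the quadratic term produces no kinetic energy, i.e. the identity (2.7), is given below:

```matlab
% Minimal sketch: assemble the TTRD' arrays (3.1)-(3.3) and verify that the
% quadratic term produces no kinetic energy, i.e. the identity (2.7).
Re = 5;
A  = [-1/Re 1; 0 -1/Re];
Q  = zeros(2,2,2);
Q(1,1,2) = -1;                              % element (3.2)
Q(2,1,1) =  1;                              % element (3.3)
q  = randn(2,1);
Nq = [Q(1,1,2)*q(1)*q(2); Q(2,1,1)*q(1)^2]; % Q_{i,j,k} q_j q_k
assert(abs(q'*Nq) < 1e-12)                  % skew-symmetric: no energy input
```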
The generalized kinetic energy (GKE) method is applied to the problem. The optimal transformation matrices \(\mathbf{S}\) that maximize \(e_{\min}\) are calculated at Reynolds numbers between 5 and 100 with a step size of 2.5. The method is demonstrated in Figure 2 at \(Re=5\), where stable trajectories are depicted in green and unstable trajectories in red. These trajectories are plotted as functions of the original variables (Figure 2(a)) and the transformed variables (Figure 2(b)). The calculated region of attraction appears as an ellipse in the original state space, precisely touching the unstable trajectories. Outside of this region, there are states from which the solution tends toward another equilibrium point and does not return to the origin.
Figure 2: The phase space trajectories of the TTRD' model at \(Re=5\). Green trajectories converge towards the origin, while red trajectories tend to another equilibrium point, which is not shown. The black curve represents the boundary of the region of attraction. The red vector is the critical perturbation (2.28), where the growth rate of the generalized kinetic energy is zero at the critical perturbation level (2.29). The yellow vector illustrates the smallest perturbation (2.31) in the original state space (a), whose length is equal to the critical perturbation in the optimally transformed state space (b).

The calculated threshold amplitude as a function of the Reynolds number is plotted in figure 3 and compared with the findings of Liu & Gayme (2020). The cited authors used the quadratic constraint method, which has been proven to be computationally efficient. They treated the non-linear term as a forcing with an approximated upper bound. The threshold amplitude is approximated by a power function, which is plotted in figure 3.
The calculated threshold amplitude (\(\sqrt{e_{\min}}\)) decays as a function of the Reynolds number following a power law in the GKE case as well. The exponents are nearly identical: -3.005 in this study and -3.07 in the work of Liu & Gayme (2020). However, the method presented here predicts a stable region with a radius roughly three times larger, indicating an energy level approximately one magnitude higher. This substantial difference arises from their approximation of the non-linear term, while the GKE calculation takes the exact terms into account, providing a more precise representation of the system's behavior.
In the next step, the accuracy of the region of attraction is investigated by solving the ordinary differential equation close to, but outside of, the stable region. The solutions are initialized from slightly increased threshold state vectors \(\boldsymbol{q}_{0}=c_{u}\boldsymbol{q}_{\min}\) and computed using the Matlab _ode45_ Runge-Kutta method. The \(c_{u}\) value is systematically increased by 0.5% from 0.98 until an unstable solution is obtained. The average value of \(c_{u}\) for the unstable solutions is found to be 1.02, indicating that the proposed method is highly accurate; unstable solutions can be obtained very close to the region of attraction. Additionally, the method is partially verified by the observation that in the investigated cases, none of the multipliers fall below one.
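A minimal sketch of this verification loop is given below; rhs(t,q), implementing the right-hand side of (3.1), and the threshold state q_min are assumed to be available, and the instability criterion is an ad-hoc assumption:

```matlab
% Minimal sketch of the verification loop; rhs(t,q), implementing the
% right-hand side of (3.1), and the threshold state q_min are assumed to
% be available, and the instability criterion below is an ad-hoc assumption.
cu = 0.98;
while true
    [~, qs] = ode45(@rhs, [0 500], cu*q_min);
    if norm(qs(end,:)) > 10*norm(q_min)  % solution left the laminar basin
        break                            % smallest unstable multiplier found
    end
    cu = cu*1.005;                       % increase the amplitude by 0.5%
end
```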
In figure 3, the square root of the energy of the critical perturbation (\(e_{\text{crit}}=\|\boldsymbol{q}_{\text{crit}}\|^{2}\)) is also plotted. While these values have limited physical relevance in the current study, as they correspond to a critical state in an optimized state space, they can be significant for understanding and analyzing the boundary between laminar and turbulent regions, and they could prove useful for further comparisons.
However, it's worth noting that in most cases, these curves show high sensitivity to the optimization convergence, indicating that the results are likely less accurate compared to the \(e_{\min}\) values.
The optimal transformation matrix is
\[\mathbf{S}_{\text{opt}}\approx\begin{bmatrix}0.932426&-1.03515\\ 0.0273741&0.390105\end{bmatrix} \tag{3.4}\]
at \(Re=5\).
### Waleffe model
In the next step, the GKE method is applied to the low-order turbulence model proposed by Waleffe (1995). Since the method under consideration is capable of investigating systems around the origin of the state space, and the laminar equilibrium point in the original model was non-zero, the last state variable was shifted as \(n=m-1\) (using the original notation). This adjustment was made following the approach of Henningson (1996) and Kalur _et al._ (2021). Consequently, the resulting dynamical system is represented as follows:
\[\frac{\mathrm{d}\vec{q}}{\mathrm{d}t}=\frac{1}{Re}\begin{bmatrix}-\lambda_{w}&Re&0&0\\ 0&-\mu_{w}&0&0\\ 0&0&-\nu_{w}&0\\ 0&0&0&-\sigma_{w}\end{bmatrix}\vec{q}+\begin{bmatrix}-\gamma_{w}q_{3}^{2}+q_{2}q_{4}\\ \delta_{w}q_{3}^{2}\\ \gamma_{w}q_{3}q_{1}-\delta_{w}q_{3}q_{2}\\ -q_{1}q_{2}\end{bmatrix}. \tag{3.5}\]
The parameters \(\lambda_{w},\mu_{w},\nu_{w},\sigma_{w}\) represent the decay rates due to viscosity, while \(\gamma_{w},\delta_{w}\) describe the non-linear interaction between rolls (\(q_{2}\)) and streaks (\(q_{1}\)). For a more comprehensive physical explanation of the model, readers are referred to the original paper by Waleffe (1995).
The non-linear part of equation (3.5) can also be expressed as:
\[\mathbf{N}=\begin{bmatrix}0&0&-\gamma_{w}q_{3}&q_{2}\\ 0&0&\delta_{w}q_{3}&0\\ \gamma_{w}q_{3}&-\delta_{w}q_{3}&0&0\\ -q_{2}&0&0&0\end{bmatrix} \tag{3.6}\]
or using the three-dimensional array \(Q_{i,j,k}\), where the non-zero elements are:
\[Q_{1,2,4} =1; Q_{1,3,3} =-\gamma_{w}; \tag{3.7}\] \[Q_{2,3,3} =\delta_{w}; Q_{3,3,1} =\gamma_{w};\] (3.8) \[Q_{3,3,2} =-\delta_{w}; Q_{4,2,1} =-1; \tag{3.9}\]
In this study, three different parameter sets are investigated. The first set is characterized by \(\lambda_{w}=\mu_{w}=\sigma_{w}=10\), \(\nu_{w}=15\), \(\delta_{w}=1\), \(\gamma_{w}=0.1\), denoted as the W95A model (Waleffe, 1995). The parameters of the second set remain the same except for \(\gamma_{w}=0.5\), and this configuration is denoted as W95B. In the last case, all parameters are set to 1 (\(\lambda_{w}=\mu_{w}=\nu_{w}=\sigma_{w}=\delta_{w}=\gamma_{w}=1\)), and this configuration is denoted as the BT model (Baggett & Trefethen, 1997). It is important to note that these parameter sets significantly influence the system dynamics (Baggett & Trefethen, 1997; Kalur _et al._, 2021).
The unconditional stability limit of the system can be calculated using equation (2.15), which has the analytical solution:
\[Re_{E}=2\sqrt{\lambda_{w}\,\mu_{w}} \tag{3.10}\]
(Waleffe, 1995). For the W95A and W95B models \(Re_{E}=20\), and for the BT model \(Re_{E}=2\). Below this critical value, the system is unconditionally stable, and the permissible perturbation level is infinite.
Figure 3: The square root of the smallest kinetic energy at the boundary of the region of attraction (\(e_{\rm min}\)) and the square root of the kinetic energy of the critical perturbation (\(e_{\rm crit}\)) in the case of the TTRD' model. The fitted curve from Liu & Gayme (2020) using the QC method (\(0.912\,Re^{-3.07}\)) is shown alongside. The best-fitting curve of the GKE results is \(\sqrt{e_{\rm min}}\approx 2.228\,Re^{-3.005}\). The red crosses represent the square root of the initial kinetic energy of unstable solutions close to the region of attraction. The vertical red line signifies the unconditional stability limit \(Re_{E}=2\).

The optimized transformation matrices are calculated for the W95A, W95B, and BT models over different ranges of Reynolds numbers: 25 to 200 for the W95A model, 25 to 2000 for the W95B model, and 5 to 100 for the BT model. For the W95A and BT models, the step size was set to 2.5, while for the W95B model, a logarithmic spacing was applied over 150 steps. Figure 4 shows the largest inner radius of the region of attraction for the three models.
The results are compared with other stability calculation methods. For the W95A and BT models, the proposed method yielded nearly the same permissible perturbation levels as the sum-of-squares (SOS) method used by Kalur _et al._ (2021). Furthermore, they applied the quadratic constraint (QC) method to the system, which predicted significantly smaller regions due to the approximation of the non-linear terms using bounds, although it required a lower computational cost. A comparative analysis of the accuracy of the QC method for the two-dimensional TTRD' model and these four-dimensional models suggests that the accuracy of the QC method deteriorates as the number of degrees of freedom of the model increases. In the case of the W95B model, the results are compared to the calculations of the generalized kinetic energy method by Nerli _et al._ (2007). The presented novel implementation exhibited slight improvements due to the more general form of the energy function. Additionally, our results closely matched the non-linearly optimized minimal seeds calculated by Cossu (2005). (It should be mentioned that Nerli _et al._ (2007) defined the kinetic energy with a multiplier of 1/2, which was compensated by a factor of \(1/\sqrt{2}\) on the plots here.) Similarly, both the SOS results and our results are very close to the optimized minimal seeds (Kalur _et al._, 2021) of the BT model. In both cases, the closeness of the stability threshold energy and the minimal seed energy values means that the stability region is calculated within acceptable accuracy.
At the same time, the solutions that are initialized outside the region of attraction tend to the laminar state in the case of the W95A model, which was also observed by Kalur _et al._ (2021). This suggests that the true region of attraction is significantly larger than the predicted one. A larger region could probably be obtained utilizing a higher-order energy (Lyapunov) function.
To demonstrate the method, four simulations are carried out using the BT parameters at \(Re=10\), initialized from values at the boundary of the predicted region of attraction and from values slightly outside of it. The optimal transformation matrix is given by
\[\mathbf{S}_{\mathrm{opt}}\approx\begin{bmatrix}0.748814&-0.655534&-0.0327291& 0.00472172\\ -0.16196&-0.0779668&0.00174343&-0.00904493\\ 0.00406973&-0.00425752&0.164989&-0.00173407\\ -0.0485946&0.0213396&0.00172342&0.560516\end{bmatrix} \tag{3.11}\]
and the corresponding critical vectors are
\[\boldsymbol{q}_{\mathrm{min}}\approx\begin{bmatrix}0.00272966\\ 0.0373305\\ 0.0202733\\ 0.000371659\end{bmatrix}\text{ and }\boldsymbol{q}_{\mathrm{crit}}\approx \begin{bmatrix}0.125899\\ 0.0144673\\ -0.0287946\\ -0.00806114\end{bmatrix}. \tag{3.12}\]
Two solutions are initialized with \(\boldsymbol{q}_{\mathrm{min}}\) and \(\boldsymbol{q}_{\mathrm{crit}}\), and the square roots of their kinetic and generalized kinetic energies are plotted in figure 5 in green and blue, respectively. It can be observed that the generalized kinetic energy (\(h_{0}\)) is the same at the initial points, as both states are on the region-of-attraction hypersphere. Moreover, the generalized energy growth rates (\(\mu_{h}\)) are initially close to zero in both cases. However, this behavior is expected only in the case of the solution initialized by \(\boldsymbol{q}_{\mathrm{crit}}\), following its definition. As time progresses, both solutions exhibit a negative growth rate, tending towards the laminar equilibrium state. However, their initial original kinetic energies (\(e_{0}\)) differ due to the transformation of variables. Furthermore, a notable growth in the kinetic energy (\(\mu_{e}\)) of the perturbation can be observed in the case of the solution initialized with \(\boldsymbol{q}_{\mathrm{min}}\). Nevertheless, this classic energy eventually decays, as expected, since in another norm its energy monotonically decreases over time.
Figure 4: The square root of the smallest kinetic energy at the boundary of the region of attraction (\(e_{\rm min}\)) and the square root of the kinetic energy of the critical perturbation (\(e_{\rm crit}\)) as functions of the Reynolds number in the case of the W95A model (a), the W95B model (b), and the BT model (c). The QC, SOS, and DAL curves represent the results of Kalur _et al._ (2021). The red, vertical dashed line represents the unconditional stability limit, \(Re_{E}\). The best-fitting curves of \(\sqrt{e_{\rm min}}\): \(\sqrt{e_{\rm min}}\approx 77102\,Re^{-2.491}\) for the W95A model; \(\sqrt{e_{\rm min}}\approx 1467.5\,Re^{-2.043}\) for the W95B model; \(\sqrt{e_{\rm min}}\approx 4.2818\,Re^{-2.0008}\) for the BT model.

Figure 5: The square root of the original kinetic energy (a) and of the generalized kinetic energy (b) as functions of time. These solutions are obtained with the BT model at \(Re=10\). The green continuous curve is initialized from \(\mathbf{q}_{\rm min}\), the green dashed curve from \(1.2\mathbf{q}_{\rm min}\), the blue continuous curve from \(\mathbf{q}_{\rm crit}\), and the blue dashed curve from \(1.05\mathbf{q}_{\rm crit}\). The red, horizontal dashed line represents the permissible perturbation level.

Two additional simulations were conducted, both initialized slightly outside of the predicted region of attraction: \(\boldsymbol{q}_{0}=1.2\,\boldsymbol{q}_{\rm min}\) and \(\boldsymbol{q}_{0}=1.05\,\boldsymbol{q}_{\rm crit}\). It is noteworthy that in both cases, the solutions converge to a non-laminar equilibrium state. Specifically, the generalized kinetic energy experiences initial growth in both simulations, followed by oscillations around the non-laminar equilibrium state. It is important to observe that the kinetic energy in the simulation initialized by \(\boldsymbol{q}_{0}=1.2\,\boldsymbol{q}_{\rm min}\) grows significantly at the beginning due to non-normality. This growth leads to an energy level comparable to that of \(\boldsymbol{q}_{\rm crit}\). In contrast, in the other case, this pure non-modal growth is not observed. The original kinetic energy of the solution decays slightly at the beginning and increases only later.
In summary, concerning the GKE results of the four-dimensional model, the predicted perturbation thresholds are validated as accurate in the cases of the BT and W95B models. However, it has been demonstrated to be overly conservative in the case of the W95A model.
### Poiseuille flow
In the subsequent phase, a higher-order yet still low-dimensional model of the fluid dynamic system is developed to represent Poiseuille flow. This involves computing the Stokes eigenfunctions of a rectangular cuboid and determining the coefficients of the ordinary differential equation system using the Galerkin projection method. The Galerkin projection method, as established in previous research (Nerli & Camarri, 2006; Bergstrom, 1999), proves to be an efficient approach for constructing low-order models.
The Stokes equations in non-dimensional form are given by:
\[\frac{\partial u_{i}}{\partial t}=-\frac{\partial p}{\partial x_{i}}+\frac{1} {Re}\frac{\partial^{2}u_{i}}{\partial x_{j}\partial x_{j}} \tag{3.13}\]
and
\[\frac{\partial u_{i}}{\partial x_{i}}=0 \tag{3.14}\]
where \(u_{i}\) represents the non-dimensional velocity, \(p\) is the non-dimensional pressure, and \(x_{i}\) are the spatial coordinates: \(x_{1}\in[0,L_{x}]\); \(x_{2}\in[-1,1]\); \(x_{3}\in[0,L_{z}]\), defining a rectangular cuboid. The eigenvectors can be obtained by assuming the following ansatz:
\[u_{i}=\hat{u}_{i}\mathrm{e}^{\lambda t} \tag{3.15}\]
and solving the eigenvalue problem,
\[\lambda\hat{u}_{i}=-\frac{\partial\hat{p}}{\partial x_{i}}+\frac{1}{Re}\frac {\partial^{2}\hat{u}_{i}}{\partial x_{j}\partial x_{j}}, \tag{3.16}\]
for \(\lambda\). The eigenvalues are negative real numbers expressing the dissipation rate of the mode. Furthermore, the eigenvectors are orthogonal, which proves advantageous for the Galerkin projection. Given the linearity of the problem and assuming periodic solutions in the \(x_{1}\) and \(x_{3}\) directions, solving the eigenvalue problem is conveniently achieved using complex Fourier series. The modes of the \(\hat{u}_{i}\) velocity field can be expressed as follows:
\[\tilde{u}_{i,j_{m},k_{m}}(x_{2})\mathrm{e}^{\mathrm{i}(j_{m}\alpha_{0}x_{1}+k _{m}\beta_{0}x_{3})} \tag{3.17}\]
where \(\alpha_{0}=2\pi/L_{x}\) and \(\beta_{0}=2\pi/L_{z}\) are the wavenumbers, and \(j_{m}\), \(k_{m}\) are the indices of the modes ranging from \(-\infty\) to \(\infty\). Substituting the complex wave form (3.17) into the equations (3.14) and (3.16) leads to the following eigenvalue problem for each \(j_{m}\), \(k_{m}\) mode:
\[\begin{bmatrix}L&0&0&-\mathrm{i}\alpha\\ 0&L&0&-D_{x_{2}}\\ 0&0&L&-\mathrm{i}\beta\\ \mathrm{i}\alpha&D_{x_{2}}&\mathrm{i}\beta&0\end{bmatrix}\begin{bmatrix}\vec{u}_{1}\\ \vec{u}_{2}\\ \vec{u}_{3}\\ \vec{p}\end{bmatrix}=\lambda\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\end{bmatrix}\begin{bmatrix}\vec{u}_{1}\\ \vec{u}_{2}\\ \vec{u}_{3}\\ \vec{p}\end{bmatrix} \tag{3.18}\]
where \(\alpha=j_{m}\,\alpha_{0}\), \(\beta=k_{m}\,\beta_{0}\) and \(L=-(\alpha^{2}+\beta^{2})+D_{x_{2}}^{2}\) is the Laplace operator, where \(D_{x_{2}}\) is the differential operator with respect to \(x_{2}\).
The problem (3.18) can be discretized using the Chebyshev collocation method. The required boundary conditions involve stationary walls at the bottom and the top of the domain, implying \(\tilde{u}_{i}(\pm 1)=0\) for any velocity component \(i\). These conditions are enforced by removing the corresponding rows from the matrices. In this study, 100 Chebyshev collocation points are employed, a choice deemed accurate based on prior research (Nagy _et al._, 2023). The discretized version of equation (3.18) is solved for the first \(N_{y}\) modes with the largest eigenvalues \(\lambda\) for \(j_{m}\in[-N_{x},N_{x}]\) and \(k_{m}\in[-N_{z},N_{z}]\), resulting in a total of \(N_{t}=(2N_{x}+1)\,N_{y}\,(2N_{z}+1)\) modes. The calculation can be simplified, since in the case of complex conjugate wavenumber pairs (\(j_{m,a}=-j_{m,b}\) and \(k_{m,a}=-k_{m,b}\)), the eigenvalues are the same and the eigenvectors are the complex conjugates of each other, \(\tilde{u}_{i,j_{m},k_{m}}=\tilde{u}_{i,-j_{m},-k_{m}}^{*}\). The values of the parameters \((N_{x},N_{y},N_{z})\) vary across different models and will be provided later. Subsequently, the coefficients \(A_{i,j}\) and \(Q_{i,j,k}\) are computed using the Galerkin projection method:
\[A_{i_{m},j_{m}}=\int_{\Omega}\left(-U_{j}\,\frac{\partial\hat{u}_{i,i_{m}}}{\partial x_{j}}-\hat{u}_{j,i_{m}}\frac{\partial U_{i}}{\partial x_{j}}+\frac{1}{Re}\frac{\partial^{2}\hat{u}_{i,i_{m}}}{\partial x_{j}\partial x_{j}}\right)\hat{u}_{i,j_{m}}^{*}\,\mathrm{d}\Omega \tag{3.19}\]
\[Q_{i_{m},j_{m},k_{m}}=\int_{\Omega}\left(-\hat{u}_{j,j_{m}}\frac{\partial\hat{u}_{i,i_{m}}}{\partial x_{j}}\right)\hat{u}_{i,k_{m}}^{*}\,\mathrm{d}\Omega \tag{3.20}\]
where \(i_{m}\), \(j_{m}\), and \(k_{m}\) are the indices of the modes, and \(U_{i}\) denotes the velocity field of the base flow, which for the Poiseuille flow investigated in this study has only one non-zero velocity component:
\[U_{1}=1-x_{2}^{2}. \tag{3.21}\]
It is worth noting that
\[\int_{\Omega}\frac{1}{Re}\frac{\partial^{2}\hat{u}_{i,i_{m}}}{\partial x_{j}\partial x_{j}}\hat{u}_{i,j_{m}}^{*}\,\mathrm{d}\Omega=\lambda_{i_{m}}\,\delta_{i_{m},j_{m}}. \tag{3.22}\]
This is due to the fact that the velocity modes are solutions of the Stokes equation.
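For concreteness, a minimal sketch of assembling and solving the discretized eigenproblem (3.18) for one \((j_{m},k_{m})\) pair is given below; cheb_diff, a hypothetical helper returning a Chebyshev differentiation matrix on \(x_{2}\) with the wall conditions already imposed, as well as jm, km, alpha0, beta0, and Ny, are assumptions:

```matlab
% Minimal sketch of assembling and solving (3.18) for one (jm,km) pair;
% cheb_diff (a hypothetical helper returning a Chebyshev differentiation
% matrix on x2 with the wall conditions already imposed) as well as jm,
% km, alpha0, beta0, and Ny are assumptions.
[D, x2] = cheb_diff(100);                 % 100 collocation points
N  = numel(x2);  I = eye(N);  Z = zeros(N);
a  = jm*alpha0;  b = km*beta0;            % wavenumbers of the mode
L  = -(a^2 + b^2)*I + D^2;                % Laplace operator
LHS = [L Z Z -1i*a*I;  Z L Z -D;  Z Z L -1i*b*I;  1i*a*I D 1i*b*I Z];
RHS = blkdiag(I, I, I, Z);                % singular block for the pressure
[V, Lam] = eig(LHS, RHS);                 % generalized eigenvalue problem
lam  = diag(Lam);
keep = isfinite(lam);                     % discard spurious infinite modes
[~, idx] = sort(real(lam(keep)), 'descend');
Vk = V(:, keep);
modes = Vk(:, idx(1:Ny));                 % Ny least-damped Stokes modes
```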
The modes are substituted in the form (3.17) and the integrals are evaluated utilizing Chebyshev collocation points. Since the eigenvectors are complex, the \(A_{i,j}\) matrix and the \(Q_{i,j,k}\) tensor are also complex. As a result, the previously derived gradients for the optimization procedure become invalid. However, this issue can be resolved by transforming the system into a real-valued one. Let \(i_{0}\) represent the indices of the real-valued modes, \(i_{c}\) the complex-valued modes, and \(i_{cc}\) their corresponding complex conjugates. By rearranging the modes in the order \(i_{0},i_{c},i_{cc}\), a transformation matrix \(\mathbf{T}\) can be defined as follows:
\[\mathbf{T}=\begin{bmatrix}T_{i_{0},i_{0}}&T_{i_{0},i_{c}}&T_{i_{0},i_{cc}}\\ T_{i_{c},i_{0}}&T_{i_{c},i_{c}}&T_{i_{c},i_{cc}}\\ T_{i_{cc},i_{0}}&T_{i_{cc},i_{c}}&T_{i_{cc},i_{cc}}\end{bmatrix}=\begin{bmatrix}\mathbf{I}_{N_{0}\times N_{0}}&\mathbf{0}_{N_{0}\times N_{c}}&\mathbf{0}_{N_{0}\times N_{c}}\\ \mathbf{0}_{N_{c}\times N_{0}}&\frac{1}{\sqrt{2}}\mathbf{I}_{N_{c}\times N_{c}}&\frac{1}{\mathrm{i}\sqrt{2}}\mathbf{I}_{N_{c}\times N_{c}}\\ \mathbf{0}_{N_{c}\times N_{0}}&\frac{1}{\sqrt{2}}\mathbf{I}_{N_{c}\times N_{c}}&-\frac{1}{\mathrm{i}\sqrt{2}}\mathbf{I}_{N_{c}\times N_{c}}\end{bmatrix} \tag{3.23}\]
Here, \(N_{0}\) represents the number of real-valued modes, and \(N_{c}\) represents the number of complex-valued modes (taking into account half of the complex-conjugate pairs). The transformation matrix is \(\mathbf{S}=\mathbf{T}^{-1}\). Applying the \(\mathbf{S}\) transformation matrix to the problem as described by equations (2.20) and (2.21) results in a real-valued \(A_{i,j}\) matrix and \(Q_{i,j,k}\) tensor, respectively. This transformation matrix can also be used to convert the transformed real coefficients back into the original complex coefficients of the complex-valued modes.
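A minimal sketch of constructing this transformation, assuming \(N_{0}\) real modes followed by \(N_{c}\) complex modes and then their \(N_{c}\) complex conjugates, under the conjugate-pair convention written above:

```matlab
% Minimal sketch of constructing (3.23); N0 real modes are assumed to be
% followed by Nc complex modes and then by their Nc complex conjugates.
Ic = eye(Nc);
T  = blkdiag(eye(N0), [Ic/sqrt(2),  Ic/(1i*sqrt(2)); ...
                       Ic/sqrt(2), -Ic/(1i*sqrt(2))]);
S  = inv(T);      % applied to the arrays as in (2.20) and (2.21)
```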
#### 3.3.1 Results
Two distinct configurations are explored in this study. In both cases, the dimensions of the domain are \(L_{x}=2\pi\) and \(L_{z}=\pi\), resulting in \(\alpha_{0}=1\) and \(\beta_{0}=2\). These domain sizes are chosen to ensure that the base wavenumbers (\(\alpha_{0}\), \(\beta_{0}\)) are close to the critical values as determined by linear stability analysis (\(\alpha=1.02\))(Orszag 1971) and standard non-linear stability analysis (\(\beta=2.04\)) (Nagy 2022). Previous research by Reddy _et al._ (1998) also investigated Poiseuille flow on the same domain. In the first model, denoted as M1, the number of modes is set to \(N_{x}=1,N_{y}=20,N_{z}=1\) resulting in \(N_{t}=180\). In the second model, denoted as M2, the mode counts are \(N_{x}=1,N_{y}=60,N_{z}=1\), yielding \(N_{t}=540\). In these models, only the modification of the base flow is considered, while higher-order Fourier modes are neglected. It is important to note that these models may not capture the true behavior of the flow perfectly, but they serve as demonstrations of the GKE method on relatively high-order systems compared to previous studies. Increasing the number of modes significantly raises the computational cost due to the evaluation of non-linear terms, a well-known challenge in reduced-order models (Sipp _et al._ 2020). For the investigation of systems with more than 10,000 degrees of freedom, the current GKE method is not feasible.
First, the linear stability limits (\(Re_{\rm L}\)) of the two models are determined, i.e., the Reynolds number where the first eigenvalue of the linear part (\(\mathbf{A}\)) becomes positive. The influence of \(N_{y}\) is investigated within the range of 10 to 100, as shown in Table 1. It is important to note that for fewer than 60 modes, the linear stability analysis depends strongly on the number of modes due to the high sensitivity of the non-normal linear operator (Trefethen & Embree 2005) to numerical errors. Simultaneously, the energy stability limit (\(Re_{\rm E}\)) of the system is less affected by the number of selected modes.
These two limits are crucial for the model and can be relatively easily calculated. Below the energy stability limit, the flow is unconditionally stable, meaning that the radius of the region of attraction is infinite. On the other hand, beyond the linear stability limit, the flow is unconditionally unstable, and the radius of the region of attraction is 0. Between these two limits, the proposed method can be employed to calculate the conditional stability threshold.
| \(N_{y}\) | \(Re_{\rm L}\) | \(Re_{\rm E}\) |
| --- | --- | --- |
| 10 | 1490080 | 49.8096 |
| **20** | **4544.82** | **49.6597** |
| 30 | 2668.92 | 49.6306 |
| 40 | 3452.81 | 49.6257 |
| 50 | 4971.36 | 49.6228 |
| **60** | **5770.67** | **49.6220** |
| 70 | 5995.41 | 49.6213 |
| 80 | 5973.48 | 49.6210 |
| 90 | 5947.59 | 49.6208 |
| 100 | 5930.30 | 49.6207 |

Table 1: The critical Reynolds number determined by linear stability analysis (\(Re_{\rm L}\)) and energy stability analysis (\(Re_{\rm E}\)) of the Poiseuille flow model with \(L_{x}=2\pi\), \(L_{z}=\pi\), \(N_{x}=1\), and \(N_{z}=1\). The values for the evaluated M1 and M2 models are indicated in bold.

In the case of the previously defined models, M1 and M2, optimized transformation matrices are calculated for the following Reynolds numbers: 1500, 1000, 500, 250, and 125. To save computational time, the optimization process starts at the highest Reynolds number. Once the procedure converges, the next optimization at a lower Reynolds number is initialized with the previous optimal transformation matrix. It has been observed that if the procedure is initialized with a transformation matrix calculated at a lower Reynolds number, the generalized kinetic energy increases even for infinitesimally small amplitudes (\(\gamma=0\)). Modifying the initial matrix in this case would require additional computational cost. However, if an optimal transformation matrix from a higher Reynolds number is used, this issue does not arise.
The results of the optimization are plotted in Fig. 6. The plot shows the square root of the allowable perturbation kinetic energy divided by the base flow kinetic energy (\(E\)), which is proportional to the ratio of the perturbation velocity magnitude to the base flow velocity magnitude. This quantity is referred to here as the threshold amplitude ratio. While some previous studies aimed to find the minimal threshold energy or minimal seed for Poiseuille flow on systems with significantly higher degrees of freedom, a rough comparison between the results has been attempted. The amplitude ratio at different Reynolds numbers is presented in Table 2. Notably, the allowable perturbation amplitude ratio obtained in this study is significantly smaller than the threshold amplitude reported in previous studies. In the studies conducted by Lundbladh _et al._ (1994) and Reddy _et al._ (1998), the base flow was perturbed with a prescribed or linearly optimized perturbation, and the threshold amplitude was investigated. However, non-linear optimization of the perturbation was not performed in these studies. Parente _et al._ (2022), on the other hand, investigated the flow on a considerably larger domain and solved the non-linear minimal seed problem. They achieved a threshold amplitude one magnitude smaller at a slightly smaller Reynolds number compared to the results reported by Lundbladh _et al._ (1994) and Reddy _et al._ (1998). Prior to comparing the results with the GKE method, it is crucial to acknowledge the difference in approach: while previous studies focused on minimizing the necessary perturbation energy to induce transition, the current study maximizes the allowable perturbation. For the small system (M1), the amplitude ratio is only one magnitude smaller than the result of Parente _et al._ (2022). However, for the larger system (M2), the values are three orders of magnitude smaller, indicating that the results obtained by the GKE method depend strongly on the number of degrees of freedom of the dynamical system. Moreover, it should be emphasized that the dimensions of the systems in the cited papers were orders of magnitude larger.
In the next step, power-law functions are fitted to the threshold amplitude as a function of Reynolds number (\(\sqrt{e}\propto Re^{\gamma_{A}}\)), an approach that has proven to be a good estimate in the case of Couette flow (Duguet _et al._ 2013) and that has also been used in the previously cited research. For Poiseuille flow, the fitted exponents are presented in Table 3, varying between -4.25 and -1.6 across the studies (Lundbladh _et al._ 1994; Reddy _et al._ 1998; Parente _et al._ 2022; Zhang & Tao 2023). Our exponents of -2.94 and -4.66 for the M1 and M2 models, respectively, lie close to this established range. The disparity between
Figure 6: The square root of the ratio of allowable perturbation kinetic energy to the base flow kinetic energy as a function of Reynolds number for Poiseuille flow. The dimensions of the periodic domain are \(2\pi\times 2\times\pi\).
the exponents of M1 and M2 highlights that not only the allowable perturbation amplitude itself but also the rate at which it decays with Reynolds number changes significantly as the number of unknowns increases.
However, it's important to note that the cited models typically have significantly more degrees of freedom, and the fitted range of Reynolds numbers varies among the cited papers. Furthermore, because the system is linearly unstable above a certain Reynolds number, the threshold amplitude as a function of Reynolds number must eventually deviate from a simple power-law function.
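For completeness, such a power-law fit reduces to linear least squares in log-log coordinates. The sketch below uses synthetic data with an M1-like exponent (not the measured thresholds of this study):

```python
# A minimal sketch (synthetic data, NOT the measured thresholds): fitting
# sqrt(e) = C * Re**gamma_A by linear least squares in log-log coordinates.
import numpy as np

rng = np.random.default_rng(1)
Re = np.array([125.0, 250.0, 500.0, 1000.0, 1500.0])
true_gamma = -2.94                      # M1-like exponent, for illustration only
amp = 10.0 * Re**true_gamma * np.exp(0.05 * rng.standard_normal(Re.size))

gamma_A, logC = np.polyfit(np.log(Re), np.log(amp), 1)  # slope = exponent
print(f"fitted exponent: {gamma_A:.2f} (true value {true_gamma})")
```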
## 4 Conclusion
In this study, an approach is introduced to establish the conditional stability limit of fluid flows by constructing a Lyapunov function. The core concept involves a linear transformation of the state variables and the definition of the Generalized Kinetic Energy (GKE) as the inner product of these new variables. The method described here is analogous to the alteration of the inner product of the original state vectors, a modification explored by Nerli _et al._ (2007).
As a direct consequence of the transformation, the growth rate of the generalized kinetic energy depends on the perturbation amplitude. This dependency enables us to calculate the threshold amplitude of stability, providing crucial insight into the system's behavior. Assuming an appropriate transformation matrix and a linearly stable system, the maximum potential growth rate of an infinitesimally small perturbation is negative. However, as the perturbation level increases, this growth rate steadily rises.
The maximum potential growth of the system, in terms of perturbation level, can be classified into two distinct regions: initially, there is a constant phase characterized by a horizontal line, indicative of the dominance of linear dynamics at low perturbation levels. This phase is succeeded by a transitional region, leading to another straight line with a positive slope at higher perturbation levels, where the non-linear aspects of the system take
\begin{table}
\begin{tabular}{l c c} Source & \(Re=1000\) & \(Re=1500\) \\ Lundbladh _et al._ (1994) & - & 0.0053 \\ Reddy _et al._ (1998) & - & 0.00522 \\ Parente _et al._ (2022) & 0.00144 & - \\ M1, \(N_{t}=180\) & 0.0000432 & 0.0000220 \\ M2, \(N_{t}=540\) & \(5.32\cdot 10^{-7}\) & \(4.61\cdot 10^{-8}\) \\ \end{tabular}
\end{table}
Table 2: The threshold perturbation amplitude ratio (\(\sqrt{e/E}\)) for Poiseuille flow at different Reynolds numbers. Additional properties of the results can be found in Table 3.
\begin{table}
\begin{tabular}{l l c c c} Source & Perturbation & Domain & \(Re\) range & \(\gamma_{A}\) \\ Lundbladh _et al._ (1994) & Oblique wave & \(2\pi\times 2\times 2\pi\) & 1500-5000 & -1.75 \\ Reddy _et al._ (1998) & Oblique wave & \(2\pi\times 2\times 2\pi\) & 1500-5000 & -1.6 \\ Parente _et al._ (2022) & Minimal seed & \(250\times 2\times 125\) & 1000-1568 & -4.25 \\ Zhang \& Tao (2023) & Minimal seed & \(100\times 2\) (2D) & 2500-4500 & -3.8 \\ M1, \(N_{t}=180\) & GKE stability & \(2\pi\times 2\times\pi\) & 125-1500 & -2.94 \\ M2, \(N_{t}=540\) & GKE stability & \(2\pi\times 2\times\pi\) & 125-1500 & -4.66 \\ \end{tabular}
\end{table}
Table 3: Coefficients of the power law for the threshold amplitude (\(\sqrt{e_{\min}}\propto Re^{\gamma_{A}}\)); the exponent is half of the corresponding exponent for the threshold energy.
precedence. The critical point occurs when the possible maximum growth rate of generalized kinetic energy intersects the zero line. This critical perturbation level signifies a threshold below which the flow remains stable, as the generalized kinetic energy diminishes, even though the standard kinetic energy may still increase.
In the transformed state space, the attractive region is approximated as a hypersphere with a radius equal to the critical perturbation level. In the original state space, this region appears as a hyperellipsoid, with its smallest semiminor axis determining the maximum allowable perturbation kinetic energy. To optimize this perturbation kinetic energy level, the transformation matrix is fine-tuned. This optimization process involves deriving analytic gradients, rendering the method viable even for systems with a few thousand degrees of freedom.
A crucial element in the calculations involves determining the global maximum of the potential growth rate among various perturbation states. To guarantee accuracy, the presented technique incorporates analytic gradients and the Hessian matrix, coupled with the use of multiple seed locations to ensure a comprehensive exploration of the solution space.
The effectiveness of the method is demonstrated first on a relatively straightforward dynamical system: the TTRD' model, a simplified two-dimensional representation of turbulent flow. Here, the GKE approach adeptly approximates the region of attraction. Unstable solutions are identified with initial norms approximately 2% larger than the predicted radius of the attraction region. Notably, the proposed method outperformed the quadratic constrained method, providing significantly more precise results.
Moving forward, the GKE method is applied to three variations of the four-dimensional Waleffe model, each differing only in its parameters. In two instances, the presented GKE method predicted allowable perturbation levels comparable to those derived by Kalur _et al._ (2021) using the sum-of-squares method, and it outperformed the quadratic constrained method, whose results differ by orders of magnitude. Specifically, in the cases of the W95B and BT parameter sets, unstable solutions (Cossu 2005; Kalur _et al._ 2021) lie close to the predicted region of attraction, corroborating the accuracy of the GKE method.
Finally, the method is extended to a reduced-order model of the Poiseuille flow with 180 and 540 degrees of freedom. The predicted radius of the region of attraction decays approximately as a power law, with exponents of -2.94 and -4.66 for the small and large systems, respectively. However, since the flow is linearly unstable above a certain Reynolds number, the decay of the radius must be faster at higher Reynolds numbers.
In conclusion, the GKE method stands as a promising tool in the realm of fluid dynamics, providing accurate predictions for the conditional stability of linearly stable systems with a moderate number of degrees of freedom. Challenges persist in handling large systems, and further improvements of the method for flow modeling are necessary in the pursuit of understanding the conditional stability limits of fluid flows.
###### Acknowledgements.
The author is grateful to Yohann Duguet at CNRS for their helpful recommendations.
**Funding.** The research leading to these results received funding from the National Research Development and Innovation Office of Hungary under Grant Agreement no. K142675.
**Declaration of interests.** The author reports no conflict of interest.
**Author ORCIDs.** P. T. Nagy, [https://orcid.org/0000-0002-8024-3824](https://orcid.org/0000-0002-8024-3824)
## Appendix A Numerical methods
### The maximization of \(\mu_{h}\)
The critical aspect of the method lies in determining the maximum potential growth rate of the generalized kinetic energy (27). In practical implementations, Matlab's _fmincon_ is employed, a tool that benefits significantly from being provided the gradient and the Hessian matrix of the cost function. The derivatives of the growth rate (26) with respect to the normalized state vector (\(\tilde{r}_{p}\)) are expressed as follows:
\[\frac{\partial\mu_{h}}{\partial\tilde{r}_{p}}=2\left(\tilde{A}_{p,i}+\tilde{A}_{i,p}\right)\tilde{r}_{i}+2\gamma\left(S_{i,j}^{-1}Q_{j,k,l}S_{k,p}S_{l,o}\tilde{r}_{o}\tilde{r}_{i}+S_{i,j}^{-1}Q_{j,k,l}S_{k,m}\tilde{r}_{m}S_{l,p}\tilde{r}_{i}+S_{p,j}^{-1}Q_{j,k,l}S_{k,m}\tilde{r}_{m}S_{l,o}\tilde{r}_{o}\right) \tag{A.1}\]
where \(\tilde{A}_{i,j}\) is the transformed \(A_{i,j}\) matrix defined in equation (20). Let us introduce the vectors \(v_{j}=S_{j,i}^{-T}\tilde{r}_{i}\) and \(\tilde{q}_{i}=S_{i,j}\tilde{r}_{j}\) to simplify the gradient:
\[\frac{\partial\mu_{h}}{\partial\tilde{r}_{p}}=2\left(\tilde{A}_{p,i}+\tilde{A}_{i,p}\right)\tilde{r}_{i}+2\gamma\left(S_{p,k}^{T}Q_{j,k,l}\tilde{q}_{l}v_{j}+S_{p,l}^{T}Q_{j,k,l}\tilde{q}_{k}v_{j}+S_{p,j}^{-1}Q_{j,k,l}\tilde{q}_{k}\tilde{q}_{l}\right). \tag{A.2}\]
The Hessian matrix of \(\mu_{h}\) (26) is given by:
\[\frac{\partial^{2}\mu_{h}}{\partial\tilde{r}_{p}\partial\tilde{r}_{q}}=2\left(\tilde{A}_{p,q}+\tilde{A}_{q,p}\right)+2\gamma\left(S_{i,j}^{-1}Q_{j,k,l}S_{k,p}S_{l,q}\tilde{r}_{i}+S_{q,j}^{-1}Q_{j,k,l}S_{k,p}S_{l,o}\tilde{r}_{o}+S_{i,j}^{-1}Q_{j,k,l}S_{k,q}S_{l,p}\tilde{r}_{i}+S_{q,j}^{-1}Q_{j,k,l}S_{k,m}\tilde{r}_{m}S_{l,p}+S_{p,j}^{-1}Q_{j,k,l}S_{k,q}S_{l,o}\tilde{r}_{o}+S_{p,j}^{-1}Q_{j,k,l}S_{k,m}\tilde{r}_{m}S_{l,q}\right) \tag{A.3}\]
By introducing the expressions:
\[B_{k,l}=Q_{j,k,l}v_{j},\ \ C_{j,k}=Q_{j,k,l}\tilde{q}_{l},\ \ D_{j,l}=Q_{j,k,l}\tilde{q}_{k}, \tag{A.4}\]
equation (A.3) simplifies to:
\[\frac{\partial^{2}\mu_{h}}{\partial\tilde{r}_{p}\partial\tilde{r}_{q}}=2\left(\tilde{A}_{p,q}+\tilde{A}_{q,p}\right)+2\gamma\left(S_{p,k}^{T}B_{k,l}S_{l,q}+\left(S_{q,j}^{-1}C_{j,k}S_{k,p}\right)^{T}+\left(S_{q,k}^{T}B_{k,l}S_{l,p}\right)^{T}+\left(S_{q,j}^{-1}D_{j,l}S_{l,p}\right)^{T}+S_{p,j}^{-1}C_{j,k}S_{k,q}+S_{p,j}^{-1}D_{j,l}S_{l,q}\right) \tag{A.5}\]
It's worth noting that the Hessian matrix consists of the sum of four matrices and their transposes, resulting in a symmetric expression. This symmetry is expected due to the nature of second derivatives. From a practical perspective, only half of the expression needs to be calculated; the other half can be obtained by transposing the appropriate matrices.
The optimization is constrained by the requirement that the transformed state vector should be unitary:
\[c=\tilde{r}_{i}\tilde{r}_{i}-1=0 \tag{A.6}\]
The gradient of the constraint is straightforward:
\[\frac{\partial c}{\partial\tilde{r}_{i}}=2\tilde{r}_{i}.\] (A.7)
The Hessian of the constraint (A.6) is given by:
\[\frac{\partial^{2}c}{\partial\tilde{r}_{i}\partial\tilde{r}_{j}}=2\delta_{i,j}\] (A.8)
where \(\delta_{i,j}\) is the Kronecker delta function and the right-hand side is twice the identity matrix.
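To illustrate the overall optimization loop, the following sketch maximizes a stand-in growth rate with the same linear-plus-cubic structure over the unit sphere, supplying analytic gradients and using multiple random seeds. SciPy's SLSQP replaces Matlab's _fmincon_ here, and the matrices are random placeholders rather than data from this work.

```python
# A minimal sketch (random stand-in data): maximize
# mu(r) = r^T (A~ + A~^T) r + 2*gamma*Q~(r,r,r) subject to r^T r = 1,
# with analytic gradients and multiple seeds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, gamma = 6, 0.1
At = rng.standard_normal((n, n))        # stand-in for the transformed A matrix
Qt = rng.standard_normal((n, n, n))     # stand-in for the transformed Q tensor
As = At + At.T

def mu(r):
    return r @ As @ r + 2.0 * gamma * np.einsum("ijk,i,j,k", Qt, r, r, r)

def grad_mu(r):                          # analytic gradient of this stand-in objective
    cubic = (np.einsum("ijk,j,k->i", Qt, r, r)
             + np.einsum("ijk,i,k->j", Qt, r, r)
             + np.einsum("ijk,i,j->k", Qt, r, r))
    return 2.0 * As @ r + 2.0 * gamma * cubic

unit = {"type": "eq", "fun": lambda r: r @ r - 1.0, "jac": lambda r: 2.0 * r}

best = -np.inf
for _ in range(20):                      # multiple seeds to approach the global maximum
    r0 = rng.standard_normal(n)
    r0 /= np.linalg.norm(r0)
    res = minimize(lambda r: -mu(r), r0, jac=lambda r: -grad_mu(r),
                   constraints=[unit], method="SLSQP")
    if res.success:
        best = max(best, -res.fun)
print("maximum growth rate over seeds:", best)
```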
### The maximization of \(e_{\rm min}\)
Maximizing \(e_{\rm min}(S_{i,j})\) (2.31) directly is possible, but it involves solving a complex, nonlinear equation system to calculate the gradients \(({\rm d}\,e_{\rm min}/{\rm d}\,S_{i,j})\). An alternative approach is to introduce the critical perturbation level \(\gamma_{\rm crit}\) as an additional variable of the cost function, \(e_{\rm min}(S_{i,j},\gamma_{\rm crit})\), constrained by the requirement that the maximum growth rate must be zero (2.28). The gradients of the kinetic energy of the allowable perturbation (\(e_{\rm min}\)) are given by:
\[\frac{\partial e_{\rm min}}{\partial S_{p,q}}=2\gamma_{\rm crit}^{2}\tilde{r}_{\rm min,q}S_{p,j}\tilde{r}_{\rm min,j}=2\gamma_{\rm crit}^{2}\tilde{r}_{\rm min,q}\tilde{q}_{\rm min,p},\] (A.9)
and
\[\frac{\partial e_{\rm min}}{\partial\gamma_{\rm crit}}=2\gamma_{\rm crit} \lambda_{\rm min}.\] (A.10)
It's important to note that \(\tilde{r}_{\rm min,i}\) is the unit vector corresponding to the smallest eigenvalue of the matrix \(S_{j,i}S_{j,k}\). This vector is distinct from the \(\tilde{r}_{i}\) used in the subsequent expressions for calculating the maximum of \(\mu_{h}\); \(\tilde{r}_{\rm min,i}\) depends solely on the transformation matrix.
The derivatives of the constraint (2.28) with respect to the elements of the transformation matrix are
\[\frac{\partial\mu_{h,\rm max}}{\partial S_{p,q}}=2\left(-\tilde{r}_{i}S_{i,p}^{-1}S_{q,j}^{-1}A_{j,m}S_{m,o}\tilde{r}_{o}+\tilde{r}_{i}S_{i,j}^{-1}A_{j,p}\tilde{r}_{q}\right)+2\gamma\left(-\tilde{r}_{i}S_{i,p}^{-1}S_{q,l}^{-1}Q_{l,m,o}S_{m,n}\tilde{r}_{n}S_{o,r}\tilde{r}_{r}+\tilde{r}_{i}S_{i,j}^{-1}Q_{j,p,l}\tilde{r}_{q}S_{l,o}\tilde{r}_{o}+\tilde{r}_{i}S_{i,j}^{-1}Q_{j,k,p}S_{k,m}\tilde{r}_{m}\tilde{r}_{q}\right)\] (A.11)
where it is assumed that the inverse of the slightly perturbed transformation matrix can be approximated as:
\[\left(S_{i,j}+\delta S_{i,j}\right)^{-1}\approx S_{i,j}^{-1}-S_{i,k}^{-1}\, \delta S_{k,l}\,S_{l,j}^{-1}.\] (A.12)
The expression (A.11) can be further simplified using the previously defined vectors and the transformed \(A_{i,j}\) matrix:
\[\frac{\partial\mu_{h,\rm max}}{\partial S_{p,q}}=2\left(-v_{p}\tilde{A}_{q,j}\tilde{r}_{j}+v_{j}A_{j,p}\tilde{r}_{q}\right)+2\gamma\left(-v_{p}S_{q,l}^{-1}Q_{l,m,o}\tilde{q}_{m}\tilde{q}_{o}+v_{j}Q_{j,p,l}\tilde{r}_{q}\tilde{q}_{l}+v_{j}Q_{j,k,p}\tilde{q}_{k}\tilde{r}_{q}\right)\] (A.13)
Furthermore, the derivative of the growth rate with respect to the perturbation level is
\[\frac{\partial\mu_{h,\rm max}}{\partial\gamma}=2\,\gamma\,\tilde{Q}_{i,j,k}\, \tilde{r}_{i}\,\tilde{r}_{j}\,\tilde{r}_{k},\] (A.14)
which is simply the non-linear part of \(\mu_{h}\),
\[\frac{\partial\mu_{h,\max}}{\partial\gamma}=\mu_{h,\mathrm{NL}}.\] (A.15)
|
2309.16925 | Lexicographical ordering of hypergraphs via spectral moment | The lexicographical ordering of hypergraphs via spectral moments is called
the $S$-order of hypergraphs. In this paper, the $S$-order of hypergraphs is
investigated. We characterize the first and last hypergraphs in an $S$-order of
all uniform hypertrees and all linear unicyclic uniform hypergraphs with given
girth, respectively. And we give the last hypergraph in an $S$-order of all
linear unicyclic uniform hypergraphs. | Hong Zhou, Changjiang Bu | 2023-09-29T01:52:41Z | http://arxiv.org/abs/2309.16925v1 | # Lexicographical ordering of hypergraphs via spectral moments
###### Abstract
The lexicographical ordering of hypergraphs via spectral moments is called the \(S\)-order of hypergraphs. In this paper, the \(S\)-order of hypergraphs is investigated. We characterize the first and last hypergraphs in an \(S\)-order of all uniform hypertrees and all linear unicyclic uniform hypergraphs with given girth, respectively. And we give the last hypergraph in an \(S\)-order of all linear unicyclic uniform hypergraphs.
keywords: hypergraph, spectral moment, adjacency tensor
_AMS classification (2020):_ 05C65, 15A18
## 1 Introduction
Let \(G\) be a simple undirected graph with \(n\) vertices and \(A\) be the adjacency matrix of \(G\). The \(d\)_th order spectral moment_ of \(G\) is the sum of the \(d\)th powers of all the eigenvalues of \(A\), denoted by \(\mathrm{S}_{d}(G)\)[1]. For two graphs \(G_{1},G_{2}\) with \(n\) vertices, if \(\mathrm{S}_{i}(G_{1})=\mathrm{S}_{i}(G_{2})\) for \(i=0,1,2,\ldots,n-1\), then the adjacency matrices of \(G_{1}\) and \(G_{2}\) have the same spectrum. Therefore, \(\mathrm{S}_{i}(G_{1})=\mathrm{S}_{i}(G_{2})\) for \(i=0,1,2,\ldots\). We write \(G_{1}\prec_{s}G_{2}\) (\(G_{1}\) comes before \(G_{2}\) in an \(S\)-order) if there exists a \(k\in\{1,2,\ldots,n-1\}\) such that \(\mathrm{S}_{i}(G_{1})=\mathrm{S}_{i}(G_{2})\) for \(i=0,1,2,\ldots,k-1\) and \(\mathrm{S}_{k}(G_{1})<\mathrm{S}_{k}(G_{2})\). We write \(G_{1}=_{s}G_{2}\) if \(\mathrm{S}_{i}(G_{1})=\mathrm{S}_{i}(G_{2})\) for \(i=0,1,2,\ldots,n-1\).
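For the graph case, this comparison can be carried out directly from the eigenvalues of the adjacency matrices. The following small sketch (plain NumPy, our own illustration rather than code from any cited work) compares the path and the star on \(4\) vertices, whose spectral-moment sequences first differ at \(\mathrm{S}_{4}\).

```python
# A minimal sketch: lexicographic comparison of graphs via spectral moments,
# S_d(G) = sum_i lambda_i^d, computed from the adjacency matrix.
import numpy as np

def spectral_moments(A, kmax):
    lam = np.linalg.eigvalsh(A)
    return [float(np.sum(lam**d)) for d in range(kmax + 1)]

def s_order(A1, A2, kmax=8):
    """-1 if G1 comes first, +1 if G2 comes first, 0 if all compared moments agree."""
    for x, y in zip(spectral_moments(A1, kmax), spectral_moments(A2, kmax)):
        if not np.isclose(x, y):
            return -1 if x < y else 1
    return 0

# Path vs star on 4 vertices: S_0..S_3 agree; they first differ at S_4 (14 vs 18).
P = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
S = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)
print(s_order(P, S))  # -1: the path comes before the star
```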
In 1987, Cvetkovic and Rowlinson [2] characterized the first and last graphs in an \(S\)-order of all trees and all unicyclic graphs with given girth, respectively. Other works on the \(S\)-order of graphs can be found in [3; 4; 5; 6; 7; 8]. The \(S\)-order of graphs has been used in producing graph catalogues [9].
In this paper, the \(S\)-order of hypergraphs is defined. We characterize the first and last hypergraphs in an \(S\)-order of all uniform hypertrees and all linear unicyclic
uniform hypergraphs with given girth, respectively. And we give the last hypergraph in an \(S\)-order of all linear unicyclic uniform hypergraphs.
Next, we introduce some notations and concepts for tensors and hypergraphs. For a positive integer \(n\), let \([n]=\{1,2,\ldots,n\}\). An \(m\)-order \(n\)-dimension complex _tensor_\(\mathcal{A}=(a_{i_{1}\cdots i_{m}})\) is a multidimensional array with \(n^{m}\) entries on complex number field \(\mathbb{C}\), where \(i_{j}\in[n],j=1,\ldots,m\).
Let \(\mathbb{C}^{n}\) be the set of \(n\)-dimension complex vectors and \(\mathbb{C}^{[m,n]}\) be the set of \(m\)-order \(n\)-dimension complex tensors. For \(x=\left(x_{1},\ldots,x_{n}\right)^{\mathrm{T}}\in\mathbb{C}^{n}\), \(\mathcal{A}x^{m-1}\) is a vector in \(\mathbb{C}^{n}\) whose \(i\)th component is
\[(\mathcal{A}x^{m-1})_{i}=\sum_{i_{2},\ldots,i_{m}=1}^{n}a_{ii_{2}\cdots i_{m}} x_{i_{2}}\cdots x_{i_{m}}.\]
A number \(\lambda\in\mathbb{C}\) is called an _eigenvalue_ of \(\mathcal{A}\) if there exists a nonzero vector \(x\in\mathbb{C}^{n}\) such that
\[\mathcal{A}x^{m-1}=\lambda x^{[m-1]},\]
where \(x^{[m-1]}=\left(x_{1}^{m-1},\ldots,x_{n}^{m-1}\right)^{\mathrm{T}}\). The number of eigenvalues of \(\mathcal{A}\) is \(n(m-1)^{n-1}\)[10, 11].
A hypergraph \(\mathcal{H}=(V(\mathcal{H}),E(\mathcal{H}))\) is called \(m\)_-uniform_ if \(|e|=m\geq 2\) for all \(e\in E(\mathcal{H})\). For an \(m\)-uniform hypergraph \(\mathcal{H}\) with \(n\) vertices, its _adjacency tensor_ is the order \(m\) dimension \(n\) tensor \(\mathcal{A}_{\mathcal{H}}=(a_{i_{1}i_{2}\cdots i_{m}})\), where
\[a_{i_{1}i_{2}\cdots i_{m}}=\begin{cases}\frac{1}{(m-1)!},&\text{if }\{i_{1},i_{2}, \ldots,i_{m}\}\in E(\mathcal{H}),\\ 0,&\text{otherwise}.\end{cases}\]
Clearly, \(\mathcal{A}_{\mathcal{H}}\) is the adjacency matrix of \(\mathcal{H}\) when \(\mathcal{H}\) is \(2\)-uniform [12]. The _degree_ of a vertex \(v\) of \(\mathcal{H}\) is the number of edges containing the vertex, denoted by \(d_{\mathcal{H}}(v)\) or \(d_{v}\). A vertex of \(\mathcal{H}\) is called a _core vertex_ if it has degree one. An edge \(e\) of \(\mathcal{H}\) is called a _pendent edge_ if it contains \(|e|-1\) core vertices. Sometimes a core vertex in a pendent edge is also called a _pendent vertex_. The _girth_ of \(\mathcal{H}\) is the minimum length of the hypercycles of \(\mathcal{H}\), denoted by \(g(\mathcal{H})\). \(\mathcal{H}\) is called _linear_ if any two different edges intersect in at most one vertex. The \(m\)_-power hypergraph_ \(G^{(m)}\) is the \(m\)-uniform hypergraph obtained by adding \(m-2\) vertices with degree one to each edge of the graph \(G\).
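As a concrete illustration of these definitions, the sketch below (plain NumPy; the small \(3\)-uniform example is our own) builds the adjacency tensor and checks that \((\mathcal{A}_{\mathcal{H}}x^{m-1})_{i}\) with \(x=(1,\ldots,1)^{\mathrm{T}}\) recovers the degree \(d_{i}\).

```python
# A minimal sketch: the adjacency tensor of a 3-uniform hypergraph and the
# contraction (A x^{m-1})_i evaluated directly from the definitions.
import numpy as np
from itertools import permutations
from math import factorial

def adjacency_tensor(n, edges, m):
    A = np.zeros((n,) * m)
    for e in edges:
        for p in permutations(e):              # every ordering of the edge's vertices
            A[p] = 1.0 / factorial(m - 1)
    return A

def apply_tensor(A, x):
    out = A
    for _ in range(A.ndim - 1):                # contract the last m-1 indices with x
        out = out @ x
    return out                                 # the vector (A x^{m-1})_i

n, m = 5, 3
edges = [(0, 1, 2), (2, 3, 4)]                 # the 3-uniform hyperpath P_2^{(3)}
A = adjacency_tensor(n, edges, m)
# Sanity check: with x = (1,...,1), each edge containing i contributes
# (m-1)! orderings times 1/(m-1)!, so the result is the degree vector.
print(apply_tensor(A, np.ones(n)))             # [1. 1. 2. 1. 1.]
```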
In 2005, the concept of eigenvalues of tensors was proposed by Qi [10] and Lim
[11], independently. The eigenvalues of tensors and related problems are important research topics in spectral hypergraph theory [13; 14; 15; 16], especially the trace of tensors [16; 17; 18; 19; 20].
Morozov and Shakirov gave an expression of the \(d\)th order trace \(\mathrm{Tr}_{d}(\mathcal{A})\) of a tensor \(\mathcal{A}\)[17]. Hu et al. proved that \(\mathrm{Tr}_{d}(\mathcal{A})\) is equal to the sum of the \(d\)th powers of all eigenvalues of \(\mathcal{A}\)[18]. For a uniform hypergraph \(\mathcal{H}\), the sum of the \(d\)th powers of all eigenvalues of \(\mathcal{A}_{\mathcal{H}}\) is called the \(d\)_th order spectral moment_ of \(\mathcal{H}\), denoted by \(\mathrm{S}_{d}(\mathcal{H})\). Then \(\mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{H}})=\mathrm{S}_{d}(\mathcal{H})\). Shao et al. established some formulas for the \(d\)th order trace of tensors in terms of some graph parameters [19]. Clark and Cooper expressed the spectral moments of hypergraphs by the number of Veblen multi-hypergraphs and used this result to give the "Harary-Sachs" coefficient theorem for hypergraphs [16]. Chen et al. gave a formula for the spectral moment of a hypertree in terms of the number of some sub-hypertrees [20].
This paper is organized as follows. In Section 2, the \(S\)-order of hypergraphs is defined. We introduce 4 operations of moving edges on hypergraphs and give changes of the Zagreb index after operations of moving edges. In Section 3, we give the first and last hypergraphs in an \(S\)-order of all uniform hypertrees. In Section 4, the expressions of \(2m\)th and \(3m\)th order spectral moments of linear unicyclic \(m\)-uniform hypergraphs are obtained in terms of the number of sub-hypergraphs. We characterize the first and last hypergraphs in an \(S\)-order of all linear unicyclic uniform hypergraphs with given girth. And we give the last hypergraph in an \(S\)-order of all linear unicyclic uniform hypergraphs.
## 2 Preliminaries
For two \(m\)-uniform hypergraphs \(\mathcal{H}_{1},\mathcal{H}_{2}\) with \(n\) vertices, if \(\mathrm{S}_{i}(\mathcal{H}_{1})=\mathrm{S}_{i}(\mathcal{H}_{2})\) for \(i=0,1,2,\ldots,n(m-1)^{n-1}-1\), then adjacency tensors of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) have the same spectrum. Therefore, \(\mathrm{S}_{i}(\mathcal{H}_{1})=\mathrm{S}_{i}(\mathcal{H}_{2})\) for \(i=0,1,2,\ldots\). We write \(\mathcal{H}_{1}\prec_{s}\mathcal{H}_{2}\) (\(\mathcal{H}_{1}\) comes before \(\mathcal{H}_{2}\) in an \(S\)-order) if there exists a \(k\in\{1,2,\ldots,n(m-1)^{n-1}-1\}\) such that \(\mathrm{S}_{i}(\mathcal{H}_{1})=\mathrm{S}_{i}(\mathcal{H}_{2})\) for \(i=0,1,2,\ldots,k-1\) and \(\mathrm{S}_{k}(\mathcal{H}_{1})<\mathrm{S}_{k}(\mathcal{H}_{2})\). We write \(\mathcal{H}_{1}=_{s}\mathcal{H}_{2}\) if \(\mathrm{S}_{i}(\mathcal{H}_{1})=\mathrm{S}_{i}(\mathcal{H}_{2})\) for \(i=0,1,2,\ldots,n(m-1)^{n-1}-1\). In this paper, \(\mathrm{S}_{i}(\mathcal{H})\) is also written \(\mathrm{S}_{i},i=0,1,2,\ldots\). Let \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) be two sets of hypergraphs. We write \(\mathbf{H}_{1}\prec_{s}\mathbf{H}_{2}\) (\(\mathbf{H}_{1}\) comes before \(\mathbf{H}_{2}\) in an \(S\)-order) if \(\mathcal{H}_{1}\prec_{s}\mathcal{H}_{2}\) for each \(\mathcal{H}_{1}\in\mathbf{H}_{1}\) and each \(\mathcal{H}_{2}\in\mathbf{H}_{2}\).
For an \(m\)-uniform hypergraph \(\mathcal{H}\) with \(n\) vertices, let \(\mathrm{S}_{0}(\mathcal{H})=n(m-1)^{n-1}.\) In [12], the \(d\)th order traces of the adjacency tensor of an \(m\)-uniform hypergraph were
given for \(d=1,2,\ldots,m\).
**Lemma 2.1**.: _[_12_]_ _Let \(\mathcal{H}\) be an \(m\)-uniform hypergraph with \(n\) vertices and \(q\) edges. Then_
_(1) \(\mathrm{Tr}_{d}(\mathcal{A}_{\mathcal{H}})=0\) for \(d=1,2,\ldots,m-1\);_
_(2) \(\mathrm{Tr}_{m}(\mathcal{A}_{\mathcal{H}})=qm^{m-1}(m-1)^{n-m}\)._
Next, we introduce 4 operations of moving edges on hypergraphs and give changes of the Zagreb index after operations of moving edges. The sum of the squares of the degrees of all vertices of a hypergraph \(\mathcal{H}\) is called the _Zagreb index_ of \(\mathcal{H}\), denoted by \(M(\mathcal{H})\)[21]. Let \(E^{\prime}\subseteq E(\mathcal{H})\), we denote by \(\mathcal{H}-E^{\prime}\) the sub-hypergraph of \(\mathcal{H}\) obtained by deleting the edges of \(E^{\prime}\).
**Transformation 1**: Let \(e=\{u,v,v_{1},v_{2},\ldots,v_{m-2}\}\) be an edge of an \(m\)-uniform hypergraph \(\mathcal{H}\), \(e_{1},e_{2},\ldots,e_{t}\) be the pendent edges incident with \(u\), where \(t\geq 1\), \(d_{\mathcal{H}}(u)=t+1\) and \(d_{\mathcal{H}}(v)\geq 2\). Write \(e_{i}^{{}^{\prime}}=(e_{i}\setminus\{u\})\bigcup\{v\}\). Let \(\mathcal{H}^{{}^{\prime}}=\mathcal{H}-\{e_{1},\ldots,e_{t}\}+\{e_{1}^{\prime},\ldots,e_{t}^{\prime}\}\).
**Lemma 2.2**.: _Let \(\mathcal{H}^{\prime}\) be obtained from \(\mathcal{H}\) by transformation 1. Then \(M(\mathcal{H}^{\prime})>M(\mathcal{H})\)._
Proof.: By the definition of the Zagreb index, we have
\[M(\mathcal{H}^{\prime})-M(\mathcal{H}) =d_{\mathcal{H}^{\prime}}^{2}(v)-d_{\mathcal{H}}^{2}(v)+d_{ \mathcal{H}^{\prime}}^{2}(u)-d_{\mathcal{H}}^{2}(u)\] \[=(d_{\mathcal{H}}(v)+t)^{2}-d_{\mathcal{H}}^{2}(v)+1-(t+1)^{2}\] \[=2t(d_{\mathcal{H}}(v)-1)>0.\]
**Transformation 2**: Let \(u\) and \(v\) be two vertices in a uniform hypergraph \(\mathcal{H}\), \(e_{1},e_{2},\ldots,e_{r}\) be the pendent edges incident with \(u\) and \(e_{r+1},e_{r+2},\ldots,e_{r+t}\) be the pendent edges incident with \(v\), where \(r\geq 1\) and \(t\geq 1\). Write \(e_{i}^{{}^{\prime}}=(e_{i}\setminus\{u\})\bigcup\{v\},i\in[r]\), \(e_{i}^{{}^{\prime}}=(e_{i}\setminus\{v\})\bigcup\{u\},i=r+1,\ldots,r+t\). If \(d_{\mathcal{H}}(v)\geq d_{\mathcal{H}}(u)\), let \(\mathcal{H}^{\prime}=\mathcal{H}-\{e_{1},\ldots,e_{r}\}+\{e_{1}^{\prime}, \ldots,e_{r}^{\prime}\}\). If \(d_{\mathcal{H}}(v)<d_{\mathcal{H}}(u)\), let \(\mathcal{H}^{\prime}=\mathcal{H}-\{e_{r+1},\ldots,e_{r+t}\}+\{e_{r+1}^{\prime },\ldots,e_{r+t}^{\prime}\}\).
**Lemma 2.3**.: _Let \(\mathcal{H}^{\prime}\) be obtained from \(\mathcal{H}\) by transformation 2. Then \(M(\mathcal{H}^{\prime})>M(\mathcal{H})\)._
Proof.: By the definition of the Zagreb index, if \(d_{\mathcal{H}}(v)\geq d_{\mathcal{H}}(u)\), we have
\[M(\mathcal{H}^{\prime})-M(\mathcal{H}) =d_{\mathcal{H}^{\prime}}^{2}(v)-d_{\mathcal{H}}^{2}(v)+d_{ \mathcal{H}^{\prime}}^{2}(u)-d_{\mathcal{H}}^{2}(u)\] \[=(d_{\mathcal{H}}(v)+r)^{2}-d_{\mathcal{H}}^{2}(v)+(d_{\mathcal{H }}(u)-r)^{2}-d_{\mathcal{H}}^{2}(u)\] \[=2r(r+d_{\mathcal{H}}(v)-d_{\mathcal{H}}(u))>0.\]
If \(d_{\mathcal{H}}(v)<d_{\mathcal{H}}(u)\), we have
\[M(\mathcal{H}^{\prime})-M(\mathcal{H}) =d_{\mathcal{H}^{\prime}}^{2}(v)-d_{\mathcal{H}}^{2}(v)+d_{ \mathcal{H}^{\prime}}^{2}(u)-d_{\mathcal{H}}^{2}(u)\] \[=(d_{\mathcal{H}}(v)-t)^{2}-d_{\mathcal{H}}^{2}(v)+(d_{\mathcal{ H}}(u)+t)^{2}-d_{\mathcal{H}}^{2}(u)\] \[=2t(t+d_{\mathcal{H}}(u)-d_{\mathcal{H}}(v))>0.\]
An \(m\)-uniform hypertree with maximum degree less than or equal to 2 is called a _binary \(m\)-uniform hypertree_. For two vertices \(u,v\) of an \(m\)-uniform hypergraph \(\mathcal{H}\), the _distance_ between \(u\) and \(v\) is the length of a shortest path from \(u\) to \(v\), denoted by \(d_{\mathcal{H}}(u,v)\)[22]. Let \(d_{\mathcal{H}}(u,u)=0\). Let \(\mathcal{H}_{0},\mathcal{H}_{1},\ldots,\mathcal{H}_{p}\) be pairwise disjoint connected hypergraphs with \(v_{1},\ldots,v_{p}\in V(\mathcal{H}_{0})\) and \(u_{i}\in V(\mathcal{H}_{i})\) for each \(i\in[p]\), where \(p\geq 1\). Denote by \(\mathcal{H}_{0}(v_{1},\ldots,v_{p})\bigodot(\mathcal{H}_{1}(u_{1}),\ldots,\mathcal{H}_{p}(u_{p}))\) the hypergraph obtained from \(\mathcal{H}_{0}\) by attaching \(\mathcal{H}_{1},\ldots,\mathcal{H}_{p}\) to \(\mathcal{H}_{0}\) with \(u_{i}\) identified with \(v_{i}\) for each \(i\in[p]\)[23]. Let \(P_{q}\) be a path of length \(q\).
**Transformation 3**: Let \(\mathcal{H}\neq P_{0}^{(m)}\) be an \(m\)-uniform connected hypergraph with \(u\in V(\mathcal{H})\). Let \(\mathcal{T}\) be a binary \(m\)-uniform hypertree with \(v_{k},v_{n},u_{1},u_{2}\in V(\mathcal{T})\) and \(e_{k},e_{k+1}\in E(\mathcal{T})\) such that \(d_{\mathcal{T}}(v_{k})=2\), \(v_{k},u_{1}\in e_{k},v_{k},u_{2}\in e_{k+1}\), \(u_{1},u_{2}\neq v_{k}\), \(v_{n}\) be a pendent vertex and \(d_{\mathcal{T}}(u_{1},v_{n})>d_{\mathcal{T}}(u_{2},v_{n})\). Let \(\mathcal{H}_{1}=\mathcal{H}(u)\bigodot\mathcal{T}(v_{k})\). \(\mathcal{H}_{2}\) is obtained from \(\mathcal{H}_{1}\) by deleting \(e_{k}\) and adding \((e_{k}\setminus\{v_{k}\})\bigcup\{v_{n}\}\).
**Lemma 2.4**.: _Let \(\mathcal{H}_{2}\) be obtained from \(\mathcal{H}_{1}\) by transformation 3. Then \(M(\mathcal{H}_{1})>M(\mathcal{H}_{2})\)._
Proof.: By the definition of the Zagreb index, we have
\[M(\mathcal{H}_{1})-M(\mathcal{H}_{2}) =d_{\mathcal{H}_{1}}^{2}(v_{k})+d_{\mathcal{H}_{1}}^{2}(v_{n})-d_ {\mathcal{H}_{2}}^{2}(v_{k})-d_{\mathcal{H}_{2}}^{2}(v_{n})\] \[=(d_{\mathcal{H}}(u)+2)^{2}+1-(d_{\mathcal{H}}(u)+1)^{2}-4\] \[=2d_{\mathcal{H}}(u)>0.\]
**Transformation 4**: Let \(\mathcal{H}\) be an \(m\)-uniform connected hypergraph with \(u,v\in V(\mathcal{H})\) such that \(u\neq v\), \(d_{\mathcal{H}}(u)>1\) and \(d_{\mathcal{H}}(u)\geq d_{\mathcal{H}}(v)\). Let \(\mathcal{T}_{1},\mathcal{T}_{2}\) be two binary \(m\)-uniform hypertrees, where \(|E(\mathcal{T}_{1})|>0\). \(\mathcal{H}_{1}\) denotes the hypergraph that results from identifying \(u\) with the pendent vertex \(u_{0}\in e_{0}\) of \(\mathcal{T}_{1}\) and identifying \(v\) with the pendent vertex \(v_{0}\) of \(\mathcal{T}_{2}\). Suppose that \(v_{t}\in V(\mathcal{T}_{2})\) is a pendent vertex of \(\mathcal{H}_{1}\), let \(\mathcal{H}_{2}\) be obtained from \(\mathcal{H}_{1}\) by deleting \(e_{0}\) and adding \((e_{0}\setminus\{u\})\bigcup\{v_{t}\}\).
**Lemma 2.5**.: _Let \(\mathcal{H}_{2}\) be obtained from \(\mathcal{H}_{1}\) by transformation 4._
_(1). If \(|E(\mathcal{T}_{2})|>0\), then \(M(\mathcal{H}_{1})>M(\mathcal{H}_{2})\);_
_(2). If \(|E(\mathcal{T}_{2})|=0,d_{\mathcal{H}}(u)>d_{\mathcal{H}}(v)\), then \(M(\mathcal{H}_{1})>M(\mathcal{H}_{2})\)._
Proof.: By the definition of the Zagreb index, if \(|E(\mathcal{T}_{2})|>0\), we have
\[M(\mathcal{H}_{1})-M(\mathcal{H}_{2}) =d_{\mathcal{H}_{1}}^{2}(u)+d_{\mathcal{H}_{1}}^{2}(v_{t})-d_{ \mathcal{H}_{2}}^{2}(u)-d_{\mathcal{H}_{2}}^{2}(v_{t})\] \[=(d_{\mathcal{H}}(u)+1)^{2}+1-d_{\mathcal{H}}^{2}(u)-4\] \[=2d_{\mathcal{H}}(u)-2>0.\]
If \(|E(\mathcal{T}_{2})|=0\), \(d_{\mathcal{H}}(u)>d_{\mathcal{H}}(v)\), we have
\[M(\mathcal{H}_{1})-M(\mathcal{H}_{2}) =d_{\mathcal{H}_{1}}^{2}(u)+d_{\mathcal{H}_{1}}^{2}(v_{t})-d_{ \mathcal{H}_{2}}^{2}(u)-d_{\mathcal{H}_{2}}^{2}(v_{t})\] \[=(d_{\mathcal{H}}(u)+1)^{2}+d_{\mathcal{H}}^{2}(v)-d_{\mathcal{H} }^{2}(u)-(d_{\mathcal{H}}(v)+1)^{2}\] \[=2d_{\mathcal{H}}(u)-2d_{\mathcal{H}}(v)>0.\]
## 3 The \(S\)-order in hypertrees
In this section, we give the first and last hypergraphs in an \(S\)-order of all uniform hypertrees.
In [20], the first \(3k\)th order spectral moments of uniform hypertrees were given. Let \(N_{\mathcal{H}}(\widehat{\mathcal{H}})\) be the number of sub-hypergraphs of \(\mathcal{H}\) isomorphic to \(\widehat{\mathcal{H}}\) and \(S_{q}\) be a star with \(q\) edges.
**Lemma 3.1**.: _[_20_]_ _Let \(\mathcal{T}=(V(\mathcal{T}),E(\mathcal{T}))\) be an \(m\)-uniform hypertree. Then_
\[\mathrm{S}_{m}(\mathcal{T}) =m^{m-1}(m-1)^{(|E(\mathcal{T})|-1)(m-1)}N_{\mathcal{T}}(P_{1}^{(m) }),\] \[\mathrm{S}_{2m}(\mathcal{T}) =m^{m-1}(m-1)^{(|E(\mathcal{T})|-1)(m-1)}N_{\mathcal{T}}(P_{1}^{(m )})+2m^{2m-3}(m-1)^{(|E(\mathcal{T})|-2)(m-1)}N_{\mathcal{T}}(P_{2}^{(m)}),\] \[\mathrm{S}_{3m}(\mathcal{T}) =m^{m-1}(m-1)^{(|E(\mathcal{T})|-1)(m-1)}N_{\mathcal{T}}(P_{1}^{(m )})+6m^{2m-3}(m-1)^{(|E(\mathcal{T})|-2)(m-1)}N_{\mathcal{T}}(P_{2}^{(m)})\] \[+3m^{3m-5}(m-1)^{(|E(\mathcal{T})|-3)(m-1)}N_{\mathcal{T}}(P_{3}^ {(m)})+6m^{3m-5}(m-1)^{(|E(\mathcal{T})|-3)(m-1)}N_{\mathcal{T}}(S_{3}^{(m)}),\] \[\mathrm{S}_{d}(\mathcal{T}) =0,\text{ for }d=1,\ldots,m-1,m+1,\ldots,2m-1,2m+1,\ldots,3m-1.\]
Let \(\textbf{T}_{q}\) be the set of all \(m\)-uniform hypertrees with \(q\) edges. The following theorem gives the last hypergraph in an \(S\)-order of all \(m\)-uniform hypertrees.
**Theorem 3.2**.: _In an \(S\)-order of \(\textbf{T}_{q}\), the last hypergraph is the hyperstar \(S_{q}^{(m)}\)._
Proof.: Since in all \(m\)-uniform hypertrees with \(q\) edges the spectral moments \(\mathrm{S}_{0},\mathrm{S}_{1}\), \(\ldots,\mathrm{S}_{2m-1}\) are the same, the first significant spectral moment is the \(2m\)th. By Lemma 3.1, \(\mathrm{S}_{2m}\) is determined by the number of \(P_{2}^{(m)}\). The number of vertices of \(m\)-uniform hypertrees with \(q\) edges is \(qm-q+1\). For any hypertree \(\mathcal{T}\) in \(\textbf{T}_{q}\), we have
\[N_{\mathcal{T}}(P_{2}^{(m)})=\sum_{i=1}^{qm-q+1}\binom{d_{i}}{2}=\frac{1}{2} \sum_{i=1}^{qm-q+1}d_{i}^{2}-\frac{qm}{2}=\frac{1}{2}M(\mathcal{T})-\frac{qm} {2},\]
where \(d_{1}+d_{2}+\cdots+d_{qm-q+1}=mq\).
Repeating transformation 1, any \(m\)-uniform hypertree with \(q\) edges can be changed into \(S_{q}^{(m)}\). And by Lemma 2.2, each application of transformation 1 strictly increases the Zagreb index. Therefore, in an \(S\)-order of \(\textbf{T}_{q}\), the last hypergraph is the hyperstar \(S_{q}^{(m)}\).
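The identity used in this proof is easy to check computationally; the sketch below (our own illustrative code, with hypertrees given as edge lists) verifies it for the \(3\)-uniform hyperpath and hyperstar with \(q=3\) edges.

```python
# A minimal sketch verifying N_T(P_2^{(m)}) = (M(T) - qm)/2 for two 3-uniform
# hypertrees with q = 3 edges (the hyperpath and the hyperstar).
from collections import Counter
from math import comb

def degrees(edges):
    return Counter(v for e in edges for v in e)

def n_p2(edges):                        # pairs of edges sharing a vertex
    return sum(comb(d, 2) for d in degrees(edges).values())

def zagreb(edges):                      # M(T) = sum of squared degrees
    return sum(d * d for d in degrees(edges).values())

m, q = 3, 3
path = [(0, 1, 2), (2, 3, 4), (4, 5, 6)]   # P_3^{(3)}
star = [(0, 1, 2), (0, 3, 4), (0, 5, 6)]   # S_3^{(3)}
for T in (path, star):
    assert n_p2(T) == (zagreb(T) - q * m) // 2
print(n_p2(path), n_p2(star))               # 2 3: the star has more P_2's
```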
Let **T** be the set of all binary \(m\)-uniform hypertrees with \(q\) edges. We characterize the first few hypergraphs in the \(S\)-order of all \(m\)-uniform hypertrees.
**Theorem 3.3**.: \(\textbf{T}\prec_{s}\textbf{T}_{q}\setminus\textbf{T}\)_._
Proof.: As in the proof of Theorem 3.2, we pay attention to the Zagreb index. Repeating transformation 3, any \(m\)-uniform hypertree with \(q\) edges can be changed into a binary \(m\)-uniform hypertree with \(q\) edges. And from Lemma 2.4, each application of transformation 3 strictly decreases the Zagreb index. Hence, \(\textbf{T}\prec_{s}\textbf{T}_{q}\setminus\textbf{T}\).
Let \(P_{3}(\mathcal{H})\) be the set of all sub-hyperpaths of length \(3\) of an \(m\)-uniform hypergraph \(\mathcal{H}\).
**Lemma 3.4**.: _Let \(e=\{u,v,w_{1},\ldots,w_{m-2}\}\) be an edge and \(\mathcal{H}_{1},\ldots,\mathcal{H}_{p}\) be pairwise disjoint connected \(m\)-uniform hypergraphs with \(\mathcal{H}_{i}\neq P_{0}^{(m)}\) and \(\widetilde{w}_{i}\in V(\mathcal{H}_{i})\) for each \(i\in[p]\), where \(m\geq 3\), \(1\leq p\leq m-2\). Let \(\mathcal{H}=e(w_{1},\ldots,w_{p})\bigodot(\mathcal{H}_{1}(\widetilde{w}_{1}), \ldots,\mathcal{H}_{p}(\widetilde{w}_{p}))\). Let \(\mathcal{H}^{e}_{r,s}=\mathcal{H}(u,v)\bigodot(P_{r}^{(m)}(\widetilde{u}),P_{ s}^{(m)}(\widetilde{v}))\), where \(\widetilde{u},\widetilde{v}\) are respectively the pendent vertices of \(P_{r}^{(m)}\) and \(P_{s}^{(m)}\). If \(r\geq s\geq 1\), then_
\[N_{\mathcal{H}^{e}_{r,s}}(P_{3}^{(m)})>N_{\mathcal{H}^{e}_{r+s,0}}(P_{3}^{(m) }).\]
Proof.: Since \(p\geq 1\), let \(e_{1}\in E(\mathcal{H}_{1})\) be an edge incident with \(\widetilde{w}_{1}\). Let \(e_{2}\in E(P_{r}^{(m)})\) be an edge incident with \(\widetilde{u}\) and \(e_{3}\in E(P_{s}^{(m)})\) be an edge incident with \(\widetilde{v}\). We have \(P_{3}(\mathcal{H}^{e}_{r,0})\subseteq P_{3}(\mathcal{H}^{e}_{r,s})\) and \(P_{3}(\mathcal{H}^{e}_{r,0})\subseteq P_{3}(\mathcal{H}^{e}_{r+s,0})\). For a hyperpath \(\mathcal{P}_{1}\) with \(E(\mathcal{P}_{1})=\{e,e^{\prime},e^{\prime\prime}\}\), \(\mathcal{P}_{1}\) is also written \(ee^{\prime}e^{\prime\prime}\) in this paper.
If \(s=1\), there are hyperpaths \(e_{2}ee_{3},e_{3}ee_{1}\) in \(P_{3}(\mathcal{H}^{e}_{r,1})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). Since \(p\geq 1\), \(N_{\mathcal{H}^{e}_{r,1}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)})\geq 2.\) There is only one hyperpath \(\mathcal{P}\) in \(P_{3}(\mathcal{H}^{e}_{r+1,0})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). And the edges of \(\mathcal{P}\) are not in \(E(\mathcal{H}_{i}),i=1,2,\ldots,p\). We have \(N_{\mathcal{H}^{e}_{r+1,0}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)}) =1.\) So, \(N_{\mathcal{H}^{e}_{r,1}}(P_{3}^{(m)})>N_{\mathcal{H}^{e}_{r+1,0}}(P_{3}^{(m)})\).
If \(s=2\), let \(e_{4}\neq e_{3}\in E(P_{s}^{(m)})\). There are hyperpaths \(e_{2}ee_{3},e_{3}ee_{1},ee_{3}e_{4}\) in \(P_{3}(\mathcal{H}^{e}_{r,2})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). Since \(p\geq 1\), \(N_{\mathcal{H}^{e}_{r,2}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)}) \geq 3.\) There are only two hyperpaths \(\mathcal{P}^{\prime}\), \(\mathcal{P}^{\prime\prime}\) in \(P_{3}(\mathcal{H}^{e}_{r+2,0})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). And the edges of \(\mathcal{P}^{\prime}\) and \(\mathcal{P}^{\prime\prime}\) are not in \(E(\mathcal{H}_{i}),i=1,2,\ldots,p\). We have \(N_{\mathcal{H}^{e}_{r+2,0}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)} )=2.\) So, \(N_{\mathcal{H}^{e}_{r,2}}(P_{3}^{(m)})>N_{\mathcal{H}^{e}_{r+2,0}}(P_{3}^{(m)})\).
If \(s>2\), similar to \(s=2\), there are hyperpaths \(e_{2}ee_{3},e_{3}ee_{1},ee_{3}e_{4}\) in \(P_{3}(\mathcal{H}^{e}_{r,s})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). For an \(m\)-uniform hyperpath with \(q\) (\(q>2\)) edges, the number of the sub-hyperpaths with \(3\) edges is \(q-2\). Since \(p\geq 1\),
\[N_{\mathcal{H}^{e}_{r,s}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)}) \geq 3+s-2=s+1.\]
Since \(r\geq s>2\), there are only \(s\) hyperpaths in \(P_{3}(\mathcal{H}^{e}_{r+s,0})\) and not in \(P_{3}(\mathcal{H}^{e}_{r,0})\). We have \(N_{\mathcal{H}^{e}_{r+s,0}}(P_{3}^{(m)})-N_{\mathcal{H}^{e}_{r,0}}(P_{3}^{(m)} )=s.\) So, if \(s>2\), \(N_{\mathcal{H}^{e}_{r,s}}(P_{3}^{(m)})>N_{\mathcal{H}^{e}_{r+s,0}}(P_{3}^{(m)})\).
Therefore, if \(r\geq s\geq 1\), we have \(N_{\mathcal{H}^{e}_{r,s}}(P_{3}^{(m)})>N_{\mathcal{H}^{e}_{r+s,0}}(P_{3}^{(m)})\).
The following theorem gives the first hypergraph in an \(S\)-order of all \(m\)-uniform hypertrees.
**Theorem 3.5**.: _In an \(S\)-order of **T\({}_{q}\)**, the first hypergraph is the hyperpath \(P_{q}^{(m)}\)._
Proof.: In an \(S\)-order of \({\bf T}_{q}\), by Theorem 3.3, the first hypergraph is in \({\bf T}\). When \(m=2\), \({\bf T}=\{P_{q}\}.\) Therefore, in an \(S\)-order of \({\bf T}_{q}\), the first graph is the path \(P_{q}\). When \(m>2\), since the spectral moments \({\rm S}_{0},{\rm S}_{1},\ldots,{\rm S}_{3m-1}\) are the same in \({\bf T}\), the first significant spectral moment is the \(3m\)th. By Lemma 3.1, \({\rm S}_{3m}\) is determined by the number of \(S_{3}^{(m)}\) and \(P_{3}^{(m)}\).
For any hypertree \({\cal T}\) in \({\bf T}\), \(N_{\cal T}(S_{3}^{(m)})=0\). Let \(e({\cal T})\) denote the set of all edges of \({\cal T}\) that contain at least \(3\) vertices whose degree is equal to \(2\). Fix a vertex \(v\) of degree \(2\) as a root. Let \({\cal T}_{1},{\cal T}_{2}\) be the hypertrees attached at \(v\). We can repeatedly apply the transformation from Lemma 3.4 at any two vertices \(u_{1},u_{2}\in e\in e({\cal T})\) with largest distance from the root in every hypertree \({\cal T}_{i}\) and \(d_{u_{1}}=d_{u_{2}}=2\), as long as \({\cal T}_{i}\) does not become a hyperpath. From Lemma 3.4, each application of this transformation strictly decreases the number of sub-hyperpaths with \(3\) edges. At the end of this process, we arrive at the hyperpath \(P_{q}^{(m)}\). Therefore, in an \(S\)-order of \({\bf T}_{q}\), the first hypergraph is the hyperpath \(P_{q}^{(m)}\).
## 4 The \(S\)-order in unicyclic hypergraphs
In this section, the expressions of \(2m\)th and \(3m\)th order spectral moments of linear unicyclic \(m\)-uniform hypergraphs are obtained in terms of the number of sub-hypergraphs. We characterize the first and last hypergraphs in an \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs with given girth. And we give the last hypergraph in an \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs.
Let \({\cal H}(\omega)\) be a weighted uniform hypergraph, where \(\omega:E({\cal H})\rightarrow\mathbb{Z}^{+}\). Let \(\omega({\cal H})=\sum_{e\in E({\cal H})}\omega(e)\) and \(d_{v}({\cal H}(\omega))=\sum_{e\in E_{v}({\cal H})}\omega(e)\), where \(E_{v}({\cal H}):=\{e\in E({\cal H})|v\in e\}\). Let \(C_{n}\) be a cycle with \(n\) edges. In [24], the formula for the spectral moments of linear unicyclic \(m\)-uniform hypergraphs was given.
**Theorem 4.1**.: _[_24_]_ _Let \({\cal U}\) be a linear unicyclic \(m\)-uniform hypergraph with girth \(n\). If \(m\mid d\ (d\neq 0)\), then_
\[{\rm S}_{d}({\cal U})=d(m-1)^{|V({\cal U})|}(\sum_{\widehat{\cal T}\in{\cal B} _{tree}({\cal U})}tr_{d}(\widehat{\cal T})+\sum_{{\cal G}\in{\cal B}_{cycle}({ \cal U})}tr_{d}({\cal G})) \tag{4.1}\]
_and_
\[tr_{d}(\widehat{\cal T})=\sum_{\omega:\omega(\widehat{\cal T})=d/m}(m-1)^{-|V( \widehat{\cal T})|}m^{(m-2)|E(\widehat{\cal T})|}\prod_{v\in V(\widehat{\cal T })}(d_{v}(\widehat{\cal T}(\omega))-1)!\prod_{e\in E(\widehat{\cal T})}\frac{ \omega(e)^{m-1}}{(\omega(e)!)^{m}},\]
\[tr_{d}(\mathcal{G})=\sum_{\omega:\omega(\mathcal{G})=d/m}2(m-1)^{-|V(\mathcal{G})|}m ^{(m-2)|E(\mathcal{G})|-1}\prod_{v\in V(\mathcal{G})}(d_{v}(\mathcal{G}(\omega) )-1)!\prod_{e\in E(\mathcal{G})}\frac{\omega(e)^{m-1}}{(\omega(e)!)^{m}}\Omega_{ C_{n}^{(m)}(\omega^{0})},\]
_where_
\[\Omega_{C_{n}^{(m)}(\omega^{0})}=\sum_{x=0}^{2\omega_{min}^{0}}\prod_{i=1}^{n }\frac{(\omega_{i}^{0}!)^{2}}{(\omega_{i-1}^{0}+\omega_{min}^{0}-x)!(\omega_{i }^{0}-\omega_{min}^{0}+x)!}\sum_{l=0}^{n-1}\prod_{i=1}^{l}(\omega_{i}^{0}+ \omega_{min}^{0}-x)\prod_{i=l+2}^{n}(\omega_{i}^{0}-\omega_{min}^{0}+x),\]
\(\omega_{min}^{0}=\min_{i\in[n]}\omega_{i}^{0}\)_, \(\omega_{i}^{0}=\omega^{0}(e_{i}),i\in[n]\), \(\mathcal{B}_{tree}(\mathcal{U})\) denotes the set of connected sub-hypergraphs of \(\mathcal{U}\) which are hypertrees, \(\mathcal{B}_{cycle}(\mathcal{U})\) denotes the set of connected sub-hypergraphs of \(\mathcal{U}\) which contain the hypercycle._
_If \(m\nmid d\), then \(\mathrm{S}_{d}(\mathcal{U})=0\)._
We give expressions of \(2m\)th and \(3m\)th order spectral moments of a linear unicyclic \(m\)-uniform hypergraph in terms of the number of some sub-hypergraphs.
**Corollary 4.2**.: _Let \(\mathcal{U}\) be a linear unicyclic \(m\)-uniform hypergraph. Then we have_
\[\mathrm{S}_{2m}(\mathcal{U})=m^{(m-1)}(m-1)^{|V(\mathcal{U})|-m}N_{\mathcal{U }}(P_{1}^{(m)})+2m^{2m-3}(m-1)^{|V(\mathcal{U})|-2m+1}N_{\mathcal{U}}(P_{2}^{ (m)}).\]
Proof.: Since \(2m/m<g(\mathcal{U})\), the second summand in (4.1) does not appear. By Theorem 4.1, we have
\[\mathrm{S}_{2m}(\mathcal{U}) =2m(m-1)^{|V(\mathcal{U})|}\sum_{\widehat{\mathcal{T}}\in \mathcal{B}_{tree}(\mathcal{U})}\sum_{\omega:\omega(\widehat{\mathcal{T}})=2 }(m-1)^{-|V(\widehat{\mathcal{T}})|}m^{(m-2)|E(\widehat{\mathcal{T}})|}\] \[\prod_{v\in V(\widehat{\mathcal{T}})}(d_{v}(\widehat{\mathcal{T}} (\omega))-1)!\prod_{e\in E(\widehat{\mathcal{T}})}\frac{\omega(e)^{m-1}}{( \omega(e)!)^{m}}.\]
Since \(\omega(\widehat{\mathcal{T}})=\sum_{e\in E(\widehat{\mathcal{T}})}\omega(e)=2\), \(\widehat{T}\) is an edge \(e\) with \(\omega(e)=2\) or \(\widehat{T}\) is a hyperpath of length \(2\) with \(\omega(e_{i})=1,i\in[2]\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2}\}\). So
\[\mathrm{S}_{2m}(\mathcal{U}) =2m(m-1)^{|V(\mathcal{U})|}((m-1)^{-m}m^{(m-2)}\frac{2^{m-1}}{2^{ m}}N_{\mathcal{U}}(P_{1}^{(m)})+(m-1)^{1-2m}m^{2(m-2)}N_{\mathcal{U}}(P_{2}^{(m)}))\] \[=m^{(m-1)}(m-1)^{|V(\mathcal{U})|-m}N_{\mathcal{U}}(P_{1}^{(m)})+ 2m^{2m-3}(m-1)^{|V(\mathcal{U})|-2m+1}N_{\mathcal{U}}(P_{2}^{(m)}).\]
**Corollary 4.3**.: _Let \(\mathcal{U}\) be a linear unicyclic \(m\)-uniform hypergraph with girth \(g\) \((g>3)\). Then we have_
\[\mathrm{S}_{3m}(\mathcal{U}) =(m-1)^{|V(\mathcal{U})|-m}m^{m-1}N_{\mathcal{U}}(P_{1}^{(m)})+6m^{2 m-3}(m-1)^{|V(\mathcal{U})|+1-2m}N_{\mathcal{U}}(P_{2}^{(m)})\] \[+3m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(P_{3}^{(m) })+6m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(S_{3}^{(m)}).\]
_Let \(\mathcal{U}\) be a linear unicyclic \(m\)-uniform hypergraph with girth \(3\). Then we have_
\[\mathrm{S}_{3m}(\mathcal{U}) =(m-1)^{|V(\mathcal{U})|-m}m^{m-1}N_{\mathcal{U}}(P_{1}^{(m)})+6m ^{2m-3}(m-1)^{|V(\mathcal{U})|+1-2m}N_{\mathcal{U}}(P_{2}^{(m)})\] \[+3m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(P_{3}^{(m )})+6m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(S_{3}^{(m)})\] \[+24m^{3m-6}(m-1)^{|V(\mathcal{U})|-3m+3}.\]
Proof.: When \(g>3\), since \(3m/m<g\), the second summand in (4.1) does not appear. By Theorem 4.1, we have
\[\mathrm{S}_{3m}(\mathcal{U}) =3m(m-1)^{|V(\mathcal{U})|}\sum_{\widehat{\mathcal{T}}\in\mathcal{ B}_{tree}(\mathcal{U})}\sum_{\omega:\omega(\widehat{\mathcal{T}})=3}(m-1)^{-|V (\widehat{\mathcal{T}})|}m^{(m-2)|E(\widehat{\mathcal{T}})|}\] \[\prod_{v\in V(\widehat{\mathcal{T}})}(d_{v}(\widehat{\mathcal{T} }(\omega))-1)!\prod_{e\in E(\widehat{\mathcal{T}})}\frac{\omega(e)^{m-1}}{( \omega(e)!)^{m}}.\]
Since \(\omega(\widehat{\mathcal{T}})=\sum_{e\in E(\widehat{\mathcal{T}})}\omega(e)=3\), we have
(1). \(\widehat{T}\) is an edge \(e\) with \(\omega(e)=3\);
(2). \(\widehat{T}\) is a hyperpath of length \(2\) with \(\omega(e_{1})=1\), \(\omega(e_{2})=2\) or \(\omega(e_{1})=2\), \(\omega(e_{2})=1\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2}\}\);
(3). \(\widehat{T}\) is a hyperpath of length \(3\) with \(\omega(e_{i})=1,i\in[3]\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2},e_{3}\}\);
(4). \(\widehat{T}\) is a hyperstar with \(3\) edges and \(\omega(e_{i})=1,i\in[3]\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2},e_{3}\}\).
Therefore,
\[\mathrm{S}_{3m}(\mathcal{U}) =3m(m-1)^{|V(\mathcal{U})|}((m-1)^{-m}m^{(m-2)}(2!)^{m}\frac{3^{m -1}}{(3!)^{m}}N_{\mathcal{U}}(P_{1}^{(m)})\] \[+(m-1)^{1-2m}m^{2(m-2)}2!\frac{2^{m-1}}{(2!)^{m}}2N_{\mathcal{U} }(P_{2}^{(m)})\] \[+(m-1)^{2-3m}m^{3(m-2)}N_{\mathcal{U}}(P_{3}^{(m)})+(m-1)^{2-3m} m^{3(m-2)}2!N_{\mathcal{U}}(S_{3}^{(m)}))\] \[=(m-1)^{|V(\mathcal{U})|-m}m^{m-1}N_{\mathcal{U}}(P_{1}^{(m)})+6m ^{2m-3}(m-1)^{|V(\mathcal{U})|+1-2m}N_{\mathcal{U}}(P_{2}^{(m)})\] \[+3m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(P_{3}^{(m )})+6m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(S_{3}^{(m)}).\]
When \(g=3\), since \(\omega(\widehat{\mathcal{T}})=\sum_{e\in E(\widehat{\mathcal{T}})}\omega(e)=3\), we have
(1). \(\widehat{T}\) is an edge \(e\) with \(\omega(e)=3\);
(2). \(\widehat{T}\) is a hyperpath of length \(2\) with \(\omega(e_{1})=1\), \(\omega(e_{2})=2\) or \(\omega(e_{1})=2\), \(\omega(e_{2})=1\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2}\}\);
(3). \(\widehat{T}\) is a hyperpath of length \(3\) with \(\omega(e_{i})=1,i\in[3]\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2},e_{3}\}\);
(4). \(\widehat{T}\) is a hyperstar with \(3\) edges and \(\omega(e_{i})=1,i\in[3]\), where \(E(\widehat{\mathcal{T}})=\{e_{1},e_{2},e_{3}\}\).
Since \(\omega(\mathcal{G})=\sum_{e\in E(\mathcal{G})}\omega(e)=3\), \(\mathcal{G}\) is a hypercycle with girth \(3\), \(\omega_{i}^{0}=\omega^{0}(e_{i})=1,i\in[3]\) and \(\Omega_{C_{3}^{(m)}(\omega^{0})}=4\), where \(E(\mathcal{G})=\{e_{1},e_{2},e_{3}\}\). By Theorem 4.1, we have
\[\mathrm{S}_{3m}(\mathcal{U}) =3m(m-1)^{|V(\mathcal{U})|}((m-1)^{-m}m^{(m-2)}(2!)^{m}\frac{3^{m -1}}{(3!)^{m}}N_{\mathcal{U}}(P_{1}^{(m)})\] \[+(m-1)^{1-2m}m^{2(m-2)}2!\frac{2^{m-1}}{(2!)^{m}}2N_{\mathcal{U} }(P_{2}^{(m)})+(m-1)^{2-3m}m^{3(m-2)}N_{\mathcal{U}}(P_{3}^{(m)})\] \[+(m-1)^{2-3m}m^{3(m-2)}2!N_{\mathcal{U}}(S_{3}^{(m)})+2(m-1)^{-3m+ 3}m^{3(m-2)-1}4)\] \[=(m-1)^{|V(\mathcal{U})|-m}m^{m-1}N_{\mathcal{U}}(P_{1}^{(m)})+6 m^{2m-3}(m-1)^{|V(\mathcal{U})|+1-2m}N_{\mathcal{U}}(P_{2}^{(m)})\] \[+3m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(P_{3}^{(m) })+6m^{3m-5}(m-1)^{|V(\mathcal{U})|+2-3m}N_{\mathcal{U}}(S_{3}^{(m)})\] \[+24m^{3m-6}(m-1)^{|V(\mathcal{U})|-3m+3}.\]
The set of all linear unicyclic \(m\)-uniform hypergraphs with \(e+f\) edges which contain a hypercycle \(C_{e}^{(m)}\) will be denoted by \(\textbf{U}_{ef}^{m}\). Let \(F_{ef}^{(m)}\) be the linear unicyclic \(m\)-uniform hypergraph obtained from the hypercycle \(C_{e}^{(m)}\) by attaching \(f\) pendant edges to one of the non-core vertices on \(C_{e}^{(m)}\). The following theorem gives the last hypergraph in an \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs with given girth.
**Theorem 4.4**.: _In an \(S\)-order of \(\textbf{U}_{ef}^{m}\) the last hypergraph is \(F_{ef}^{(m)}\)._
Proof.: Since in \(\textbf{U}_{ef}^{m}\) the spectral moments \(\mathrm{S}_{0},\mathrm{S}_{1},\ldots,\mathrm{S}_{2m-1}\) are the same, the first significant spectral moment is the \(2m\)th. By Corollary 4.2, \(\mathrm{S}_{2m}\) is determined by the number of \(P_{2}^{(m)}\). The number of vertices of linear unicyclic \(m\)-uniform hypergraphs with \(e+f\) edges is \((e+f)(m-1)\). For any \(\mathcal{U}\in\textbf{U}_{ef}^{m}\), we have
\[N_{\mathcal{U}}(P_{2}^{(m)})=\sum_{i=1}^{em+fm-e-f}\binom{d_{i}}{2}=\frac{1}{ 2}\sum_{i=1}^{em+fm-e-f}d_{i}^{2}-\frac{em+fm}{2}=\frac{1}{2}M(\mathcal{U})- \frac{em+fm}{2},\]
where \(d_{1}+d_{2}+\cdots+d_{em+fm-e-f}=em+fm\).
Repeating transformation 1, any linear unicyclic \(m\)-uniform hypergraph in \(\mathbf{U}_{ef}^{m}\) can be changed into a linear unicyclic \(m\)-uniform hypergraph such that all the edges not on \(C_{e}^{(m)}\) are pendant edges and incident with non-core vertices of \(C_{e}^{(m)}\).
After repeating transformation 1, if we repeat transformation 2, any linear unicyclic \(m\)-uniform hypergraph in \(\mathbf{U}_{ef}^{m}\) can be changed into a linear unicyclic \(m\)-uniform hypergraph obtained from the hypercycle \(C_{e}^{(m)}\) by attaching \(f\) pendant edges to one of the non-core vertices on \(C_{e}^{(m)}\).
From Lemma 2.2 and Lemma 2.3, each application of transformation 1 or 2 strictly increases the Zagreb index. Hence, in an \(S\)-order of \(\mathbf{U}_{ef}^{m}\) the last hypergraph is \(F_{ef}^{(m)}\).
The set of all linear unicyclic \(m\)-uniform hypergraphs with \(q\) edges will be denoted by \(\mathbf{U}_{q}\). The following theorem gives the last hypergraph in an \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs.
**Theorem 4.5**.: _In an \(S\)-order of \(\textbf{U}_{q}\) the last hypergraph is \(F_{3(q-3)}^{(m)}\)._
Proof.: By Theorem 4.4, we get that in an \(S\)-order of \(\mathbf{U}_{l(q-l)}^{m}\) the last hypergraph is \(F_{l(q-l)}^{(m)}\). By the definition of the Zagreb index, we have \(M(F_{l(q-l)}^{(m)})=(m-2)l+(q-l)(m-1)+4(l-1)+(q-l+2)^{2}=l^{2}-l-2ql+qm+3q+q^{2}\), \(3\leq l\leq q\). Since the derivative of \(M(F_{l(q-l)}^{(m)})\) with respect to \(l\) equals \(2l-1-2q<0\), \(M(F_{l(q-l)}^{(m)})\leq M(F_{3(q-3)}^{(m)})\) for \(3\leq l\leq q\), with equality if and only if \(l=3\). Hence, in an \(S\)-order of \(\mathbf{U}_{q}\) the last hypergraph is \(F_{3(q-3)}^{(m)}\).
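The monotonicity argument can also be checked numerically, as in the following quick sketch (plain Python; the sampled parameters are our own choice).

```python
# A quick numeric check that M(F_{l,q-l}) = l^2 - l - 2ql + qm + 3q + q^2 is
# strictly decreasing in l on 3 <= l <= q, so the girth-3 hypergraph wins.
def M(l, q, m):
    return l * l - l - 2 * q * l + q * m + 3 * q + q * q

for m in (3, 4, 5):
    for q in (5, 8, 12):
        vals = [M(l, q, m) for l in range(3, q + 1)]
        assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing
        assert max(vals) == M(3, q, m)
print("M(F_{l,q-l}) is maximized at l = 3 in all sampled cases")
```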
For \(m\geq 3\), let \(\mathbf{U}\) be the set of all linear unicyclic \(m\)-uniform hypergraphs with \(e+f\) edges and girth \(e\) such that the degree of all the vertices is less than or equal to 2. We characterize the first few hypergraphs in the \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs with given girth.
**Theorem 4.6**.: _For \(m\geq 3\),_
\[\textbf{U}\prec_{s}\textbf{U}_{ef}^{m}\setminus\textbf{U}.\]
Proof.: As in the proof of Theorem 4.4 we pay attention to the Zagreb index. Repeating transformation 3, any \(m\)-uniform hypertree attached to an \(m\)-uniform hypergraph \(\mathcal{H}\) can be changed into a binary \(m\)-uniform hypertree. After repeating transformation 3, if we repeat transformation 4, then any linear unicyclic \(m\)-uniform
hypergraph in \(\mathbf{U}_{ef}^{m}\) can be changed into a linear unicyclic \(m\)-uniform hypergraph in \(\mathbf{U}\). And from Lemma 2.4 and Lemma 2.5, the Zagreb indices decrease. Hence, we have \(\mathbf{U}\prec_{s}\mathbf{U}_{ef}^{m}\setminus\mathbf{U}\).
We give a transformation which decreases the number of sub-hyperpaths with \(3\) edges of a hypergraph, as follows:
**Transformation 5**: Let \(\mathcal{P}_{i}\neq P_{0}^{(m)}\) be an \(m\)-uniform hyperpath, \(u_{i}\) be a pendent vertex of \(\mathcal{P}_{i}\) for each \(i\in[p]\) and \(v_{1},v_{2},\ldots,v_{e(m-2)}\) be core vertices of a linear \(m\)-uniform hypercycle \(C_{e}^{(m)}\), where \(m\geq 3\) and \(2\leq p\leq e(m-2)\). Let \(\mathcal{H}_{1}=C_{e}^{(m)}(v_{1},\ldots,v_{p})\bigodot(\mathcal{P}_{1}(u_{1} ),\ldots,\mathcal{P}_{p}(u_{p}))\). Suppose that \(u_{1}\in e_{1}\) in \(\mathcal{P}_{1}\), \(w_{1}\in V(\mathcal{P}_{2})\) is a pendent vertex of \(\mathcal{H}_{1}\), let \(\mathcal{H}_{2}\) be obtained from \(\mathcal{H}_{1}\) by deleting \(e_{1}\) and adding \((e_{1}\setminus\{u_{1}\})\bigcup\{w_{1}\}\).
**Lemma 4.7**.: _Let \(\mathcal{H}_{2}\) be obtained from \(\mathcal{H}_{1}\) by transformation 5. Then \(N_{\mathcal{H}_{2}}(P_{3}^{(m)})<N_{\mathcal{H}_{1}}(P_{3}^{(m)})\)._
Proof.: Let \(\mathcal{H}_{3}=C_{e}^{(m)}(v_{2},\ldots,v_{p})\bigodot(\mathcal{P}_{2}(u_{2} ),\ldots,\mathcal{P}_{p}(u_{p}))\) and \(\mathcal{P}_{1}^{\prime}=\mathcal{P}_{1}-e_{1}+(e_{1}\setminus\{u_{1}\}) \bigcup\{w_{1}\}\). So \(P_{3}(\mathcal{H}_{1})=P_{3}(\mathcal{H}_{3})+P_{3}(\mathcal{P}_{1})+P_{ \mathcal{H}_{1}}\) and \(P_{3}(\mathcal{H}_{2})=P_{3}(\mathcal{H}_{3})+P_{3}(\mathcal{P}_{1}^{\prime}) +P_{\mathcal{H}_{2}}\), where \(P_{\mathcal{H}_{1}}\) (\(P_{\mathcal{H}_{2}}\)) is the set of all the sub-hyperpaths with 3 edges of \(\mathcal{H}_{1}(\mathcal{H}_{2})\), each of them contains both at least one edge in \(E(\mathcal{H}_{3})\) and at least one edge in \(E(\mathcal{P}_{1})\) (\(E(\mathcal{P}_{1}^{\prime})\)). We have \(|E(\mathcal{P}_{1})|=|E(\mathcal{P}_{1}^{\prime})|\) and \(N_{\mathcal{P}_{1}^{\prime}}(P_{3}^{(m)})=N_{\mathcal{P}_{1}}(P_{3}^{(m)})\).
If \(|E(\mathcal{P}_{1})|=1\), since \(p\geq 2\), there are at least \(2\) hyperpaths in \(P_{\mathcal{H}_{1}}\) which contain \(e_{1}\) and two edges in \(E(\mathcal{H}_{3})\). In \(P_{\mathcal{H}_{2}}\) there is a hyperpath which contains \((e_{1}\setminus\{u_{1}\})\bigcup\{w_{1}\}\) and two edges in \(E(\mathcal{H}_{3})\). Therefore, we have \(|P_{\mathcal{H}_{1}}|-|P_{\mathcal{H}_{2}}|\geq 1\). Hence, \(N_{\mathcal{H}_{1}}(P_{3}^{(m)})-N_{\mathcal{H}_{2}}(P_{3}^{(m)})\geq 1\). So, \(N_{\mathcal{H}_{2}}(P_{3}^{(m)})<N_{\mathcal{H}_{1}}(P_{3}^{(m)})\).
If \(|E(\mathcal{P}_{1})|\geq 2\), since \(p\geq 2\), there are at least \(2\) hyperpaths in \(P_{\mathcal{H}_{1}}\) which contain \(e_{1}\) and two edges in \(E(\mathcal{H}_{3})\), and there is a hyperpath which contains two edges in \(E(\mathcal{P}_{1})\) and an edge in \(E(\mathcal{H}_{3})\). In \(P_{\mathcal{H}_{2}}\) there is a hyperpath which contains \((e_{1}\setminus\{u_{1}\})\bigcup\{w_{1}\}\) and two edges in \(E(\mathcal{H}_{3})\), and there is a hyperpath which contains two edges in \(E(\mathcal{P}_{1}^{\prime})\) and an edge in \(E(\mathcal{H}_{3})\). Therefore, we have \(|P_{\mathcal{H}_{1}}|-|P_{\mathcal{H}_{2}}|\geq 1\). Hence, \(N_{\mathcal{H}_{1}}(P_{3}^{(m)})-N_{\mathcal{H}_{2}}(P_{3}^{(m)})\geq 1\). So, \(N_{\mathcal{H}_{2}}(P_{3}^{(m)})<N_{\mathcal{H}_{1}}(P_{3}^{(m)})\).
Let \(E_{ef}^{m}\) be the linear unicyclic \(m\)-uniform hypergraph obtained by the coalescence of \(C_{e}^{(m)}\) at one of its core vertices with \(P_{f}^{(m)}\) at one of its pendent vertices. The following theorem gives the first hypergraph in an \(S\)-order of all linear unicyclic \(m\)-uniform hypergraphs with given girth.
**Theorem 4.8**.: _For \(m\geq 3\), in an \(S\)-order of \(\textbf{U}_{ef}^{m}\) the first hypergraph is \(E_{ef}^{m}\)._
Proof.: In an \(S\)-order of \(\textbf{U}_{ef}^{m}\), by Theorem 4.6, the first hypergraph is in \(\textbf{U}\). Since the spectral moments \(\mathrm{S}_{0},\mathrm{S}_{1},\ldots,\mathrm{S}_{3m-1}\) are the same in \(\textbf{U}\), the first significant spectral moment is the \(3m\)th. By Corollary 4.3, \(\mathrm{S}_{3m}\) is determined by the number of \(S_{3}^{(m)}\) and \(P_{3}^{(m)}\). For any \(\mathcal{H}\in\textbf{U}\), \(N_{\mathcal{H}}(S_{3}^{(m)})=0\).
Let \(\mathcal{T}_{1},\ldots,\mathcal{T}_{p}\) be pairwise disjoint binary \(m\)-uniform hypertrees, \(u_{i}\) be a pendent vertex of \(\mathcal{T}_{i}\) for each \(i\in[p]\) and \(v_{1},\ldots,v_{p}\) be core vertices of \(C_{e}^{(m)}\), where \(1\leq p\leq e(m-2)\) and \(\sum_{i=1}^{p}|E(\mathcal{T}_{i})|=f\). For any \(\mathcal{H}=C_{e}^{(m)}(v_{1},\ldots,v_{p})\bigodot(\mathcal{T}_{1}(u_{1}), \ldots,\mathcal{T}_{p}(u_{p}))\in\textbf{U}\), let \(e(\mathcal{H})\) denote the set of all edges of \(\mathcal{H}-E(C_{e}^{(m)})\) that contain at least \(3\) vertices whose degree is equal to \(2\). Let the vertex \(u_{i}\) as a root in \(\mathcal{T}_{i}\). We can repeatedly apply the transformation from Lemma 3.4 at any two vertices \(u,v\in e\in e(\mathcal{H})\) with largest distance from the root in every hypertree \(\mathcal{T}_{i}\) and \(d_{u}=d_{v}=2\), as long as \(\mathcal{T}_{i}\) does not become a hyperpath. By Lemma 3.4, each application of this transformation strictly decreases the number of sub-hyperpaths with \(3\) edges.
When all hypertrees \(\mathcal{T}_{1},\ldots,\mathcal{T}_{p}\) have turned into hyperpaths, we can repeatedly apply transformation \(5\), as long as there exist at least two hyperpaths of length at least one. By Lemma 4.7, each application of transformation \(5\) strictly decreases the number of sub-hyperpaths with \(3\) edges. At the end of this process, we arrive at \(E_{ef}^{m}\).
## Acknowledgments
This work is supported by the National Natural Science Foundation of China (No. 11801115, No. 12071097, No. 12042103 and No. 12242105), the Natural Science Foundation of the Heilongjiang Province (No. QC2018002) and the Fundamental Research Funds for the Central Universities.
|
2309.07828 | EMOCONV-DIFF: Diffusion-based Speech Emotion Conversion for Non-parallel
and In-the-wild Data | Speech emotion conversion is the task of converting the expressed emotion of
a spoken utterance to a target emotion while preserving the lexical content and
speaker identity. While most existing works in speech emotion conversion rely
on acted-out datasets and parallel data samples, in this work we specifically
focus on more challenging in-the-wild scenarios and do not rely on parallel
data. To this end, we propose a diffusion-based generative model for speech
emotion conversion, the EmoConv-Diff, that is trained to reconstruct an input
utterance while also conditioning on its emotion. Subsequently, at inference, a
target emotion embedding is employed to convert the emotion of the input
utterance to the given target emotion. As opposed to performing emotion
conversion on categorical representations, we use a continuous arousal
dimension to represent emotions while also achieving intensity control. We
validate the proposed methodology on a large in-the-wild dataset, the
MSP-Podcast v1.10. Our results show that the proposed diffusion model is indeed
capable of synthesizing speech with a controllable target emotion. Crucially,
the proposed approach shows improved performance along the extreme values of
arousal and thereby addresses a common challenge in the speech emotion
conversion literature. | Navin Raj Prabhu, Bunlong Lay, Simon Welker, Nale Lehmann-Willenbrock, Timo Gerkmann | 2023-09-14T16:18:49Z | http://arxiv.org/abs/2309.07828v2 | # Emoconv-Difff: Diffusion-Based Speech Emotion Conversion for Non-Parallel and in-the-Wild Data
###### Abstract
Speech emotion conversion is the task of converting the expressed emotion of a spoken utterance to a target emotion while preserving the lexical content and speaker identity. While most existing works in speech emotion conversion rely on acted-out datasets and parallel data samples, in this work we specifically focus on more challenging in-the-wild scenarios and do not rely on parallel data. To this end, we propose a diffusion-based generative model for speech emotion conversion, the EmoConv-Diff, that is trained to reconstruct an input utterance while also conditioning on its emotion. Subsequently, at inference, a target emotion embedding is employed to convert the emotion of the input utterance to the given target emotion. As opposed to performing emotion conversion on categorical representations, we use a continuous arousal dimension to represent emotions while also achieving intensity control. We validate the proposed methodology on a large in-the-wild dataset, the MSP-Podcast v1.10. Our results show that the proposed diffusion model is indeed capable of synthesizing speech with a controllable target emotion. Crucially, the proposed approach shows improved performance along the extreme values of arousal and thereby addresses a common challenge in the speech emotion conversion literature.
Navin Raj Prabhu\({}^{\star\dagger}\), Bunlong Lay\({}^{\star}\), Simon Welker\({}^{\star}\), Nale Lehmann-Willenbrock\({}^{\dagger}\), Timo Gerkmann\({}^{\star}\)

\({}^{\star}\)Signal Processing, Universität Hamburg, Germany
\({}^{\dagger}\)Industrial and Organizational Psychology, Universität Hamburg, Germany
[email protected]

**Index Terms:** Speech emotion conversion, diffusion models, non-parallel samples, arousal, in-the-wild
## 1 Introduction
Speech is one of the key social signals used by humans to express their emotions [1]. While significant developments have been made in speech generation and synthesis, _emotion-conditioned_ speech synthesis is still a challenge [1, 2]. In the context of human-machine interaction, to improve the naturalness of machine communication, the generation of emotionally expressive speech is required [1]. Speech emotion conversion (SEC) is a sub-field of emotion-conditioned speech synthesis that aims to map a speech signal into another speech signal by converting its emotional expression while preserving the lexical information and the speaker's identity [3].
Emotions are represented in SEC as either _categorical_ (e.g., six basic emotions [4]) [3, 5] or _continuous_ (e.g., circumplex model [6]) [7] representations. It is well established in the speech emotion recognition (SER) and psychology literature that emotion is a complex construct with _fuzzy_ class boundaries [8], and the categorical representations (e.g., happy, anger) do not aptly capture the subtle difference between human emotions [6]. The circumplex model contrarily represents emotions using _continuous_ and independent dimensions, i.e., _arousal_ (relaxed vs. activated) and _valence_ (positive vs. negative) [6]. While the audio modality typically captures the arousal dimension of emotion well, it insufficiently explains valence [9, 10]. Therefore, in this work, we follow [7] and represent emotion using the continuous arousal dimension. Moreover, by using the continuous representations (arousal on a scale of 1 to 7) we directly achieve intensity control in SEC, as opposed to an additional effort required for categorical representations (e.g., [5, 11]).
Current SEC systems are typically trained on high-quality recorded speech data that are _acted-out_ by professional actors. As a consequence the resulting algorithms are typically sensitive to noise and variabilities pertinent in real-world scenarios [12] (e.g., acoustic noise, speaker variabilities, subtle intentions, or vocal bursts that carry emotion; e.g., [13]). Furthermore, SEC systems trained on acted-out speech may create stereotypical portrayals of emotions [12]. Another crucial drawback of acted-out datasets is that they require _parallel_ utterances, i.e., each source utterance is required to also have a ground-truth utterance of a target emotion [14, 15]. However, parallel utterances are expensive to collect [14], and models trained on them lack scalability [1]. In this work, we address these drawbacks of acted-out and parallel data by specifically focusing on non-parallel _in-the-wild_ data.
A challenge in overcoming the usage of parallel utterances is the problem of _disentanglement_, where a disentanglement technique is required to decompose the source utterance into several constituents (i.e., emotion, lexical, and speaker information) before synthesizing speech for a target emotion [1, 3]. Existing works have employed encoder-decoders [5], generative adversarial networks [14], and self-supervised learning (SSL) [7] for the disentanglement. Recently, the so-called _diffusion models_ have been introduced for the synthesis of high-quality samples, both in the audio- and image-domain [16, 17]. Further in [17], the disentanglement capability of diffusion models was uncovered for the task of text-conditioned image editing and demonstrated strong control over the image synthesis process.
For in-the-wild SEC without relying on parallel utterances, we introduce a diffusion-based approach that is trained to reconstruct a source utterance while also conditioning on its emotion. Subsequently, at inference, a target emotion embedding is employed to convert the emotion of the source utterance to the given target emotion. As such, the contributions of this paper are as follows: We introduce a novel emotion-conditioned diffusion model that does not rely on parallel utterances for SEC, which is in contrast to existing emotion-conditioned diffusion models that rely on parallel utterances and operate on the text-to-speech (TTS) domain [18, 11]. Building on our previous work [7], our model can cope with unseen real-world scenarios, as it is trained on non-parallel in-the-wild
speech utterances. To the best of our knowledge, we are the first to tackle this problem of non-parallel in-the-wild data for SEC, and the paper at hand is the first to employ diffusion models for this. Finally, the proposed approach improves over the _HiFiGAN_[7] for extreme target emotions, a common problem in SEC and TTS [7, 19].
## 2 Diffusion Models
_Diffusion models_ are used in various applications across domains for the task of generation, such as image editing [17], speech enhancement [20], and TTS [21]. The idea behind these models involves adding Gaussian noise to the data using a stochastic differential equation (SDE). The _forward SDE_ or _forward process_ can be viewed as transforming an initial distribution into a terminating distribution that is usually tractable and available during inference. Under mild constraints, a forward SDE can be inverted by the _reverse SDE_[22]. The reverse SDE or _reverse process_ transforms the terminating distribution of the forward process back into the initial distribution, during which the disentanglement is achieved [17].
In the extant literature, emotion-conditioned diffusion models rely on parallel data and operate on the TTS domain [11, 18]. In [18], the GradTTS-based _EmoDiff_ was introduced. EmoDiff achieves emotion-conditioned speech synthesis from source text using a soft-label guidance technique in the reverse process. [11] introduces _EmoMix_, which uses pretrained SER embeddings of a reference utterance to exemplify the target emotional prosody and condition on the desired emotion. Note that both [18] and [11] rely on acted-out parallel utterances and operate on the TTS domain.
## 3 Proposed Methodology: Emoconv-Diff
We define the SEC task as follows: given the mel spectrogram of a source speech utterance \(\mathbf{X}_{l,s,e}\) (or simply \(\mathbf{X}_{0}\)), containing lexical content \(l\), speaker identity \(s\), and emotion information \(e\), we aim to generate a new mel spectrogram \(\hat{\mathbf{X}}_{l,s,\bar{e}}\) that only transforms the arousal information to a target value \(\bar{e}\). For this, we introduce a diffusion-based approach, the _EmoConv-Diff_, which is summarized in Fig. 1. The EmoConv-Diff comprises a set of _encoders_, each encoding the attributes to be disentangled, and a diffusion-based _decoder_, which aims to disentangle the attributes and perform emotion-controllable speech synthesis. The output of the diffusion decoder is a mel spectrogram \(\hat{\mathbf{X}}_{l,s,\bar{e}}\in\mathbb{R}^{n\times T}\), and it is converted into a time-domain speech signal using a pretrained HiFiGAN vocoder [23].
### Encoders
The EmoConv-Diff comprises three encoders: the _phoneme encoder_\(\phi(.)\), the _speaker encoder_\(S(.)\), and the _emotion encoder_\(E(.)\).
**Phoneme Encoding:** Speaker- and emotion-independent "average voice" phoneme-level mel features are used to encode the lexical content \(l\). Let \(\mathbf{Y}\coloneqq\phi(\mathbf{X}_{0})\) be the "average voice" representation of the source audio, where \(\phi(.)\) is the pretrained phoneme encoder. The transformer-based encoder, adopted from [24], has been used previously in voice conversion tasks. The encoder output (see \(\mathbf{Y}\) in Fig. 1) has the same dimensions as the source mel \(\mathbf{X}_{0}\in\mathbb{R}^{n\times T}\).
**Speaker Encoding:** To encode the speaker identity, we use a pre-trained speaker verification model \(S(.)\)[25], following [24]. The output of \(S(.)\) is a \(d\)-vector speaker representation \(S(.)\in\mathbb{R}^{128}\).
**Emotion Encoding:** To encode emotional information, we use a pretrained SSL-based SER system \(E(.)\in\mathbb{R}^{1024}\), introduced in [26]. The \(E(.)\) network was built by fine-tuning the Wav2Vec2-Large-Robust network [26] on the MSP-Podcast (v1.7) dataset [8].
### Diffusion-based decoder
The diffusion-based decoder follows the SDE formalism by [21]. Specifically, let \(t\) be the continuous diffusion time-step variable describing the progress of the diffusion process. For \(0\leq t\leq 1\) the forward SDE of this work is given by
\[\mathrm{d}\mathbf{X}_{t}=\frac{1}{2}\beta_{t}(\mathbf{Y}-\mathbf{X}_{t}) \mathrm{d}t+\sqrt{\beta_{t}}\mathrm{d}\mathbf{w}, \tag{1}\]
where \(\mathbf{w}\) is the standard Wiener process [27], \(\mathbf{X}_{t}\) is the current process state with initial condition \(\mathbf{X}_{0}=\mathbf{X}_{l,s,e}\) and \(\beta_{t}\) is a non-negative function called the noise schedule. The process state \(\mathbf{X}_{t}\) follows a Gaussian distribution [27, Section 5] that is called the _perturbation kernel_:
\[p_{0t}(\mathbf{X}_{t}|\mathbf{X}_{0},\mathbf{Y})=\mathcal{N}\left(\mathbf{X}_{t};\boldsymbol{\mu}(\mathbf{X}_{0},\mathbf{Y},t),\sigma(t)^{2}\mathbf{I}\right). \tag{2}\]
The mean evolution of \(\mu(\mathbf{X}_{0},\mathbf{Y},t)\), or simply \(\mu(t)\), is given by
\[\mu(\mathbf{X}_{0},\mathbf{Y},t)=\alpha_{t}\mathbf{X}_{0}+\left(1-\alpha_{t} \right)\mathbf{Y}, \tag{3}\]
Figure 1: Illustration of the training and inference process of the proposed EmoConv-Diff approach. Dotted arrows denote operations performed only during training. The _stop gradient_ function stops the accumulation of the gradients of the inputs during the training.
where \(\alpha_{t}=e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}\,\mathrm{d}s}\) and the variance evolution is given by

\[\sigma(t)^{2}=1-\alpha_{t}^{2}. \tag{4}\]

Since \(\alpha_{t}\) admits this closed form, we set \(\beta_{t}=b_{0}+t(b_{1}-b_{0})\) and choose \(b_{0},b_{1}>0\) such that \(\alpha_{1}\approx 0\). In this case, the mean evolution describes an interpolation starting at \(t=0\) at the distribution of source \(\mathbf{X}_{0}\) and terminating approximately at the distribution of "average voice" phoneme features \(\mathbf{Y}\) at \(t=1\). The forward SDE (1) has an associated reverse SDE [22]:
\[\mathrm{d}\mathbf{X}_{t}=\left[-\frac{1}{2}\beta_{t}(\mathbf{Y}-\mathbf{X}_{t})+\beta_{t}\nabla_{\mathbf{X}_{t}}\log p_{t}(\mathbf{X}_{t}|\mathbf{Y})\right]\mathrm{d}t+\sqrt{\beta_{t}}\,\mathrm{d}\widetilde{\mathbf{w}}\,, \tag{5}\]
where \(\mathrm{d}\widetilde{\mathbf{w}}\) is a Wiener process going backward through the diffusion time-steps. Moreover, the reverse process follows the same trajectory as the forward process, i.e., the reverse SDE starts approximately from the distribution of the "average voice" features and terminates at \(t=0\) in the distribution of the source targets.
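To make the forward process concrete, the following is a minimal numpy sketch (ours, not the authors' implementation) of sampling \(\mathbf{X}_{t}\) from the perturbation kernel (2) under the linear noise schedule; the endpoint values `b0` and `b1` are illustrative placeholders, not values reported in the paper.

```python
import numpy as np

def alpha_t(t, b0=0.05, b1=20.0):
    # alpha_t = exp(-0.5 * int_0^t beta_s ds) for beta_t = b0 + t * (b1 - b0)
    return np.exp(-0.5 * (b0 * t + 0.5 * (b1 - b0) * t ** 2))

def sample_xt(X0, Y, t, rng):
    """Draw X_t ~ N(mu(X0, Y, t), sigma(t)^2 I), cf. Eqs. (2)-(4)."""
    a = alpha_t(t)
    mu = a * X0 + (1.0 - a) * Y           # mean interpolates X0 -> Y, Eq. (3)
    sigma = np.sqrt(1.0 - a ** 2)         # variance evolution, Eq. (4)
    return mu + sigma * rng.standard_normal(X0.shape)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((80, 128))       # stand-in source mel (n x T)
Y = rng.standard_normal((80, 128))        # stand-in "average voice" features
Xt = sample_xt(X0, Y, t=0.5, rng=rng)
```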
A network called the _score model_\(\mathbf{s}_{\theta}(\mathbf{X}_{t},\mathbf{Y},S(\mathbf{X}_{0}),E(\mathbf{X}_{0}),t)\), or simply \(\mathbf{s}_{\theta}(\mathbf{X}_{t},t)\), is _trained_ to approximate the _score function_\(\nabla_{\mathbf{X}_{t}}\log p_{t}(\mathbf{X}_{t}|\mathbf{Y})\), i.e., the gradient of the log-density of the noisy data \(\mathbf{X}_{t}\). We use the U-Net architecture from [24] as the score model \(\mathbf{s}_{\theta}\). With the trained \(\mathbf{s}_{\theta}\), we can then use the reverse SDE to generate an estimate of the source target \(\mathbf{X}_{0}\) from the "average voice" \(\mathbf{Y}\) given speaker identity \(S(\mathbf{X}_{0})\) and emotion embeddings \(E(\mathbf{X}_{0})\). An intuition behind the reverse process is that the diffusion-based decoder is trained to reconstruct \(\mathbf{X}_{0}\) while learning the disentanglement between the speech attributes \(l\), \(s\), and \(e\). With this setup, we overcome the need for parallel data during the training process.
During _inference_, a target emotion embedding \(E(\bar{e})\) is employed to convert the emotion of the source utterance to the given target emotion. The target emotion embedding \(E(\bar{e})\) is defined as the _averaged_ emotion embedding of a set reference utterance samples belonging to the emotion category \(\bar{e}\), as
\[E(\bar{e})\coloneqq\frac{1}{|A_{p}(\bar{e})|}\sum_{\mathbf{X}_{0}\in A_{p}( \bar{e})}E(\mathbf{X}_{0}), \tag{6}\]
where the set of reference samples \(A_{p}(\bar{e})\) is defined to be the top \(p=20\%\) samples belonging to the particular target arousal \(\bar{e}\).
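As an illustration of Eq. (6), the sketch below (our reading of the text, with precomputed embeddings standing in for the pretrained SER network \(E(\cdot)\)) averages the embeddings of the top-\(p\) reference samples for a target arousal; the closeness criterion used to pick the top-\(p\) set is an assumption on our part.

```python
import numpy as np

def target_emotion_embedding(embeddings, arousal, target, p=0.2):
    """Average the SER embeddings of the top-p reference samples for the
    target arousal value, cf. Eq. (6).

    embeddings: (N, 1024) precomputed E(X0) vectors of reference utterances.
    arousal:    (N,) predicted arousal score of each reference utterance.
    """
    k = max(1, int(p * len(arousal)))
    top = np.argsort(np.abs(arousal - target))[:k]   # k samples closest to target
    return embeddings[top].mean(axis=0)
```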
### Loss functions
The score model is trained on the _score matching_ loss [28] which aims to approximate the score function. The score matching loss for \(\mathbf{X}_{0}\) at time \(t\) is formulated as
\[\mathcal{L}_{s}(\mathbf{X}_{t})=\mathbb{E}_{\epsilon_{t}}\left[||\mathbf{s}_{ \theta}(\mathbf{X}_{t},t)+\sigma(t)^{-1}\epsilon_{t}||_{2}^{2}\right] \tag{7}\]
where \(\mathbf{X}_{t}=\mu(t)+\sigma(t)\epsilon_{t}\) and \(\epsilon_{t}\) is sampled from \(\mathcal{N}(0,\mathbf{I})\). In addition to \(\mathcal{L}_{s}\), we follow [7, 11] to use a mel spectrogram reconstruction loss for better conditioning on emotion attributes. \(\mathcal{L}_{m}\) measures the \(L_{1}\)-norm:
\[\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})=||\mathbf{X}_{0}-\hat{\mathbf{X}}_{0}||_{1}, \tag{8}\]
where \(\hat{\mathbf{X}}_{0}\) is the mel spectrogram of synthesized speech. Note here that during the training of the score model it is expensive to obtain \(\hat{\mathbf{X}}_{0}\), which requires solving the full reverse SDE. To avoid this, in contrast to [11], we utilize a single-step approximation of \(\hat{\mathbf{X}}_{0}\) relying only on \(\mathbf{X}_{t}\), \(\mathbf{s}_{\theta}\), and \(\mathbf{Y}\), which are available during training. We use Tweedie's formula [29] to approximate \(\hat{\mathbf{X}}_{0}\) as
\[\hat{\mathbf{X}}_{0}=\frac{\hat{\mu}(t)-\left(1-\alpha_{t}\right)\mathbf{Y}}{ \alpha_{t}}\,. \tag{9}\]
where \(\hat{\mu}(t)\) is an estimate of \(\mu(t)\) (3), given by \(\hat{\mu}(t)=\mathbf{X}_{t}+\sigma(t)^{2}\,\mathbf{s}_{\theta}(\mathbf{X}_{t},t)\). With that, the final loss function is
\[\mathcal{L}(\mathbf{X}_{t},\hat{\mathbf{X}}_{0})=\mathcal{L}_{s}(\mathbf{X}_{t })+\lambda_{t}\mathcal{L}_{m}(\hat{\mathbf{X}}_{0}), \tag{10}\]
where \(\lambda_{t}\) is a weighting function depending on the current diffusion time-step \(t\). Considering that \(\mathbf{X}_{t}\) contains more Gaussian noise for larger \(t\), we set \(\lambda_{t}=1-t^{2}\), thereby weighting more for smaller \(t\) values and gradually decreasing the weights for larger \(t\).
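Putting (7)-(10) together, a hedged numpy sketch of a single-sample training loss follows; `score_model` is a stand-in for the U-Net \(\mathbf{s}_{\theta}\), the schedule endpoints are illustrative assumptions, and in practice this would be written in a deep-learning framework with batched tensors.

```python
import numpy as np

def emoconv_diff_loss(score_model, X0, Y, t, rng, b0=0.05, b1=20.0):
    """Single-sample estimate of the total loss (10): score matching (7)
    plus the weighted mel reconstruction term (8) via Tweedie's formula (9).
    Assumes t in (0, 1] so that sigma(t) > 0."""
    a = np.exp(-0.5 * (b0 * t + 0.5 * (b1 - b0) * t ** 2))   # alpha_t
    sigma = np.sqrt(1.0 - a ** 2)                            # Eq. (4)
    eps = rng.standard_normal(X0.shape)
    Xt = a * X0 + (1.0 - a) * Y + sigma * eps                # sample X_t
    score = score_model(Xt, t)
    L_s = np.mean((score + eps / sigma) ** 2)                # Eq. (7)
    mu_hat = Xt + sigma ** 2 * score                         # Tweedie estimate of mu(t)
    X0_hat = (mu_hat - (1.0 - a) * Y) / a                    # Eq. (9)
    L_m = np.sum(np.abs(X0 - X0_hat))                        # Eq. (8)
    return L_s + (1.0 - t ** 2) * L_m                        # Eq. (10), lambda_t = 1 - t^2
```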
## 4 Experimental setup
**Dataset:** The proposed methodology is trained and validated on the _in-the-wild_ MSP-Podcast dataset (v1.10) [8]. In contrast to predominant SEC datasets (e.g., ESD [30], IEMOCAP [31]), the dataset is larger (\(\approx\)23hrs of audio), has utterances of variable duration, has over 1400 speakers, and contains naturalistic emotional expressions. For example, ESD contains only \(\approx\)29 hours of acted-out utterances from 10 English speakers. The arousal annotations, collected at the utterance level on a scale of 1 to 7, are distributed with \(\mu=\)4 and \(\sigma=\)0.95.
**Validation measures:** We validate the proposed methodology in terms of both the SEC capabilities and the speech quality of the synthesized signal. As the measure of SEC capability, we use the mean-squared \(L_{mse}\) and mean-absolute \(L_{abs}\) errors, calculated between the target arousal \(\bar{e}\) and the SER prediction on the synthesized output \(E(\hat{\mathbf{X}})\). As the measure of speech quality, we use the DNSMOS [32], a non-intrusive objective speech quality metric designed to predict the mean-opinion score (on a scale of 1 to 5) results of subjective listening tests (i.e., P.835 [32]). Specifically, we use the metric measuring the overall signal quality _OVRL_ and the metric measuring the speech quality _SIG_. Note here that intrusive metrics cannot be used to evaluate in-the-wild recordings, like our dataset, as the reference is not available due to the lack of parallel data. Statistical significance for improved performance is estimated using a one-tailed \(t\)-test on error distributions, asserting significance for _p_-values \(\leq 0.05\).
## 5 Results and discussion
**Overall performance:** We validate the overall performance of the proposed _EmoConv-Diff_ against a baseline, the HiFiGAN-based SEC system [7], henceforth referred to as _HiFiGAN_, which to the best of our knowledge is the only prior work on SEC using in-the-wild and non-parallel data. In addition to HiFiGAN [7], we use three different versions of the EmoConv-Diff: (i) \(\mathcal{L}_{s}\), trained only on the score matching loss \(\mathcal{L}_{s}\); (ii) \(\mathcal{L}_{s}+\mathcal{L}_{m}(\mathbf{X}_{t})\), which additionally uses the mel reconstruction loss \(\mathcal{L}_{m}\) computed on \(\mathbf{X}_{t}\); and (iii) \(\mathcal{L}_{s}+\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})\), where the mel reconstruction loss \(\mathcal{L}_{m}\) is computed on the approximated source mel spectrogram \(\hat{\mathbf{X}}_{0}\) (9).
| | DNSMOS SIG \(\uparrow\) | DNSMOS OVRL \(\uparrow\) | \(\mathcal{L}_{mse}\) \(\downarrow\) | \(\mathcal{L}_{abs}\) \(\downarrow\) |
| --- | --- | --- | --- | --- |
| HiFiGAN [7] | **3.21** | **2.79** | 0.084 | 24\% |
| \(\mathcal{L}_{s}\) | 3.20 | 2.78 | 0.091 | 25\% |
| \(\mathcal{L}_{s}+\mathcal{L}_{m}(\mathbf{X}_{t})\) | 3.08 | 2.62 | 0.121 | 34\% |
| \(\mathcal{L}_{s}+\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})\) | **3.21** | 2.78 | **0.072\*** | **21\%** |

Table 1: Overall performance of model versions. * indicates statistically significant improvements in results.
From the results presented in Table 1, we note the following. First, the EmoConv-Diff version \(\mathcal{L}_{s}+\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})\) achieves the best SER errors with statistical significance, achieving \(\mathcal{L}_{mse}\) of 0.072 and \(\mathcal{L}_{abs}\) of \(21\%\). This confirms the emotion conversion capability of the proposed diffusion model. Second, in terms of speech quality and overall signal quality, the EmoConv-Diff version \(\mathcal{L}_{s}+\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})\) performs on par with the HiFiGAN baseline, achieving a speech quality of 3.21 SIG and an overall signal quality of 2.78 OVRL. Third, the introduction of the mel reconstruction loss \(\mathcal{L}_{m}(\hat{\mathbf{X}}_{0})\), computed on the derived approximation of the source \(\mathbf{X}_{0}\) (9), improves the performance of the diffusion model in terms of both the DNSMOS scores and the SER errors. Finally, when the mel reconstruction loss \(\mathcal{L}_{m}\) is computed on \(\mathbf{X}_{t}\), the performance in terms of the SER errors diminishes, signifying the noisy nature of \(\mathbf{X}_{t}\) and the efficiency of the derived \(\hat{\mathbf{X}}_{0}\) during the training phase.
**Qualitative analysis of spectrograms:** Fig. 2 shows sample spectrograms of the source speech \(\mathbf{X}_{0}\) of arousal \(e=3.20\), the converted speech of _reduced_ arousal \(\bar{e}=1\), and of _increased_ arousal \(\bar{e}=7\). A high average pitch of the speech signal is directly associated with a high intensity of emotion [5, 7]. Therefore, in Fig. 2, we also plot the pitch contours of the respective converted speech and the source \(\mathbf{X}_{0}\). Comparing the spectrograms of arousal 1 and arousal \(7\), from the marked ellipses we can observe that for an increased arousal of \(7\) the spectrograms have larger magnitudes in the mid-frequencies. This reveals that the proposed EmoConv-Diff model associates _larger frequency magnitudes_ with high arousal speech than with low arousal speech. From the pitch contours, it can be further noted that the synthesized speech for high arousal (\(\bar{e}=7\)) has a _higher mean and variability of pitch_ than both the ground-truth speech (\(e=3.20\)) and the synthesized speech for low arousal (\(\bar{e}=1\)). This difference in pitch is also clearly notable in the audio examples available online1. These results show that the proposed model successfully performs SEC by aptly conditioning on the emotion content.
Footnote 1: [https://uhh.de/inf-sp-emoconvdiff](https://uhh.de/inf-sp-emoconvdiff)
**Performance for target arousal \(\bar{e}\):** SEC systems generally tend to perform well on certain emotion pairs and emotion classes. For example, [14] notes that the emotional pairing of "angry" and "sad" is easier to convert than the pairing of "happy" and "angry". Moreover, given that the emotion classes are imbalanced in in-the-wild datasets [8], with fewer samples along the extremes of the emotion scale, SEC for extreme values of \(\bar{e}\) is a general challenge [7, 19]. To investigate this, in Fig. 3a, we plot the \(\mathcal{L}_{mse}\) performance with respect to each of the target arousal classes \(\bar{e}\). From the plot, we note that the proposed EmoConv-Diff model makes the _largest_ improvements along the extreme target arousal values (i.e., \(\bar{e}=\)1 and 7) while performing on par along the mid-scale arousal values (i.e., \(\bar{e}=\)2, 3, 4, 5 and 6). This confirms that the proposed EmoConv-Diff overcomes a crucial shortcoming of existing SEC systems by improving along the extreme values of \(\bar{e}\).
**Performance for source arousal \(e\):** While it is important to evaluate the SEC performance with respect to the target arousal \(\bar{e}\), it is also important to evaluate it with respect to the arousal of the source speech \(\mathbf{X}_{0}\) (i.e., \(e\)). In Fig. 3b, we also plot the \(\mathcal{L}_{mse}\) performances with respect to \(e\) and observe the following. First, for both the HiFiGAN [7] and the proposed EmoConv-Diff, the standard deviation of \(\mathcal{L}_{mse}\) with respect to the extreme emotions (\(e=\)1 and 7) is _larger_ than for the mid-scale values of \(e\). This indicates that it is generally harder for SEC systems to convert the emotion of a source \(\mathbf{X}_{0}\) with already extreme emotions (i.e., \(e=\)1 and 7), while it is easier to convert the emotion of a source \(\mathbf{X}_{0}\) with neutral emotion (i.e., \(e=\)3, 4 and 5). Second, we note contrasting behaviors between the HiFiGAN and the EmoConv-Diff. While the EmoConv-Diff achieves better performance for _higher_ source arousal values (\(e>\)3) than _lower_ arousal values, the HiFiGAN does better for _lower_ source arousal values (\(e<\)3) than _higher_ arousal values. Moreover, the proposed EmoConv-Diff model performs better than the HiFiGAN in four of the seven arousal classes, which points to the superior SEC capability of the EmoConv-Diff compared to the HiFiGAN baseline.
## 6 Conclusion
Emotion-conditioned speech synthesis (ESS) is an important application that can promote the naturalness of machine communication. Speech emotion conversion (SEC) is a sub-field of ESS. In this paper, we moved beyond the typical reliance on acted-out data sets and parallel samples in SEC, by proposing a diffusion-based generative model and using the continuous arousal dimension to represent emotions while also achieving intensity control. We validated our model using the MSP-Podcast v1.10, a large in-the-wild dataset. We show that our proposed diffusion model, the EmoConv-Diff, is indeed able to synthesize speech for a controllable target emotion. In particular, in comparison to our prior work [7], our model shows improved performance along the extreme values of arousal and thereby addresses a common challenge in the SEC literature [7, 19].
Figure 3: Class-wise \(L_{mse}\) performances for target arousal \(\bar{e}\) and ground-truth arousal \(e\).
Figure 2: Sample log-energy spectrogram of emotion converted speech, along with comparisons on pitch contours. |
2304.00163 | Soft-Bellman Equilibrium in Affine Markov Games: Forward Solutions and
Inverse Learning | Markov games model interactions among multiple players in a stochastic,
dynamic environment. Each player in a Markov game maximizes its expected total
discounted reward, which depends upon the policies of the other players. We
formulate a class of Markov games, termed affine Markov games, where an affine
reward function couples the players' actions. We introduce a novel solution
concept, the soft-Bellman equilibrium, where each player is boundedly rational
and chooses a soft-Bellman policy rather than a purely rational policy as in
the well-known Nash equilibrium concept. We provide conditions for the
existence and uniqueness of the soft-Bellman equilibrium and propose a
nonlinear least-squares algorithm to compute such an equilibrium in the forward
problem. We then solve the inverse game problem of inferring the players'
reward parameters from observed state-action trajectories via a
projected-gradient algorithm. Experiments in a predator-prey OpenAI Gym
environment show that the reward parameters inferred by the proposed algorithm
outperform those inferred by a baseline algorithm: they reduce the
Kullback-Leibler divergence between the equilibrium policies and observed
policies by at least two orders of magnitude. | Shenghui Chen, Yue Yu, David Fridovich-Keil, Ufuk Topcu | 2023-03-31T22:50:47Z | http://arxiv.org/abs/2304.00163v2 | # Soft-Bellman Equilibrium in Affine Markov Games:
###### Abstract
Markov games model interactions among multiple players in a stochastic, dynamic environment. Each player in a Markov game maximizes its expected total discounted reward, which depends upon the policies of the other players. We formulate a class of Markov games, termed _affine Markov games_, where an affine reward function couples the players' actions. We introduce a novel solution concept, the _soft-Bellman equilibrium_, where each player is boundedly rational and chooses a soft-Bellman policy rather than a purely rational policy as in the well-known Nash equilibrium concept. We provide conditions for the existence and uniqueness of the soft-Bellman equilibrium and propose a nonlinear least squares algorithm to compute such an equilibrium in the _forward problem_. We then solve the _inverse game problem_ of inferring the players' reward parameters from observed state-action trajectories via a projected gradient algorithm. Experiments in a predator-prey OpenAI Gym environment show that the reward parameters inferred by the proposed algorithm outperform those inferred by a baseline algorithm: they reduce the Kullback-Leibler divergence between the equilibrium policies and observed policies by at least two orders of magnitude.
## I Introduction
Markov games model the interaction of multiple decision-makers in stochastic and dynamic environments [1]. In a Markov game, each player's transition and reward depend on the policies of the other players, and each player aims to find an optimal policy that maximizes its expected discounted total reward.
The Nash equilibrium is a fundamental concept for analyzing strategic interactions among multiple decision-makers [2]. In a Markov game, it refers to a collection of policies where no player can benefit by unilaterally changing its policy [1]. The Nash equilibrium concept assumes that the players are perfectly rational, seeking to maximize their rewards without perceptual errors or cognitive biases.
However, not all players are perfectly rational. Humans, for example, have limited cognitive capacity and are subject to biases and heuristics that can affect their decision-making, bounding their rationality. As a result, the outcomes of games played by humans may not always align with the predictions from the Nash equilibrium concept. Recent efforts attempt to address this limitation of the Nash equilibrium by accounting for players' bounded rationality in games with specific structures, including matrix games [3], fully cooperative games [4, 5, 6], and two-player games [7, 8]. Another recent work tackles the same limitation in dynamic games with continuous state and action spaces [9]. However, to the best of our knowledge, no work has yet addressed this limitation in _general-sum, multi-player_ Markov games with _discrete_ state and action spaces.
We propose the _soft-Bellman equilibrium_ as a new solution concept in _affine Markov games_, a class of Markov games where an affine reward function couples the players' actions. In a soft-Bellman equilibrium, each player chooses a policy that maximizes the expected reward with causal entropy regularization while satisfying independent transition dynamics. We provide conditions for the existence and uniqueness of the soft-Bellman equilibrium.
We study the _forward problem_ of computing a soft-Bellman equilibrium in a given affine Markov game. We propose a least-squares-based algorithm to solve this problem by minimizing the residuals of the soft-Bellman equilibrium conditions.
We then turn to the _inverse game problem_ of inferring the players' reward parameters that best explain observed interactions. We propose an iterative algorithm that leverages the solutions to the forward problem. In each iteration, the algorithm computes the soft-Bellman equilibrium given the current reward parameters and then updates those parameters with a projected-gradient method based on the implicit function theorem [10].
Using a synthetic dataset in a predator-prey OpenAI Gym environment [11], we compare the proposed inverse game algorithm with a baseline algorithm that ignores the coupling between players. Results show that the proposed algorithm terminates with a Kullback-Leibler divergence between the equilibrium policies and observed policies at least two orders of magnitude lower than that of the baseline algorithm.
## II Related Work
In single-agent settings, literature in inverse reinforcement learning studies the problem of inferring reward parameters from human experts' trajectories. The principle of maximum entropy is a popular approach in this direction [12]. Subsequent studies further extend this principle to accommodate stochastic transitions using causal entropy [13]. For example, recent work extends the maximum causal entropy framework in inverse reinforcement learning to an infinite time horizon setting and proposes the concept of stationary soft-Bellman policy [14]. This policy concept inspires the formulation of the soft-Bellman equilibrium to account for the players' bounded rationality, a feature lacking in the Nash equilibrium concept.
In multi-agent settings, most existing works that try to address the limitation of Nash equilibrium assume specific game structures, including matrix games [3], fully cooperative games [4, 5, 6], two-player zero-sum games [7], and two-player general-sum games [8]. This paper generalizes the existing works to _multi-player, general-sum_ Markov games.
First formulated in normal-form and extensive-form games, the quantal response equilibrium is a solution concept to model the bounded rationality of human players [15, 16]. Inspired by this solution concept, recent work proposes the entropic cost equilibrium to extend the quantal response equilibrium to games with _continuous_ states and actions [9].
The current work, on the other hand, proposes the soft-Bellman equilibrium to support stochastic transitions in affine Markov games with _discrete_ state and action spaces. Although both the entropic cost equilibrium and the soft-Bellman equilibrium extend the quantal response equilibrium, the soft-Bellman equilibrium is different in choosing the state-action frequency matrix, instead of the policy, as the variable to optimize for each player. This subtle difference changes the expected reward from a nonconvex function to a convex one, laying the groundwork for establishing conditions that ensure the existence and uniqueness of solutions.
## III Models
We present our main theoretical models: a special class of Markov games, along with a novel equilibrium concept that accounts for bounded rationality.
### _Affine Markov Games_
We consider a Markov game [1] where each player solves an MDP with independent dynamics and an affine reward function that couples the players' actions. We let \(p\in\mathbb{N}\) denote the number of players. Player \(i\in[p]\) solves an MDP specified by a tuple which includes a set of states, a set of actions, a transition kernel, an initial state distribution, a reward matrix, and a discount factor. We let \(n^{i}\in\mathbb{N}\) and \(m^{i}\in\mathbb{N}\) denote the number of states and actions for player \(i\), respectively. We let \(S^{i}_{t}\in[n^{i}]\) and \(A^{i}_{t}\in[m^{i}]\) denote the state and action of player \(i\) at time \(t\in\mathbb{N}\). Each action triggers a stochastic transition between the current state to the next state. We let \(T^{i}\in\mathbb{R}^{n^{i}\times m^{i}\times n^{i}}\) denote the _transition kernel_ of player \(i\) such that
\[T^{i}_{saj}\coloneqq\mathds{P}(S^{i}_{t+1}=j|S^{i}_{t}=s,A^{i}_{t}=a) \tag{1}\]
for all \(t\in\mathbb{N}\), \(s,j\in[n^{i}]\) and \(a\in[m^{i}]\). We let \(q^{i}\in\mathbb{R}^{n^{i}}\) denote the _initial state distribution_ of player \(i\) such that
\[q^{i}_{s}\coloneqq\mathds{P}(S^{i}_{0}=s) \tag{2}\]
for all \(s\in[n^{i}]\). We let \(R^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) denote the reward matrix, where \(R^{i}_{sa}\) denotes the reward of player \(i\) for choosing action \(a\) in state \(s\). Finally, we let \(\gamma\in[0,1)\) denote a reward discount factor. For each player \(i\in[p]\), a stationary policy maps each state to a probability distribution over actions. We denote such a policy as a matrix \(\Pi^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) where
\[\Pi^{i}_{sa}\coloneqq\mathds{P}(A^{i}_{t}=a|S^{i}_{t}=s) \tag{3}\]
for all \(t\in\mathbb{N},s\in[n^{i}],a\in[m^{i}]\). An optimal stationary policy in an MDP maximizes the following expected total discounted state-action reward

\[\sum_{t=0}^{\infty}\sum_{s=1}^{n^{i}}\sum_{a=1}^{m^{i}}\gamma^{t}\mathds{P}(S^{i}_{t}=s,A^{i}_{t}=a)R^{i}_{sa}. \tag{4}\]
We let \(Y^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) denote the state-action frequency matrix of player \(i\in[p]\) such that
\[Y^{i}_{sa}\coloneqq\sum_{t=0}^{\infty}\gamma^{t}\mathds{P}(S^{i}_{t}=s,A^{i}_{t}=a) \tag{5}\]
for all \(s\in[n^{i}]\) and \(a\in[m^{i}]\).
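For concreteness, the discounted state-action frequency (5) of a fixed stationary policy can be computed by solving a linear flow-balance system; the following numpy helper is a sketch of ours, not part of the paper's toolchain.

```python
import numpy as np

def state_action_frequency(T, q, Pi, gamma=0.99):
    """Y[s, a] = sum_t gamma^t P(S_t = s, A_t = a), cf. Eq. (5).

    T: (n, m, n) transition kernel, q: (n,) initial distribution,
    Pi: (n, m) stationary policy.
    """
    n = T.shape[0]
    M = np.einsum("sa,saj->sj", Pi, T)               # state kernel under Pi
    d = np.linalg.solve(np.eye(n) - gamma * M.T, q)  # discounted state occupancy
    return d[:, None] * Pi                           # Y[s, a] = d[s] * Pi[s, a]
```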
We now introduce the definition of a \(p\)-player affine Markov game.
**Definition 1**.: _A \(p\)-player affine Markov game is a collection of MDPs \(\{\mathcal{M}^{i}=\{[n^{i}],[m^{i}],q^{i},T^{i},R^{i},\gamma\}\}_{i=1}^{p}\) such that there exists \(b^{i}\in\mathbb{R}^{m^{i}n^{i}}\) and \(C^{ij}\in\mathbb{R}^{m^{i}n^{i}\times m^{j}n^{j}}\) for each \(i,j\in[p]\) such that_
\[\operatorname{vec}(R^{i})=b^{i}+\sum_{j=1}^{p}C^{ij}\operatorname{vec}(Y^{j}) \tag{6}\]
_for all \(i\in[p]\), where \(Y^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) satisfies (5)._
The affine reward structure in (6) couples different players' decisions together: the reward for a player is not a fixed number, but depends on the other players' state-action frequencies. Similar coupling appears in matrix games where each player has a finite number of candidate options [17, 18]. This function has two parameters: \(b\) pertains to the individual player, and \(C\) captures the coupling between the players.
### _Soft-Bellman Equilibrium_
We now introduce the notion of _soft-Bellman equilibrium_. It extends the notion of quantal response equilibrium in games with deterministic dynamics to Markov games with stochastic dynamics [15, 16]. Unlike Nash equilibrium, it states that all players choose a soft-Bellman policy--rather than the optimal policy that satisfies the Bellman equations--given other players' actions.
**Definition 2**.: _Let \(\{\mathcal{M}^{i}=\{[n^{i}],[m^{i}],q^{i},T^{i},R^{i},\gamma\}\}_{i=1}^{p}\) be an affine Markov game. Let \(\Pi^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) be a stationary policy matrix of player \(i\in[p]\). If there exists \(v^{i}\in\mathbb{R}^{n^{i}}\) and \(Q^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) such that_
\[\Pi^{i}_{sa} =\frac{\exp(Q^{i}_{sa})}{\sum_{j=1}^{m^{i}}\exp(Q^{i}_{sj})}, \tag{7a}\] \[Q^{i}_{sa} =R^{i}_{sa}+\gamma\sum_{j=1}^{n^{i}}T^{i}_{saj}v^{i}_{j},\] (7b) \[v^{i}_{s} =\log\left(\sum_{a=1}^{m^{i}}\exp(Q^{i}_{sa})\right), \tag{7c}\]
_for all \(s\in[n^{i}]\) and \(a\in[m^{i}]\), then \(\{\Pi^{i}\}_{i=1}^{p}\) is a soft-Bellman equilibrium for \(\{\mathcal{M}^{i}\}_{i=1}^{p}\)._
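For a fixed reward matrix \(R^{i}\) (i.e., with the other players' frequencies held fixed), the conditions (7) can be solved by iterating the soft Bellman backup, which is a \(\gamma\)-contraction. The sketch below is a minimal illustration of ours, not the forward solver proposed in Section IV.

```python
import numpy as np

def soft_bellman_policy(R, T, gamma=0.99, iters=5000, tol=1e-10):
    """Fixed-point iteration on (7b)-(7c) for one player; returns the
    soft-Bellman policy (7a) together with Q and v."""
    n, m, _ = T.shape
    v = np.zeros(n)
    for _ in range(iters):
        Q = R + gamma * np.einsum("saj,j->sa", T, v)   # Eq. (7b)
        v_new = np.logaddexp.reduce(Q, axis=1)         # Eq. (7c), stable log-sum-exp
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    Q = R + gamma * np.einsum("saj,j->sa", T, v)
    Pi = np.exp(Q - v[:, None])                        # Eq. (7a), cf. (12a)
    return Pi, Q, v
```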
There is a close connection between the soft-Bellman equilibrium in (2) and the following optimization over state-action frequency matrix:
\[\underset{Y\in\mathbb{R}^{n^{i}\times m^{i}}}{\text{maximize}} \ell^{i}(Y)+h(Y)\] (8) subject to \[\sum_{a=1}^{m^{i}}Y_{sa}=q_{s}^{i}+\gamma\sum_{j=1}^{n^{i}}\sum_{a= 1}^{m^{i}}T_{jas}^{i}Y_{ja},\;\;s\in[n^{i}],\]
where
\[\ell^{i}(Y) \coloneqq\operatorname{vec}(Y)^{\top}b^{i}+\frac{1}{2}\operatorname{vec}(Y)^{\top}C^{ii}\operatorname{vec}(Y)+\sum_{j=1,j\neq i}^{p}\operatorname{vec}(Y)^{\top}C^{ij}\operatorname{vec}(Y^{j}), \tag{9a}\] \[h(Y) \coloneqq\sum_{s=1}^{n^{i}}\sum_{a=1}^{m^{i}}Y_{sa}\left(\log\left(\sum_{j=1}^{m^{i}}Y_{sj}\right)-\log(Y_{sa})\right). \tag{9b}\]
The following theorem shows that, if each player chooses its policy by solving optimization (8), then the resulting policies form a soft-Bellman equilibrium.
**Theorem 1**.: _Let \(\{\mathcal{M}^{i}=\{[n^{i}],[m^{i}],q^{i},T^{i},R^{i},\gamma\}\}_{i=1}^{p}\) be an affine Markov game. Suppose that \(C^{ii}\preceq 0\) and \(Y^{i}\in\mathbb{R}_{>0}^{n^{i}\times m^{i}}\) is an optimal solution of optimization (8) for all \(i\in[p]\). Let \(\Pi^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) be such that_
\[\Pi^{i}_{sa}=\frac{Y^{i}_{sa}}{\sum_{j=1}^{m^{i}}Y^{i}_{sj}} \tag{10}\]
_for all \(i\in[p]\), \(s\in[n^{i}]\), and \(a\in[m^{i}]\). Then \(\{\Pi^{i}\}_{i=1}^{p}\) is a soft-Bellman equilibrium for \(\{\mathcal{M}^{i}\}_{i=1}^{p}\)._
Proof.: First, \(h(Y)\) is a concave function of matrix \(Y\)[14, 19], and \(\ell^{i}(Y)\) is also a concave function of \(Y\) since \(C^{ii}\preceq 0\). Next, by applying the chain rule to (9a) and (9b) we can show the following:
\[\partial_{Y_{sa}}\ell^{i}(Y)=R^{i}_{sa},\;\;\partial_{Y_{sa}}h(Y)=\log\left( \sum_{j=1}^{m}Y_{sj}\right)-\log(Y_{sa}),\]
where \(R^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) satisfies (6). Since \(Y\in\mathbb{R}_{>0}^{n^{i}\times m^{i}}\) is an optimal solution for optimization (8), there exists \(v^{i}\in\mathbb{R}^{n^{i}}\) such that the following Karush-Kuhn-Tucker conditions hold:
\[\sum_{a=1}^{m^{i}}Y^{i}_{sa}=q_{s}^{i}+\gamma\sum_{j=1}^{n^{i}} \sum_{a=1}^{m^{i}}T^{i}_{jas}Y^{i}_{ja}, \tag{11a}\] \[\log(Y^{i}_{sa})-\log\left(\sum_{j=1}^{m^{i}}Y^{i}_{sj}\right)=Q ^{i}_{sa}-v^{i}_{s}, \tag{11b}\]
for all \(s\in[n^{i}]\) and \(a\in[m^{i}]\), where \(Q^{i}_{sa}\) is given by (7b). Let \(\Pi^{i}_{sa}\) be given by (10), then (11b) implies that
\[\Pi^{i}_{sa} =\exp(Q^{i}_{sa}-v^{i}_{s}), \tag{12a}\] \[1 =\sum_{j=1}^{m^{i}}\Pi^{i}_{sj} =\sum_{j=1}^{m^{i}}\exp(Q^{i}_{sj}-v^{i}_{s}), \tag{12b}\]
for all \(s\in[n^{i}]\) and \(a\in[m^{i}]\). By combining (12a) with (12b) one can obtain the condition in (7a). Finally, multiplying both sides of (12b) by \(\exp(v^{i}_{s})\) gives (7c), which completes the proof.
## IV Forward Solution via Nonlinear Least-squares
We now discuss how to compute a soft-Bellman equilibrium by solving a nonlinear least-squares problem, together with the existence and uniqueness of the solution. To this end, we introduce the following notation:
\[l \coloneqq\sum_{i=1}^{p}m^{i}n^{i},\;\;b\coloneqq\left[(b^{1})^{ \top}\quad(b^{2})^{\top}\quad\cdots\quad(b^{p})^{\top}\right]^{\top}, \tag{13}\] \[r \coloneqq\sum_{i=1}^{p}n^{i},\;\;q\coloneqq\left[(q^{1})^{\top} \quad(q^{2})^{\top}\quad\cdots\quad(q^{p})^{\top}\right]^{\top}.\]
Let matrices \(D^{i},E^{i}\in\mathbb{R}^{n^{i}\times m^{i}n^{i}}\) be such that
\[D^{i}=I_{n^{i}}\otimes(\mathbf{1}_{m^{i}}^{\top}),\;\;E^{i}_{kj}=T^{i}_{\text{quo}(j,m^{i})+1,\,\text{rem}(j,m^{i}),\,k}, \tag{14}\]
for all \(k\in[n^{i}]\) and \(j\in[m^{i}n^{i}]\). Furthermore, let
\[H \coloneqq\operatorname{blkdiag}(D^{1}-\gamma E^{1},D^{2}-\gamma E ^{2},\ldots,D^{p}-\gamma E^{p}), \tag{15}\] \[K \coloneqq\operatorname{blkdiag}((D^{1})^{\top}D^{1},(D^{2})^{\top}D ^{2},\ldots,(D^{p})^{\top}D^{p}),\] \[C \coloneqq\begin{bmatrix}C^{11}&C^{12}&\cdots&C^{1p}\\ C^{21}&C^{22}&\cdots&C^{2p}\\ \vdots&\vdots&\ddots&\vdots\\ C^{p1}&C^{p2}&\cdots&C^{pp}\end{bmatrix}.\]
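As a hedged numpy sketch (ours) of the notation above, the per-player blocks \(D^{i}\), \(E^{i}\), and \(D^{i}-\gamma E^{i}\) from (14)-(15) can be assembled as follows, with \(\operatorname{vec}(Y)\) stacked state-by-state so that column \(j\) corresponds to state \(\lfloor j/m\rfloor\) and action \(j\bmod m\) (0-based).

```python
import numpy as np

def flow_matrices(T, gamma=0.99):
    """Build D = I_n (x) 1_m^T and E from Eq. (14) for one player, so that
    the flow constraint reads (D - gamma * E) y = q with y = vec(Y)."""
    n, m, _ = T.shape
    D = np.kron(np.eye(n), np.ones((1, m)))   # row sums of Y per state
    E = np.zeros((n, n * m))
    for j in range(n * m):
        s, a = divmod(j, m)
        E[:, j] = T[s, a, :]                  # E[k, j] = T[s, a, k]
    return D, E, D - gamma * E
```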
With these notations, we are ready to establish the following results on the existence and uniqueness of the soft-Bellman equilibrium.
**Theorem 2**.: _Let \(\{\mathcal{M}^{i}=\{[n^{i}],[m^{i}],q^{i},T^{i},R^{i},\gamma\}\}_{i=1}^{p}\) be an affine Markov game where \(C^{ii}\preceq 0\) for all \(i\in[p]\). Then \(Y^{i}\in\mathbb{R}_{>0}^{n^{i}\times m^{i}}\) is an optimal solution of optimization (8) for all \(i\in[p]\) if and only if there exists \(v\in\mathbb{R}^{r}\) such that_
\[\log(y) =\log(Ky)+b+Cy-H^{\top}v, \tag{16}\] \[Hy =q,\]
_where_
\[y=\left[\operatorname{vec}(Y^{1})^{\top}\quad\operatorname{vec}(Y^{2})^{\top} \quad\cdots\quad\operatorname{vec}(Y^{p})^{\top}\right]^{\top}. \tag{17}\]
_Furthermore, there exists \(y\in\mathbb{R}^{l}\) such that (16) holds for some \(v\in\mathbb{R}^{r}\). If \(C+C^{\top}\preceq 0\), then such a \(y\) is unique._
Proof.: First of all, the conditions (16) are the union of the KKT conditions for optimization (8) for all \(i\in[p]\). Due to the assumption that \(C^{ii}\preceq 0\), \(C+C^{\top}\preceq 0\), and the strict concavity of the logarithm function, one can verify that \(Y^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) is an optimal solution of optimization (8) for all \(i\in[p]\) if and only if \(\{Y^{i}\}_{i=1}^{p}\) is a Nash equilibrium of a \(p\)-player diagonally strictly concave game, which exists and is unique [17].
As a result of Theorem 2, one can compute a soft-Bellman equilibrium by solving the following nonlinear least-squares
problem:
\[\underset{y,v}{\text{minimize}} \left\|\log(Ky)+b+Cy-H^{\top}v-\log(y)\right\|^{2} \tag{18}\] \[+\left\|Hy-q\right\|^{2}\]
Notice that the optimal value of the above optimization is zero, since there exists at least one solution for the nonlinear equations in (16).
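A minimal scipy sketch of the least-squares formulation (18) follows; the paper's implementation uses JuMP and IPOPT in Julia, so this is only an illustration, and the reparameterization \(y=e^{z}\) (our choice) keeps \(y\) strictly positive so the logarithms are well defined.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_forward(b, C, H, K, q, l, r):
    """Find (y, v) zeroing the residuals of the equilibrium conditions (16)."""
    def residuals(z):
        y, v = np.exp(z[:l]), z[l:]                      # y = exp(z) > 0
        r1 = np.log(K @ y) + b + C @ y - H.T @ v - np.log(y)
        r2 = H @ y - q
        return np.concatenate([r1, r2])

    sol = least_squares(residuals, np.zeros(l + r), method="lm")
    return np.exp(sol.x[:l]), sol.x[l:]
```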
## V Inverse Learning via Implicit Differentiation
Given the parameters of an affine Markov game, one can compute a soft-Bellman equilibrium of this game by solving the nonlinear least-squares problem in (18). The question remains, however, of how to infer these parameters such that they best explain observed decisions, a problem also known as the _inverse game_. Next, we answer this question by developing a projected gradient method for parameter calibration.
The inverse game problem is a parameter optimization problem defined as follows. We start with a set of empirically observed equilibrium state-action frequencies,
\[\hat{Y}^{1},\hat{Y}^{2},\ldots,\hat{Y}^{p}, \tag{19}\]
where \(\hat{Y}^{i}\in\mathbb{R}^{n^{i}\times m^{i}}\) is the empirical state-action frequency matrix of player \(i\), whose entry \(\hat{Y}^{i}_{sa}\) corresponds to player \(i\) choosing action \(a\) in state \(s\). Let
\[\hat{y}\coloneqq\left[\operatorname{vec}(\hat{Y}^{1})^{\top}\quad \operatorname{vec}(\hat{Y}^{2})^{\top}\quad\cdots\quad\operatorname{vec}(\hat {Y}^{p})^{\top}\right]^{\top}. \tag{20}\]
To find the best parameters that explain the observed state-action frequency matrices in (20), one can solve the following optimization problem
\[\underset{y,v,b,C}{\text{minimize}} \left\|y-\hat{y}\right\|^{2}\] (21) subject to \[\log(y)=\log(Ky)+b+Cy-H^{\top}v,\] \[Hy=q,\;\;b\in\mathbb{B},\;\;C\in\mathbb{D},\]
where \(\mathbb{B}\subset\mathbb{R}^{l}\) and \(\mathbb{D}\subset\mathbb{R}^{l\times l}\) are closed convex constraint sets for vector \(b\) and matrix \(C\), respectively.
Solving problem (21) is numerically challenging because this optimization contains both nonlinear equation constraints and possible positive semi-definite cone constraints in set \(\mathbb{D}\). As a remedy, we propose an approximate projected gradient method that combines nonlinear equation solving with efficient projections. To this end, we let
\[J=\begin{bmatrix}K\operatorname{diag}(Ky)^{-1}-\operatorname{diag}(y)^{-1}+C&-H^{\top}\\ H&0_{r\times r}\end{bmatrix}. \tag{22}\]
By using the chain rule and the implicit function theorem [10] one can show that, if (16) holds and matrix \(J\) is nonsingular, then
\[\partial_{b}\left\|y-\hat{y}\right\|^{2} =-2\left[(y-\hat{y})^{\top}\quad 0_{r}\right]J^{-1}\begin{bmatrix}I_{l}\\ 0_{r\times l}\end{bmatrix}, \tag{23a}\] \[\partial_{C_{j}}\left\|y-\hat{y}\right\|^{2} =-2y_{j}\left[(y-\hat{y})^{\top}\quad 0_{r}\right]J^{-1} \begin{bmatrix}I_{l}\\ 0_{r\times l}\end{bmatrix}, \tag{23b}\]
for all \(j\in[l]\), where \(C_{j}\in\mathbb{R}^{l}\) is the \(j\)-th column of matrix \(C\). Hence one can compute the approximate gradient for vector \(b\) and matrix \(C\) that locally decreases the value of the objective function in (21) as follows:
\[\tilde{\nabla}_{b}\left\|y-\hat{y}\right\|^{2} \coloneqq-2\begin{bmatrix}I_{l}&0_{l\times r}\end{bmatrix}(J^{ \dagger})^{\top}\begin{bmatrix}y-\hat{y}\\ 0_{r}\end{bmatrix}, \tag{24a}\] \[\tilde{\nabla}_{C}\left\|y-\hat{y}\right\|^{2} \coloneqq-2\begin{bmatrix}I_{l}&0_{l\times r}\end{bmatrix}(J^{ \dagger})^{\top}\begin{bmatrix}y-\hat{y}\\ 0_{r}\end{bmatrix}y^{\top}. \tag{24b}\]
Notice that we approximate \(J^{-1}\) with the Moore-Penrose pseudoinverse \(J^{\dagger}\) in (24). Such an approximation is exact if \(J\) is nonsingular, and still well-defined even if \(J\) is singular or \(J^{-1}\) is numerically unstable to compute.
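The approximate gradients (24) translate directly into numpy; the sketch below (ours) builds \(J\) from (22) and uses the pseudoinverse. For the block structure of \(K\), row- and column-scaling by \(\operatorname{diag}(Ky)^{-1}\) coincide, since \(Ky\) is constant within each state block.

```python
import numpy as np

def approx_gradients(y, y_hat, b, C, H, K, l, r):
    """Approximate gradients (24) of ||y - y_hat||^2 w.r.t. b and C."""
    J = np.block([
        [K / (K @ y)[None, :] - np.diag(1.0 / y) + C, -H.T],   # cf. Eq. (22)
        [H, np.zeros((r, r))],
    ])
    w = np.linalg.pinv(J).T @ np.concatenate([y - y_hat, np.zeros(r)])
    grad_b = -2.0 * w[:l]                    # Eq. (24a)
    grad_C = np.outer(grad_b, y)             # Eq. (24b): grad_C = grad_b y^T
    return grad_b, grad_C
```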
Based on the formulas in (24), we propose an approximate projected gradient method, summarized in Algorithm 1, to solve optimization (21), where we let
\[\text{Proj}_{\text{B}}(b) \coloneqq\underset{z\in\mathbb{B}}{\text{argmin}}\left\|z-b\right\|, \tag{25a}\] \[\text{Proj}_{\text{D}}(C) \coloneqq\underset{X\in\mathbb{D}}{\text{argmin}}\left\|X-C\right\| _{F}, \tag{25b}\]
for all \(b\in\mathbb{R}^{l}\) and \(C\in\mathbb{R}^{l\times l}\). Each iteration of this method first solves the nonlinear least-squares problem in (18), then performs a projected gradient step on \(b\) and \(C\).
```
0: Step size \(\alpha\in\mathbb{R}_{>0}\), number of iterations \(k_{\max}\in\mathbb{N}\), random initial parameters \(b_{init}\in\mathbb{R}^{l},C_{init}\in\mathbb{R}^{l\times l}\), tolerance \(\epsilon\in\mathbb{R}\).
1: Initialize \(k=1\), \(b=b_{init}\), \(C=C_{init}\).
2:while\(k<k_{\max}\)do
3: Solve optimization (18) for \(y\).
4:if change in \(\left\|y-\hat{y}\right\|^{2}<\epsilon\)then
5: terminate.
6:endif
7:\(b\leftarrow\text{Proj}_{\text{B}}(b-\alpha\tilde{\nabla}_{b}\left\|y-\hat{y} \right\|^{2})\)\(\triangleright\) cf. (24a)
8:\(C\leftarrow\text{Proj}_{\text{D}}(C-\alpha\tilde{\nabla}_{C}\left\|y-\hat{y} \right\|^{2})\)\(\triangleright\) cf. (24b)
9:\(k\gets k+1\)
10:endwhile
11:Output: vector \(b\) and matrix \(C\).
```
**Algorithm 1** Approximate projected gradient method.
## VI Experiments
We evaluate the performance of the proposed algorithm against a baseline algorithm that neglects the fact that players' reward functions depend upon each others' actions in a predator-prey OpenAI Gym environment [11]. We solve the forward problem by specifying the nonlinear least-squares problem (18) in Julia [20] using the JuMP [21] interface and the COIN-OR IPOPT [22] optimizer. The source code is publicly available at [https://github.com/vivanchen98/Inverse_MDFQame](https://github.com/vivanchen98/Inverse_MDFQame).
### _Baseline_
The baseline algorithm is a decoupled version of Algorithm 1, that is, it solves optimization (8) with the coupling parameter \(C^{ij}=0\) for all players \(i,j\in[p]\). Dropping this parameter frees the baseline algorithm to solve an optimization for each player independently, similar to many existing multi-agent inverse reinforcement learning algorithms.
### _Algorithm Parameters_
For the projected gradient method in Algorithm 1, we use a backtracking line search technique to fine-tune the step size in lines 7 and 8 based upon the Armijo (sufficient decrease) condition [23]. Each iteration starts with an initial step size \(\alpha=1\), and the algorithm reduces the step size by half until it meets the sufficient decrease condition. Both algorithms terminate when the change in \(\left\|y-\hat{y}\right\|^{2}\) is below a given tolerance \(\epsilon=0.005\). The maximum number of iterations \(k_{\max}\) is \(100\), and the discount factor \(\gamma\) is \(0.99\). We sample the values of the vector \(b_{init}\) and the matrix \(C_{init}\) from a random number generator given a seed. We run both algorithms with seeds \(1\) to \(10\).
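A generic numpy sketch of the backtracking rule described above follows (not the authors' code; the Armijo constant `c` is an illustrative default):

```python
import numpy as np

def armijo_step(f, x, grad, alpha0=1.0, c=1e-4, rho=0.5, max_halvings=30):
    """Start at alpha0 and halve the step until the Armijo sufficient-decrease
    condition holds, as done for lines 7-8 of Algorithm 1."""
    d = -grad                                # steepest-descent direction
    fx = f(x)
    alpha = alpha0
    for _ in range(max_halvings):
        if f(x + alpha * d) <= fx + c * alpha * np.dot(grad, d):
            break
        alpha *= rho
    return alpha
```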
### _Predator-Prey Environment_
We consider a predator-prey environment from a collection of multi-agent environments based on OpenAI Gym [11]. As shown in Fig. 1, two predators attempt to capture one randomly moving prey in a \(5\times 5\) GridWorld. Each predator has observations of all the players and the coordinates of the prey relative to itself and selects one of five actions: left, right, up, down, or stop. The prey is caught when it is within the catching region (light blue cells in Fig. 1) of at least one predator. An episode terminates when the prey is caught by more than one predator (inside a light purple cell in Fig. 1), resulting in a positive reward. For every new episode, the environment initializes the prey into random locations and the prey never moves voluntarily into the predators' neighborhood. In this environment, only the two predators are controllable, but we collect the trajectories of all three players, including the prey, for solving the inverse game problem.
### _Observed Dataset Collection_
We collect all players' trajectories as the observed interactions. Each trajectory is a sequence of states and actions until termination for the current episode. We train a policy using a multi-agent reinforcement learning algorithm [24] and sample trajectories from this policy. The players in this policy exhibit uncertainties in their decision-making process that are difficult to articulate explicitly, much like humans. As a result, the data from these models can serve as a proxy for human datasets.
We process the collected trajectories from all three players by first pruning the ones shorter than the \(50\)th percentile of trajectory lengths and then capping the remaining trajectories to the same length. After processing, we attain \(100\) useful trajectories of length \(6\). We compute the collection of state-action frequencies for all three players \(\hat{y}\) and approximate the initial state distributions and the transition probabilities for all players using the observed data.
### _Numerical Results_
We demonstrate Algorithm 1 and the baseline algorithm on the predator-prey environment introduced in Section VI-C. Fig. 2 shows \(\left\|y-\hat{y}\right\|^{2}\), the squared norm of the difference between the computed state-action frequency \(y\) and the observed state-action frequencies \(\hat{y}\), with respect to the number of iterations. Results show the proposed algorithm terminates within \(31.0\pm 3.6\) iterations, while the baseline algorithm takes \(50.0\pm 31.3\) iterations to terminate. As shown in Fig. 2, the final iterate of the proposed algorithm has \(\left\|y-\hat{y}\right\|^{2}\) below \(1\), while the baseline algorithm on average terminates with a value above \(590.7\). This comparison highlights the importance of accounting for the coupling between the players.

Fig. 1: Two predators (blue) and one prey (red) moving in a 5x5 GridWorld. The light blue cells represent the catching region of the predators, and the light purple cell represents the overlap of both predators' catching regions. This episode terminates when the prey is inside a light purple cell.

Fig. 2: Algorithms for the inverse game problem with termination marked in circles (the lower the better).

Fig. 3: Heatmaps showing the Kullback-Leibler divergence \(D_{\text{KL}}(\Pi_{s}^{i}\parallel\hat{\Pi}_{s}^{i})\) between the equilibrium policy \(\Pi_{s}^{i}\) and the observed policy \(\hat{\Pi}_{s}^{i}\) at each state \(s\) in the GridWorld for all three players. All values are rounded to two decimal places; the smaller (lighter color), the better.
Given a state-action frequency matrix \(Y^{i}\) for player \(i\), we compute the corresponding policy \(\Pi^{i}\) by (10), and denote the equilibrium policy at each state \(s\) as a probability distribution
\[\Pi_{s}^{i}=\left[\Pi_{s,\texttt{left}}^{i}\quad\Pi_{s,\texttt{right}}^{i}\quad\Pi_{s,\texttt{up}}^{i}\quad\Pi_{s,\texttt{down}}^{i}\quad\Pi_{s,\texttt{stop}}^{i}\right].\]
We report the Kullback-Leibler divergence \(D_{\text{KL}}(\Pi_{s}^{i}\parallel\hat{\Pi}_{s}^{i})\) between the equilibrium policy \(\Pi_{s}^{i}\), computed using the proposed algorithm and the baseline algorithm, and the observed policy \(\hat{\Pi}_{s}^{i}\) at each state \(s\) for all three players. Fig. 3 shows that Algorithm 1 arrives at an equilibrium policy closer to the observed policy than the baseline algorithm does.
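The reported per-state divergences can be reproduced from the frequency matrices as in the sketch below (ours); a small \(\varepsilon\) guards against zero probabilities.

```python
import numpy as np

def policy_from_frequencies(Y):
    """Row-normalize a state-action frequency matrix into a policy, Eq. (10)."""
    return Y / Y.sum(axis=1, keepdims=True)

def kl_per_state(Pi, Pi_hat, eps=1e-12):
    """D_KL(Pi_s || Pi_hat_s) for every state s, as visualized in Fig. 3."""
    P, Q = Pi + eps, Pi_hat + eps
    return (P * np.log(P / Q)).sum(axis=1)
```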
## VII Conclusion & Future Work
We proposed soft-Bellman equilibrium as a novel solution concept in affine Markov games, a class of Markov games where an affine reward function couples the players' actions, to capture interactions of boundedly rational players in stochastic, dynamic environments. We provided conditions for the existence and uniqueness of the soft-Bellman equilibrium. We solved the forward problem of computing such an equilibrium for a given affine Markov game and proposed an algorithm to tackle the inverse game problem of inferring players' reward parameters from observed interactions.
Future work should validate the effectiveness of the proposed algorithms using human datasets instead of synthetic datasets. For example, the INTERACTION dataset contains human driving trajectories in interactive traffic scenes [25], and can serve as a more representative dataset for the inverse game problem.
## Acknowledgment
The authors would like to thank Negar Mehr and Xiao Xiang for their constructive feedback.
|
2309.17385 | Dichromatic number of chordal graphs | The dichromatic number of a digraph is the minimum integer $k$ such that it
admits a $k$-dicolouring, i.e. a partition of its vertices into $k$ acyclic
subdigraphs. We say that a digraph $D$ is a super-orientation of an undirected
graph $G$ if $G$ is the underlying graph of $D$. If $D$ does not contain any
pair of symmetric arcs, we just say that $D$ is an orientation of $G$. In this
work, we give both lower and upper bounds on the dichromatic number of
super-orientations of chordal graphs. We also show a family of orientations of
cographs for which the dichromatic number is equal to the clique number of the
underlying graph. | Stéphane Bessy, Frédéric Havet, Lucas Picasarri-Arrieta | 2023-09-29T16:42:42Z | http://arxiv.org/abs/2309.17385v1 | # Dichromatic number of chordal graphs
###### Abstract
The dichromatic number \(\vec{\chi}(D)\) of a digraph \(D\) is the minimum integer \(k\) such that \(D\) admits a \(k\)-dicolouring, _i.e._ a partition of its vertices into \(k\) acyclic subdigraphs. We say that a digraph \(D\) is a super-orientation of an undirected graph \(G\) if \(G\) is the underlying graph of \(D\). If \(D\) does not contain any pair of symmetric arcs, we just say that \(D\) is an orientation of \(G\).
In this work, we give both lower and upper bounds on the dichromatic number of super-orientations of chordal graphs. In general, the dichromatic number of such digraphs is bounded above by the clique number of the underlying graph (because chordal graphs are perfect). However, this bound can be improved when we restrict the symmetric part of such a digraph.
Let \(D=(V,A)\) be a super-orientation of a chordal graph \(G\). Let \(B(D)\) be the undirected graph with vertex set \(V\) in which \(uv\) is an edge if and only if both \(uv\) and \(vu\) belong to \(A\). An easy greedy procedure shows \(\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+\Delta(B(D))}{2}\right\rceil\). We show that this bound is best possible by constructing, for every fixed \(k,\ell\) with \(k\geq\ell+1\), a super-orientation \(D_{k,\ell}\) of a chordal graph \(G_{k,\ell}\) such that \(\omega(G_{k,\ell})=k\), \(\Delta(B(D_{k,\ell}))=\ell\) and \(\vec{\chi}(D_{k,\ell})=\left\lceil\frac{k+\ell}{2}\right\rceil\). When \(\Delta(B(D))=0\) (_i.e._\(D\) is an orientation of \(G\)), we give another construction showing that this is tight even for orientations of interval graphs.
Next, we show that \(\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+O(\sqrt{d\cdot\omega(G)})\) with \(d\) the maximum average degree of \(B(D)\).
Finally, we show that if \(B(D)\) contains no \(C_{4}\) as a subgraph, then \(\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+3}{2}\right\rceil\). We justify that this is almost best possible by constructing, for every fixed \(k\), a super-orientation \(D_{k}\) of a chordal graph \(G_{k}\) with clique number \(k\) such that \(B(D_{k})\) is a disjoint union of paths and \(\vec{\chi}(D_{k})=\left\lfloor\frac{k+3}{2}\right\rfloor\).
We also show a family of orientations of cographs for which the dichromatic number is equal to the clique number of the underlying graph.
\({}^{1}\) LIRMM, Univ Montpellier, CNRS, Montpellier, France
[email protected]
\({}^{2}\) CNRS, Universite Cote d'Azur, I3S, Inria, Sophia-Antipolis, France
{frederic.havet,lucas.picasarri-arrieta}@inria.fr
## 1 Introduction
We denote by \([k]\) the set \(\{1,\ldots,k\}\). Given an undirected graph \(G=(V,E)\) and a positive integer \(k\), a \(k\)_-colouring_ of \(G\) is a function \(\alpha:V\to[k]\). It is _proper_ if, for every edge \(xy\in E\), we have \(\alpha(x)\neq\alpha(y)\). So, for every \(i\in[k]\), \(\alpha^{-1}(i)\) induces an independent set on \(G\). The _chromatic number_ of \(G\), denoted by \(\chi(G)\), is the smallest \(k\) such that \(G\) admits a proper \(k\)-colouring. An undirected graph is _chordal_ if it does not contain any induced cycle of length at least \(4\). Proper colourings of chordal graphs have been largely studied and it is well-known that chordal graphs are perfect. Recall that a graph \(G\) is _perfect_ if every induced subgraph \(H\) of \(G\) satisfies \(\chi(H)=\omega(H)\), where \(\omega(H)\) denotes the size of a largest clique in \(H\).
We refer the reader to [5] for notation and terminology on digraphs not explicitly defined in this paper. Let \(D=(V,A)\) be a digraph. A _digon_ is a pair of arcs in opposite directions between the same vertices. A _simple arc_ is an arc which is not in a digon. An _oriented graph_ is a digraph with no digon. The _bidirected graph_ associated with a graph \(G\), denoted by \(\overleftarrow{G}\), is the digraph obtained from \(G\) by replacing every edge by a digon. The _underlying graph_ of \(D\), denoted by \(\operatorname{UG}(D)\), is the undirected graph with vertex set \(V(D)\) in which \(uv\) is an edge if and only if \(uv\) or \(vu\) is an arc of \(D\). We say that \(D\) is a _super-orientation_ of \(\operatorname{UG}(D)\), and it is an _orientation_ of \(\operatorname{UG}(D)\) if \(D\) is an oriented graph. A _tournament_ on \(n\) vertices is an orientation of the complete graph on \(n\) vertices. The _bidirected graph_ of \(D\), denoted by \(B(D)\), is the undirected graph with vertex set \(V(D)\) in which \(uv\) is an edge if and only if \(uv\) is a digon of \(D\). We denote by \(\overleftarrow{\omega}(D)\) the size of a largest bidirected clique of \(D\), _i.e._ the size of the largest clique of \(B(D)\).
In 1982, Neumann-Lara [14] introduced the notions of dicolouring and dichromatic number, which generalize the ones of proper colouring and chromatic number. For a positive integer \(k\), a _\(k\)-colouring_ of \(D=(V,A)\) is a function \(\alpha:V\to[k]\). It is a _\(k\)-dicolouring_ if \(\alpha^{-1}(i)\) induces an acyclic subdigraph in \(D\) for each \(i\in[k]\). In other words, no directed cycle of \(D\) is monochromatic in \(\alpha\). The _dichromatic number_ of \(D\), denoted by \(\vec{\chi}(D)\), is the smallest \(k\) such that \(D\) admits a \(k\)-dicolouring.
There is a one-to-one correspondence between the proper \(k\)-colourings of a graph \(G\) and the \(k\)-dicolourings of its associated bidirected graph \(\overleftarrow{G}\), and in particular \(\chi(G)=\vec{\chi}(\overleftarrow{G})\). Hence every result on proper colouring of undirected graphs can be seen as a result on dicolouring of bidirected graphs, and it is natural to study whether the result can be extended to all digraphs. Indeed, a lot of classical results on graph proper colourings have already been extended to digraph dicolouring. For instance, Brooks' Theorem (Brooks [7]) has been generalised to digraphs by Harutyunyan and Mohar in [11] (see also [1]). Another example is the celebrated Strong Perfect Graph Theorem (Chudnovsky, Robertson, Seymour and Thomas [8]) extended to digraphs by Andres and Hochstättler in [3] (the proof is strongly based on the result of Chudnovsky et al.). A digraph \(D\) is _perfect_ if \(\vec{\chi}(H)=\overleftarrow{\omega}(H)\) for every induced subdigraph \(H\) of \(D\).
**Theorem 1** (Andres and Hochstättler [3]).: _A digraph \(D\) is perfect if and only if \(B(D)\) is perfect and \(D\) does not contain an induced directed cycle of length at least 3._
We refer the interested reader to [13], in which the authors define a class of chordal digraphs, which extends the class of undirected chordal graphs. One can easily prove that every digraph \(D\) in this class is actually a perfect digraph, so it satisfies \(\vec{\chi}(D)=\overleftarrow{\omega}(D)\) by Theorem 1.
In this work, we look for lower and upper bounds on the dichromatic number of orientations and super-orientations of chordal graphs. Dicolourings of such digraphs have also been studied in [2], in which the authors characterise exactly the digraphs \(H\) for which there exists \(c_{H}\in\mathbb{N}\) such that every oriented chordal graph \(\vec{G}\) with \(\vec{\chi}(\vec{G})\geq c_{H}+1\) contains \(H\) as an induced subdigraph.
The first interesting class of such digraphs is that of tournaments, for which the question has been settled by Erdős, Gimbel and Kratsch in [10]. They showed that the dichromatic number of a tournament \(T\) on \(n\) vertices is always at most \(O\left(\frac{n}{\log n}\right)\), and that this bound is tight (up to a constant factor). One can ask if this result is true not only for tournaments but for all orientations of chordal graphs. That is, do we always have \(\vec{\chi}(\vec{G})=O\left(\frac{\omega(G)}{\log\omega(G)}\right)\) when \(\vec{G}\) is an orientation of a chordal graph \(G\)? We answer this in the negative. Indeed, we show in Section 3 that it is not even true for orientations of interval graphs. Recall that an _interval graph_ is obtained from a set of intervals on the real line: the intervals are the vertices and there is an edge between two intervals if and only if they intersect. It is well-known that interval graphs are chordal.
**Theorem 2**.: _For every fixed \(k\in\mathbb{N}\), there exists an interval graph \(G_{k}\) and an orientation \(\vec{G}_{k}\) of this graph such that \(\omega(G_{k})=k\) and \(\vec{\chi}(\vec{G}_{k})\geq\lceil\frac{k}{2}\rceil\)._
On the positive side, if \(\vec{G}\) is the orientation of a proper interval graph \(G\) (which is an interval graph where each interval has length exactly one), then \(\vec{\chi}(\vec{G})=O\left(\frac{\omega(G)}{\log(\omega(G))}\right)\), as proved in [2]. The key idea is that \(G\) admits a partition \((V_{1},V_{2})\) of its vertex-set such that both \(G\langle V_{1}\rangle\) and \(G\langle V_{2}\rangle\) are disjoint unions of cliques.
Another well-known class of perfect graphs is the one of cographs. The _join_ of two undirected graphs \(G_{1}\) and \(G_{2}\) is the graph built from the disjoint union of \(G_{1}\) and \(G_{2}\) where all edges between vertices of \(G_{1}\) and vertices of \(G_{2}\) are added. Cographs form the smallest class of graphs containing the single-vertex graph that is closed under disjoint union and the join operation. One can easily prove that the oriented graphs built in the proof of Theorem 2 are indeed orientations of cographs. In Section 4, we improve this result for cographs in general.
**Theorem 3**.: _For every fixed \(k\in\mathbb{N}\), there exists a cograph \(G_{k}\) and an orientation \(\vec{G}_{k}\) of this graph such that \(\vec{\chi}(\vec{G}_{k})=\omega(G_{k})=k\)._
Next we consider super-orientations of chordal graphs. If \(D\) is a super-orientation of a chordal graph \(G\), then obviously \(\vec{\chi}(D)\leq\omega(G)\) because \(\vec{\chi}(D)\leq\chi(G)=\omega(G)\). Note that we cannot expect any improvement of this bound in general, because if \(D\) is the bidirected graph \(\overleftrightarrow{G}\) then \(\vec{\chi}(D)=\omega(G)\). But one can ask what happens if we restrict the structure of \(B(D)\), the bidirected graph of \(D\).
In Section 5, we consider digraphs for which the bidirected graph has bounded maximum degree. Using the degeneracy of the underlying graph, we show the following easy proposition.
**Proposition 4**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). Then_
\[\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+\Delta(B(D))}{2}\right\rceil.\]
This proposition is best possible when \(\Delta(B(D))=0\) by Theorem 2. In the following, we show that it is indeed best possible for every fixed value of \(\Delta(B(D))\).
**Theorem 5**.: _For every fixed \(k,\ell\in\mathbb{N}\) such that \(k\geq\ell+1\), there exists a chordal graph \(G_{k,\ell}\) and a super-orientation \(D_{k,\ell}\) of \(G_{k,\ell}\) such that \(\omega(G_{k,\ell})=k\), \(\Delta(B(D_{k,\ell}))=\ell\) and \(\vec{\chi}(D_{k,\ell})=\left\lceil\frac{k+\ell}{2}\right\rceil\)._
The _maximum average degree_ of an undirected graph \(G\) is \(\mathrm{Mad}(G)=\max\left\{\frac{2|E(H)|}{|V(H)|}\mid H\text{ subgraph of }G\right\}\). In Section 6, we show the following bound on digraphs \(D\) for which \(\mathrm{Mad}(B(D))\) is bounded.
**Theorem 6**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). If \(\mathrm{Mad}(B(D))\leq d\), then_
\[\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+O(\sqrt{d\cdot\omega(G)}).\]
Finally in Section 7 we show the following bound on super-orientations \(D\) of chordal graphs that do not contain \(\overleftrightarrow{C_{4}}\).
**Theorem 7**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). If \(B(D)\) is \(C_{4}\)-free, then_
\[\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+3}{2}\right\rceil.\]
We also prove that the bound of Theorem 7 is almost tight by proving the following.
**Theorem 8**.: _For every fixed \(k\geq 3\) and every \(n\in\mathbb{N}\), there exists a super-orientation \(D_{k,n}\) of a chordal graph \(G_{k,n}\) on at least \(n\) vertices such that \(B(D_{k,n})\) is a disjoint union of paths, \(\omega(G_{k,n})=k\) and \(\vec{\chi}(D_{k,n})=\left\lfloor\frac{k+3}{2}\right\rfloor\)._
A _tree-decomposition_ of a graph \(G=(V,E)\) is a pair \((T,\mathcal{X})\) where \(T=(I,F)\) is a tree, and \(\mathcal{X}=(B_{i})_{i\in I}\) is a family of subsets of \(V(G)\), called _bags_ and indexed by the vertices of \(T\), such that:
1. each vertex \(v\in V\) appears in at least one bag, _i.e._\(\bigcup_{i\in I}B_{i}=V\),
2. for each edge \(e=xy\in E\), there is an \(i\in I\) such that \(x,y\in B_{i}\), and
3. for each \(v\in V\), the set of nodes indexed by \(\{i\mid i\in I,v\in B_{i}\}\) forms a subtree of \(T\).
The _width_ of a tree decomposition is defined as \(\max_{i\in I}\{|B_{i}|-1\}\). The _treewidth_ of \(G\), denoted by \(\operatorname{tw}(G)\), is the minimum width of a tree-decomposition of \(G\). It is well-known that every graph \(G\) is a subgraph of a chordal graph \(G^{\prime}\) with \(\omega(G^{\prime})=\operatorname{tw}(G)+1\). Hence the following is a direct consequence of Proposition 4 and Theorems 6 and 7.
**Corollary 9**.: _Let \(D\) be a super-orientation of \(G\). Then we have:_
* \(\vec{\chi}(D)\leq\left\lceil\frac{\operatorname{tw}(G)+\Delta(B(D))+1}{2}\right\rceil\)_, and_
* \(\vec{\chi}(D)\leq\frac{1}{2}\operatorname{tw}(G)+O(\sqrt{\operatorname{Mad}( B(D))\cdot\operatorname{tw}(G)})\)_, and_
* \(\vec{\chi}(D)\leq\left\lceil\frac{\operatorname{tw}(G)+4}{2}\right\rceil\) _if_ \(B(D)\) _is_ \(C_{4}\)_-free._
## 2 Definitions and preliminary results
Let \(G=(V,E)\) be an undirected graph. A _perfect elimination ordering_ of \(G\) is an ordering \(v_{1},\ldots,v_{n}\) of its vertex-set such that, for every \(i\in[n]\), the subgraph of \(G\) induced by \(N(v_{i})\cap\{v_{i+1},\ldots,v_{n}\}\) is a clique.
**Proposition 10** (Folklore).: _A graph \(G\) is chordal if and only if \(G\) admits a perfect elimination ordering._
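Proposition 10 yields a simple certificate of chordality: an ordering is a perfect elimination ordering exactly when the later neighbours of every vertex are pairwise adjacent. A minimal sketch of this check (graphs represented as adjacency-set dictionaries; the representation is our own choice):

```python
def is_perfect_elimination_ordering(adj, order):
    """Check whether `order` is a perfect elimination ordering of the
    graph given by `adj`, a dict mapping each vertex to its neighbour set."""
    position = {v: i for i, v in enumerate(order)}
    for v in order:
        # Neighbours of v that come later in the ordering.
        later = [u for u in adj[v] if position[u] > position[v]]
        # They must be pairwise adjacent, i.e. induce a clique.
        for i in range(len(later)):
            for j in range(i + 1, len(later)):
                if later[j] not in adj[later[i]]:
                    return False
    return True
```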
**Proposition 11** (Folklore).: _The treewidth of a chordal graph \(G\) is exactly \(\omega(G)-1\)._
A tree-decomposition \((T,\mathcal{X})\) is _reduced_ if, for every \(tt^{\prime}\in E(T)\), \(X_{t}\setminus X_{t^{\prime}}\) and \(X_{t^{\prime}}\setminus X_{t}\) are non-empty. It is easy to see that any graph \(G\) admits an optimal (i.e., of width \(\operatorname{tw}(G)\)) tree-decomposition which is reduced (indeed, if \(X_{t}\subseteq X_{t^{\prime}}\) for some edge \(tt^{\prime}\in E(T)\), then contract this edge and remove \(X_{t}\) from \(\mathcal{X}\)).
A tree-decomposition \((T,\mathcal{X})\) of a graph \(G\) of width \(k\geq 0\) is _full_ if every bag has size exactly \(k+1\). It is _valid_ if \(|X_{t}\setminus X_{t^{\prime}}|=|X_{t^{\prime}}\setminus X_{t}|=1\) for every \(tt^{\prime}\in E(T)\). Note that any valid tree-decomposition is full and reduced.
The following result is well-known, see for instance [6]. We give here a short proof for sake of completeness.
**Lemma 12**.: _Every graph \(G=(V,E)\) admits a valid tree-decomposition of width \(\operatorname{tw}(G)\)._
Proof.: Let \((T,\mathcal{X})\) be an optimal reduced tree-decomposition of \(G=(V,E)\), which exists by the remark above the lemma. We will progressively modify \((T,\mathcal{X})\) in order to make it first full and then valid.
While the current decomposition is not full, let \(tt^{\prime}\in E(T)\) such that \(|X_{t}|<|X_{t^{\prime}}|=\operatorname{tw}(G)+1\) and let \(v\in X_{t^{\prime}}\setminus X_{t}\). Add \(v\) to \(X_{t}\). The obtained decomposition is still a tree-decomposition. Moreover, the updated decomposition remains reduced all along the process: since \(|X_{t}|<|X_{t^{\prime}}|\) and the initial decomposition is reduced, \(X_{t^{\prime}}\) must contain another vertex \(u\neq v\) with \(u\notin X_{t}\). At the end of the process, we obtain an optimal decomposition \((T,\mathcal{X})\) that is full.
Now, while \((T,\mathcal{X})\) is not valid, let \(tt^{\prime}\in E(T)\), \(x,y\in X_{t}\setminus X_{t^{\prime}}\) and \(u,v\in X_{t^{\prime}}\setminus X_{t}\) (such an edge of \(T\) and four distinct vertices of \(V\) must exist since \((T,\mathcal{X})\) is full and reduced but not valid). Then, add
a new node \(t^{\prime\prime}\) to \(T\), with corresponding bag \(X_{t^{\prime\prime}}=(X_{t^{\prime}}\setminus\{u\})\cup\{x\}\) and replace the edge \(tt^{\prime}\) in \(T\) by the two edges \(tt^{\prime\prime}\) and \(t^{\prime\prime}t^{\prime}\). Clearly, subdividing the edge \(tt^{\prime}\) by adding a bag \(X_{t^{\prime\prime}}=X_{t^{\prime}}\setminus\{u\}\cup\{x\}\) still leads to an optimal full tree-decomposition of the same width.
Note that, after the application of each step as described above, either the maximum of \(|X_{t}\setminus X_{t^{\prime}}|\) over all edges \(tt^{\prime}\in E(T)\), or the number of edges \(tt^{\prime}\in E(T)\) that maximize \(|X_{t}\setminus X_{t^{\prime}}|\), strictly decreases, and none of these two quantities increases. Therefore, the process terminates, and eventually \((T,\mathcal{X})\) becomes an optimal valid tree-decomposition.
Let \(D_{1}\) and \(D_{2}\) be two digraphs. Let \(u_{1}v_{1}\) be an arc of \(D_{1}\) and \(v_{2}u_{2}\) be an arc of \(D_{2}\). The _directed Hajós join_ of \(D_{1}\) and \(D_{2}\), denoted by \(D_{1}\triangledown D_{2}\), is the digraph obtained from the union \(D_{1}\cup D_{2}\) by deleting the arcs \(u_{1}v_{1}\) as well as \(v_{2}u_{2}\), identifying the vertices \(v_{1}\) and \(v_{2}\) into a new vertex \(v\) and adding the arc \(u_{1}u_{2}\).
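A minimal sketch of this operation on digraphs represented as sets of (tail, head) arcs. The representation and identifier names are our own; we assume the two vertex sets are disjoint and that the label `"v"` is fresh.

```python
def hajos_join(arcs1, u1, v1, arcs2, u2, v2):
    """Directed Hajos join: (u1, v1) is an arc of the first digraph and
    (v2, u2) an arc of the second. Returns the arc set of the join, with
    v1 and v2 identified into a single new vertex 'v'."""
    assert (u1, v1) in arcs1 and (v2, u2) in arcs2
    merged = lambda x: "v" if x in (v1, v2) else x
    # Delete the two distinguished arcs, then identify v1 and v2.
    arcs = {(merged(a), merged(b)) for a, b in (arcs1 | arcs2)
            if (a, b) not in {(u1, v1), (v2, u2)}}
    arcs.add((u1, u2))  # the new arc between the former arc endpoints
    return arcs
```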
**Theorem 13** (Bang-Jensen et al. [4] (see also [12])).: _Let \(D_{1}\) and \(D_{2}\) be two digraphs, then_
\[\vec{\chi}(D_{1}\triangledown D_{2})\geq\min\{\vec{\chi}(D_{1}),\vec{\chi}(D_ {2})\}.\]
## 3 Orientations of interval graphs with large dichromatic number
This section is devoted to the proof of Theorem 2.
**Theorem 2**.: _For every fixed \(k\in\mathbb{N}\), there exists an interval graph \(G_{k}\) and an orientation \(\vec{G}_{k}\) of this graph such that \(\omega(G_{k})=k\) and \(\vec{\chi}(\vec{G}_{k})\geq\lceil\frac{k}{2}\rceil\)._
Proof.: Let us fix \(k\in\mathbb{N}\); we will build an orientation \(D_{k}\) of an interval graph \(G_{k}\) such that \(\omega(G_{k})=k\) and \(\vec{\chi}(D_{k})\geq\left\lceil\frac{k}{2}\right\rceil\).
We start from one interval \(I_{1}^{1}\). Then, for every \(i\) from \(2\) to \(k\), we do the following: for each interval \(I_{i-1}^{s}\) we added at step \(i-1\), we add \(2^{i-1}\) new pairwise disjoint intervals whose union is included in \(I_{i-1}^{s}\), and we associate to each of these new intervals \(I_{i}^{\ell}\) a distinct binary number \(b_{i}^{\ell}\) on \(i-1\) bits. By construction, every new interval intersects exactly \(i-1\) other intervals (one for each step).
Let \(G_{k}\) be the interval graph made of the intervals built above. By construction, \(\omega(G_{k})=k\). Now we consider \(D_{k}\) the orientation of \(G_{k}\) defined as follows. For every pair \(j<i\), we orient the edge \(I_{j}^{s}I_{i}^{\ell}\) from \(I_{i}^{\ell}\) to \(I_{j}^{s}\) if the \(j^{\text{th}}\) bit of \(b_{i}^{\ell}\) is \(1\), and from \(I_{j}^{s}\) to \(I_{i}^{\ell}\) otherwise. Figure 1 illustrates the construction of \(D_{3}\).
Let us prove that \(\vec{\chi}(D_{k})\geq\lceil\frac{k}{2}\rceil\). To do this, let \(\varphi\) be any optimal dicolouring of \(D_{k}\). We will find a tournament \(T\) of size \(k\) in \(D_{k}\) such that, for each colour \(c\) in \(\varphi\), \(c\) appears at most twice in \(T\). This will prove that \(\varphi\) uses at least \(\lceil\frac{k}{2}\rceil\) colours, implying the result.
Start from the universal vertex \(I_{1}^{1}\). Then, for \(i\in\{2,\ldots,k\}\), we do the following: let \(I_{i-1}^{s}\) be the last vertex added to \(T\); we extend \(T\) with a vertex \(I_{i}^{\ell}\) such that \(I_{i}^{\ell}\subseteq I_{i-1}^{s}\). For each colour \(c\) of \(\varphi\) that appears exactly twice in \(T\), let \(x_{c}y_{c}\) be a monochromatic arc of \(T\) coloured \(c\). Then we choose \(I_{i}^{\ell}\) so
Figure 1: The oriented interval graph \(D_{3}\) (bits of \(b_{i}^{\ell}\) are read from left to right).
that, for each such colour \(c\), \(x_{c}y_{c}I_{i}^{\ell}\) is a directed triangle. The existence of \(I_{i}^{\ell}\) is guaranteed by construction. This implies that the colour of \(I_{i}^{\ell}\) in \(\varphi\) appears at most twice in \(T\).
## 4 Orientations of cographs with large dichromatic number
This section is devoted to the proof of Theorem 3.
**Theorem 3**.: _For every fixed \(k\in\mathbb{N}\), there exists a cograph \(G_{k}\) and an orientation \(\vec{G}_{k}\) of this graph such that \(\vec{\chi}(\vec{G}_{k})=\omega(G_{k})=k\)._
Proof.: We define \(\vec{G}_{1}\) as the only orientation of \(G_{1}\), the graph on one vertex. We obviously have \(\vec{\chi}(\vec{G}_{1})=\omega(G_{1})=1\), and \(G_{1}\) is a cograph.
Let us fix \(k\geq 1\); we build \(\vec{G}_{k+1}\) from \(\vec{G}_{k}\) as follows. Start from \(k+1\) disjoint copies \(\vec{G}_{k}^{1},\ldots,\vec{G}_{k}^{k+1}\) of \(\vec{G}_{k}\) and \(k+1\) new vertices \(v_{1},\ldots,v_{k+1}\). Then, for every \(i\in[k+1]\), we add all arcs from \(v_{i}\) to \(V(\vec{G}_{k}^{i})\) and all arcs from \(\bigcup_{j\neq i}V(\vec{G}_{k}^{j})\) to \(v_{i}\). Let \(\vec{G}_{k+1}\) be the obtained oriented graph and \(G_{k+1}\) be its underlying graph. Figure 2 illustrates the construction of \(\vec{G}_{3}\).
Note first that \(G_{k+1}\) is a cograph: the disjoint union of \(G_{k}^{1},\ldots,G_{k}^{k+1}\) is a cograph, the independent set \(v_{1},\ldots,v_{k+1}\) is a cograph, and \(G_{k+1}\) is the join of these two cographs. Let us prove by induction on \(k\) that \(\vec{\chi}(\vec{G}_{k})=\omega(G_{k})=k\). For \(k=1\), the result is immediate, so assume it holds for some \(k\geq 1\). Note first that \(\omega(G_{k+1})=k+1\) since every clique of \(G_{k+1}\) contains at most one vertex of \(\{v_{1},\ldots,v_{k+1}\}\) and does not contain two vertices from distinct copies of \(G_{k}\). So every maximum clique of \(G_{k+1}\) is made of a maximum clique of \(G_{k}\) and one additional vertex \(v_{i}\).
Moreover \(\vec{\chi}(\vec{G}_{k+1})\leq\chi(G_{k+1})=\omega(G_{k+1})=k+1\). Let us now show that the dichromatic number of \(\vec{G}_{k+1}\) is at least \(k+1\). Assume for the purpose of contradiction that \(\vec{G}_{k+1}\) admits a \(k\)-dicolouring \(\varphi\). Then there exist \(i\neq j\) such that \(\varphi(v_{i})=\varphi(v_{j})\). Since \(\vec{\chi}(\vec{G}_{k})\geq k\), there exist \(x\in V(\vec{G}_{k}^{i})\) and \(y\in V(\vec{G}_{k}^{j})\) such that \(\varphi(x)=\varphi(y)=\varphi(v_{i})=\varphi(v_{j})\). Hence \(v_{i}xv_{j}yv_{i}\) is a monochromatic \(\vec{C}_{4}\) of \(\vec{G}_{k+1}\) coloured with \(\varphi\), a contradiction.
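For small \(k\) the construction can be checked mechanically. Below is a brute-force sketch (representation and names are our own): for \(k=2\) the construction reduces to the directed four-cycle \(v_{1}x_{1}v_{2}x_{2}v_{1}\), whose dichromatic number is indeed \(2\).

```python
from itertools import product

def is_acyclic(vertices, arcs):
    """Kahn-style check that the subdigraph induced by `vertices` is acyclic."""
    vs = set(vertices)
    indeg = {v: 0 for v in vs}
    for a, b in arcs:
        if a in vs and b in vs:
            indeg[b] += 1
    stack = [v for v in vs if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for a, b in arcs:
            if a == v and b in vs:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen == len(vs)

def dichromatic_number(vertices, arcs):
    """Smallest k such that some k-colouring has all colour classes acyclic."""
    for k in range(1, len(vertices) + 1):
        for col in product(range(k), repeat=len(vertices)):
            classes = [[v for v, c in zip(vertices, col) if c == i]
                       for i in range(k)]
            if all(is_acyclic(cls, arcs) for cls in classes):
                return k

# Theorem 3 construction for k = 2: a directed four-cycle.
V = ["v1", "x1", "v2", "x2"]
A = {("v1", "x1"), ("x1", "v2"), ("v2", "x2"), ("x2", "v1")}
assert dichromatic_number(V, A) == 2
```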
## 5 Super-orientations of chordal graphs with a bidirected graph having bounded maximum degree
This section is devoted to the proofs of Proposition 4 and Theorem 5.
**Proposition 4**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). Then_
\[\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+\Delta(B(D))}{2}\right\rceil.\]
Proof.: Let \(v_{1},\ldots,v_{n}\) be a perfect elimination ordering of \(G\) (which exists by Proposition 10). Then, in \(G\), every vertex \(v_{i}\) has at most \(\omega(G)-1\) neighbours in \(\{v_{i+1},\ldots,v_{n}\}\). Hence, in \(D(\{v_{i},\ldots,v_{n}\})\), \(d^{+}(v_{i})+d^{-}(v_{i})\leq\omega(G)-1+\Delta(B(D))\).
Thus, considering the vertices from \(v_{n}\) to \(v_{1}\), we can greedily find a dicolouring of \(D\) using at most \(\left\lceil\frac{\omega(G)+\Delta(B(D))}{2}\right\rceil\) colours, by choosing for \(v_{i}\) a colour that does not appear in \(N^{+}(v_{i})\cap\{v_{i+1},\ldots,v_{n}\}\) or in \(N^{-}(v_{i})\cap\{v_{i+1},\ldots,v_{n}\}\).
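A minimal sketch of this greedy procedure, assuming the digraph is given by out- and in-neighbour sets and that a perfect elimination ordering of the underlying graph is supplied. Equivalently to the choice above, a colour is forbidden at \(v\) only if it already appears both on an out-neighbour and on an in-neighbour among the later vertices, since only then could \(v\) close a monochromatic directed cycle.

```python
def greedy_dicolouring(order, out_nbrs, in_nbrs):
    """Colour vertices from the end of a perfect elimination ordering."""
    colour = {}
    for v in reversed(order):
        # Colours already used on later out- and in-neighbours of v.
        out_cols = {colour[u] for u in out_nbrs[v] if u in colour}
        in_cols = {colour[u] for u in in_nbrs[v] if u in colour}
        # A colour is unsafe only if it appears on both sides.
        forbidden = out_cols & in_cols
        c = 0
        while c in forbidden:
            c += 1
        colour[v] = c
    return colour
```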
**Theorem 5**.: _For every fixed \(k,\ell\in\mathbb{N}\) such that \(k\geq\ell+1\), there exists a chordal graph \(G_{k,\ell}\) and a super-orientation \(D_{k,\ell}\) of \(G_{k,\ell}\) such that \(\omega(G_{k,\ell})=k\), \(\Delta(B(D_{k,\ell}))=\ell\) and \(\vec{\chi}(D_{k,\ell})=\left\lceil\frac{k+\ell}{2}\right\rceil\)._
Proof.: Let us fix \(\ell\in\mathbb{N}\). We define \(D_{\ell+1,\ell}\) as the bidirected complete digraph on \(\ell+1\) vertices. Note that \(D_{\ell+1,\ell}\) clearly satisfies the desired properties.
Then, for every \(k\geq\ell+2\), we iteratively build \(D_{k,\ell}\) from \(D_{k-1,\ell}\) or \(D_{k-2,\ell}\) as follows:
* If \(k+\ell\) is even, we just add a dominating vertex to \(D_{k-1,\ell}\) to construct \(D_{k,\ell}\). We obtain that \(\omega(\text{UG}(D_{k,\ell}))=1+\omega(\text{UG}(D_{k-1,\ell}))=k\), \(\Delta(B(D_{k,\ell}))=\Delta(B(D_{k-1,\ell}))=\ell\) and \(\vec{\chi}(D_{k,\ell})=\vec{\chi}(D_{k-1,\ell})=\left\lceil\frac{k+\ell-1}{2} \right\rceil=\left\lceil\frac{k+\ell}{2}\right\rceil\) (the last equality holds because \(k+\ell\) is even).
* If \(k+\ell\) is odd (implying that \(k\) is at least \(\ell+3\)), we start from \(T\), a copy of \(TT_{\frac{k+\ell+1}{2}}\), the transitive tournament on \(\frac{k+\ell+1}{2}\) vertices. Note that \(\frac{k+\ell+1}{2}\leq k-1\) because \(k\geq\ell+3\). For each arc \(xy\) in \(T\), we add a copy \(D^{xy}\) of \(D_{k-2,\ell}\) with all arcs from \(y\) to \(D^{xy}\) and all arcs from \(D^{xy}\) to \(x\). Let \(D_{k,\ell}\) be the obtained digraph. First, \(\text{UG}(D_{k,\ell})\) is chordal because it has a perfect elimination ordering: we first eliminate each copy \(D^{xy}\) of \(D_{k-2,\ell}\), which is possible because \(\text{UG}(D_{k-2,\ell})\) is chordal, and \(x,y\) are adjacent to every vertex of \(D^{xy}\). When every copy of \(D_{k-2,\ell}\) is eliminated, the remaining digraph is \(T\), which is clearly chordal because it is a tournament. Next, we have \(\omega(\text{UG}(D_{k,\ell}))=\max(\omega(\text{UG}(T)),\omega(\text{UG}(D_{k -2,\ell}))+2)=k\), and \(\Delta(B(D_{k,\ell}))=\Delta(B(D_{k-2,\ell}))=\ell\). Finally, let us show that \(\vec{\chi}(D_{k,\ell})\geq\frac{k+\ell+1}{2}\) (the equality then comes from Proposition 4). In order to get a contradiction, assume that \(\varphi\) is a dicolouring of \(D_{k,\ell}\) that uses at most \(\frac{k+\ell-1}{2}\) colours. We know by induction that each copy of \(D_{k-2,\ell}\) uses all the colours in \(\varphi\). Since \(T\) is a tournament on \(\frac{k+\ell+1}{2}\) vertices, we know that it must contain a monochromatic arc \(xy\). Now let \(z\) be a vertex in \(D^{xy}\) such that \(\varphi(x)=\varphi(y)=\varphi(z)\), then \(xyz\) is a monochromatic triangle, a contradiction.
Figure 3 illustrates the construction of \(D_{1,0}\), \(D_{3,0}\) and \(D_{5,0}\).
## 6 Super-orientations of chordal graphs with a bidirected graph having bounded maximum average degree
This section is devoted to the proof of Theorem 6. We first need to prove the following.
**Lemma 14**.: _Let \(G=(V,E)\) be a chordal graph. There exists an ordering \(a_{1},\ldots,a_{n}\) of \(V\) such that for any \(k\in[n]\) :_
\[|N(a_{k})|\leq\omega(G)+k-2\tag{P1}\]
and
\[\left|\bigcup_{i=1}^{k}N[a_{i}]\right|\leq\omega(G)+2k-1.\tag{P2}\]
Proof.: Let \((T=(I,F),\mathcal{X}=(B_{u})_{u\in I})\) be a valid tree-decomposition of \(G\) of width \(\omega(G)-1\), which exists by Lemma 12 (recall that \(\operatorname{tw}(G)=\omega(G)-1\) by Proposition 11). One can easily show that, since \(T\) is valid, \(|I|=n-\omega(G)+1\) (see [6, Lemma 2.5]).
Let \(P=u_{0},\ldots,u_{r}\) be a longest path in \(T\). We root \(T\) in \(u_{r}\). For any vertex \(u\) of \(T\) different from \(u_{r}\), father\((u)\) denotes the father of \(u\) in \(T\).
We now consider a Depth-First Search of \(T\) from \(u_{r}\). The vertices of \(P\) have the priority. Along this route, we label the vertices of \(T\). A vertex is labelled when all of its children are labelled. We denote by \(v_{1},\ldots,v_{n-\omega(G)+1}\) the vertices of \(T\) in this labelling. Note that \(v_{1}\) corresponds to \(u_{0}\) and \(v_{n-\omega(G)+1}\) corresponds to \(u_{r}\).
Now, for each \(i\in\{1,\ldots,n-\omega(G)\}\), we denote by \(a_{i}\) the unique vertex of \(G\) that belongs to \(B_{v_{i}}\) but not to \(B_{\mathrm{father}(v_{i})}\) (recall that \((T,\mathcal{X})\) is valid, so \(a_{i}\) is well defined). We finally label \(a_{n-\omega(G)+1},\ldots,a_{n}\) the remaining vertices of \(G\) in \(B_{u_{r}}\) in an arbitrary way. See Figure 4 for an example of building \(a_{1},\ldots,a_{n}\).
We will now prove that \((a_{i})_{1\leq i\leq n}\) satisfies the two properties of the statement. First observe that, for every \(i\in[n]\), \(N(a_{i})\subseteq\{a_{1},\ldots,a_{i-1}\}\cup B_{v_{i}}\) because \(a_{i}\notin\bigcup_{j=i+1}^{n-\omega(G)+1}B_{v_{j}}\). Hence we have \(|N(a_{i})|\leq i-1+\omega(G)-1=\omega(G)+i-2\), which shows (P1).
To show that (P2) holds, we fix \(k\in[n]\). Note that the result is trivially true when \(k\geq n-\omega+1\), thus we assume that \(k\leq n-\omega\). Hence, both \(v_{k}\) and father\((v_{k})\) are well defined. We set \(X_{T}=\{v_{1},\ldots,v_{k}\}\), \(X_{G}=\{a_{1},...,a_{k}\}\) and we let \(T^{\prime}\) be the smallest subtree of \(T\) that contains all vertices of \(X_{T}\). Let \(\ell\) be the largest integer such that \(u_{\ell}\) belongs to \(V(T^{\prime})\) (\(\ell\) is well defined because \(T^{\prime}\) contains \(v_{1}=u_{0}\)). We root \(T^{\prime}\) in \(u_{\ell}\).
We will now show that \(T^{\prime}\) contains at most \(2k\) vertices. If \(u_{\ell}=v_{k}\), then the vertices of \(T^{\prime}\) are exactly \(\{v_{1},\ldots,v_{k}\}\) and this is clear. Otherwise let us show that \(T^{\prime\prime}=T^{\prime}\setminus X_{T}\) contains at most \(k\) vertices, and we will get the result since \(|X_{T}|=k\). By construction we know that every descendant of a vertex \(v_{i}\) is labelled less than \(i\). Hence, \(T^{\prime\prime}=T^{\prime}\setminus X_{T}\) is a tree rooted in \(u_{\ell}\).
Assume first that \(T^{\prime\prime}\) contains at least two leaves \(f_{1}\) and \(f_{2}\) different from \(u_{\ell}\) (\(u_{\ell}\) may be a leaf if it has only one child). We denote by \(P_{1}\) and \(P_{2}\) the two paths from their lowest common ancestor to \(f_{1}\) and \(f_{2}\) respectively. Without loss of generality, we assume that \(f_{1}\) is before \(f_{2}\) in \((v_{1},\ldots,v_{n})\). Since \(f_{2}\) has a child \(g_{2}\) in \(X_{T}\) and by construction of \((v_{i})_{1\leq i\leq n}\), the internal vertices of \(P_{1}\) are before \(g_{2}\) in \((v_{1},\ldots,v_{n})\). This implies that
Figure 3: The digraphs \(D_{1,0}\), \(D_{3,0}\) and \(D_{5,0}\).
all internal vertices in \(P_{1}\) must belong to \(X_{T}\), which contradicts the existence of \(f_{1}\). This shows that \(T^{\prime\prime}\) must have exactly two leaves (one of them is \(u_{\ell}\)) and then \(T^{\prime\prime}\) is a path rooted in \(u_{\ell}\). Since \(P\) is a longest path in \(T\), we get that \(|V(T^{\prime\prime})|\leq\ell\leq k\) and \(T^{\prime}\) contains at most \(2k\) vertices as desired.
We now consider the set \(N_{G}=\{a_{j}\in V(G)\mid v_{j}\in V(T^{\prime})\setminus\{u_{\ell}\}\}\). Let \(x\) be any vertex in \(X_{G}\). Then every neighbour of \(x\) must belong to some bag in \(T^{\prime}\). Moreover, if a vertex belongs to a bag of \(T^{\prime}\), then either it belongs to \(B_{u_{\ell}}\) or it belongs to \(N_{G}\). Then the neighbourhood of \(x\) is a subset of \(N_{G}\cup B_{u_{\ell}}\). Also, \(x\) itself belongs to \(N_{G}\). Since \(x\) is any vertex in \(X_{G}\), we have:
\[\bigcup_{x\in X_{G}}N[x]\subseteq(N_{G}\cup B_{u_{\ell}})\]
Since \(|N_{G}|\leq 2k-1\) and \(|B_{u_{\ell}}|=\omega(G)\), we get (P2).
In order to prove Theorem 6, we prove the following more general result.
**Theorem 15**.: _Let \(D\) be a super-orientation of a chordal graph \(G\) such that \(\operatorname{Mad}(B(D))\leq d\). For every \(\varepsilon>0\), we have_
\[\vec{\chi}(D)\leq\left(\frac{1+\varepsilon}{2}\right)\omega(G)+\frac{d}{ \varepsilon}+1\]
Proof.: Let \(\varepsilon>0\) and \(d\geq 1\), we assume that \(\varepsilon\leq 1\) for otherwise the result is trivial. We fix \(c_{d,\varepsilon}=\max\left(\left\lceil\frac{d}{2\varepsilon}\right\rceil, \frac{3}{4}d+\frac{d}{8\varepsilon}+\frac{1}{2}\right)\). Straightforward calculations imply \(c_{d,\varepsilon}\leq\frac{d}{\varepsilon}+1\). We will show that every super-orientation \(D\) of a chordal graph \(G\) with \(\operatorname{Mad}(B(D))\leq d\) satisfies
\[\vec{\chi}(D)\leq\left(\frac{1+\varepsilon}{2}\right)\omega(G)+c_{d,\varepsilon}\]
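The "straightforward calculations" above can be spelled out as follows (a quick sketch, using \(\varepsilon\leq 1\) and \(d\geq 1\)): on the one hand, \(\left\lceil\frac{d}{2\varepsilon}\right\rceil\leq\frac{d}{2\varepsilon}+1\leq\frac{d}{\varepsilon}+1\); on the other hand,

\[\frac{3}{4}d+\frac{d}{8\varepsilon}+\frac{1}{2}\leq\frac{d}{\varepsilon}+1\iff\frac{3}{4}d\leq\frac{7d}{8\varepsilon}+\frac{1}{2},\]

and the last inequality holds since \(\varepsilon\leq 1\) gives \(\frac{7d}{8\varepsilon}\geq\frac{7}{8}d\geq\frac{3}{4}d\).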
We prove it by reductio ad absurdum, so assume that \(D=(V,A)\) is a smallest counterexample, meaning that \(\vec{\chi}(D)>\left(\frac{1+\varepsilon}{2}\right)\omega(G)+c_{d,\varepsilon}\) and that \(D\) has the minimum number of vertices among such digraphs. Thus \(D\) must be vertex-dicritical (meaning that \(\vec{\chi}(H)<\vec{\chi}(D)\) for every proper induced subdigraph \(H\) of \(D\)), for otherwise there exists a vertex \(x\in V\) such that \(\vec{\chi}(D-x)=\vec{\chi}(D)\), and \(D-x\) would be a smaller counterexample.
For simplicity of notation, from now on we write \(\omega\) for \(\omega(G)\). Let \(v\) be any vertex of \(D\) and \(\alpha\) be any optimal dicolouring of \(D-v\) (meaning that \(\alpha\) uses exactly \(\vec{\chi}(D)-1\) colours). Then \(\alpha\) cannot be extended to \(D\) without using a new colour for \(v\) (because \(D\) is dicritical). Since every digon (incident to \(v\)) may forbid at most one colour at \(v\), and each pair of simple arcs (incident to \(v\))
Figure 4: A chordal graph \(G\) (on the left) and its valid tree-decomposition \(T\) (on the right). The orange dashed arcs represent the chosen maximum path \(P\). The ordering \(a_{1},\ldots,a_{n}\) of \(V(G)\) we built is \(a,b,c,i,j,h,d,l,e,f,g,k,m\).
may forbid at most one colour at \(v\), we get the following inequalities with \(\mathrm{dig}(v)\) the number of digons incident to \(v\):
\[\mathrm{dig}(v)+\frac{|N(v)|-\mathrm{dig}(v)}{2}\geq\vec{\chi}(D)-1>\left(\frac{1+\varepsilon}{2}\right)\omega+c_{d,\varepsilon}-1, \tag{1}\]
implying
\[\mathrm{dig}(v)>(1+\varepsilon)\omega+2c_{d,\varepsilon}-2-|N(v)|. \tag{2}\]
Note that these inequalities hold for every vertex \(v\) of \(D\). By Lemma 14, there is an ordering \(a_{1},\ldots,a_{n}\) of \(V(D)\) such that, for any \(i\in[n]\),
\[|N(a_{i})|\leq\omega+i-2\tag{P1}\]
and
\[\left|\bigcup_{j=1}^{i}N[a_{j}]\right|\leq\omega+2i-1.\tag{P2}\]
Let us fix \(i=\left\lceil\frac{d}{2\varepsilon}\right\rceil\). Note that \(i\leq c_{d,\varepsilon}\). Thus, since \(\vec{\chi}(D)>c_{d,\varepsilon}\), we obviously have \(i\leq n\). Let \(X=\{a_{j}\mid j\leq i\}\) and \(W=\bigcup_{j=1}^{i}N[a_{j}]\). Together with inequality (2), property (P1) implies, for every \(j\in[i]\), \(\mathrm{dig}(a_{j})>\varepsilon\omega+2c_{d,\varepsilon}-j\). Hence we get:
\[\sum_{v\in X}\mathrm{dig}(v)=\sum_{j=1}^{i}\mathrm{dig}(a_{j})>\varepsilon \omega i+2c_{d,\varepsilon}i-\frac{i(i+1)}{2} \tag{3}\]
By (P2), we know that \(|W|\leq\omega+2i-1\). Thus \(D\langle W\rangle\) contains at most \(\frac{d}{2}(\omega+2i-1)\) digons. Similarly, since \(|X|=i\), \(D\langle X\rangle\) contains at most \(\frac{di}{2}\) digons. When we sum \(\mathrm{dig}(v)\) over all vertices \(v\) in \(X\), we count exactly once every digon between \(X\) and \(W\setminus X\), and exactly twice every digon in \(X\). Then, the following is a consequence of (3).
\[\varepsilon\omega i+2c_{d,\varepsilon}i-\frac{i(i+1)}{2}<\sum_{v \in X}\mathrm{dig}(v) \leq \mathrm{dig}(D\langle W\rangle)+\mathrm{dig}(D\langle X\rangle)\] \[\leq \frac{d}{2}(\omega+2i-1)+\frac{di}{2}\]
Since \(i=\left\lceil\frac{d}{2\varepsilon}\right\rceil\), we conclude that \(c_{d,\varepsilon}<\frac{3}{4}d+\frac{d}{8\varepsilon}+\frac{1}{2}\), a contradiction.
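For the reader's convenience, this last step can be unpacked (a sketch): since \(i\geq\frac{d}{2\varepsilon}\), we have \(\varepsilon\omega i\geq\frac{d\omega}{2}\), so the displayed inequality gives

\[2c_{d,\varepsilon}i<\frac{d\omega}{2}-\varepsilon\omega i+\frac{3di}{2}-\frac{d}{2}+\frac{i(i+1)}{2}\leq\frac{3di}{2}+\frac{i(i+1)}{2}.\]

Dividing by \(2i\) and using \(i\leq\frac{d}{2\varepsilon}+1\) then yields

\[c_{d,\varepsilon}<\frac{3}{4}d+\frac{i+1}{4}\leq\frac{3}{4}d+\frac{d}{8\varepsilon}+\frac{1}{2},\]

contradicting the definition of \(c_{d,\varepsilon}\).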
The proof of Theorem 6 now follows.
**Theorem 6**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). If \(\mathrm{Mad}(B(D))\leq d\), then_
\[\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+O(\sqrt{d\cdot\omega(G)}).\]
Proof.: This is a direct consequence of Theorem 15 applied for \(\varepsilon=\sqrt{\frac{d}{\omega(G)}}\).
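Spelling out the substitution: with \(\varepsilon=\sqrt{d/\omega(G)}\), Theorem 15 gives

\[\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+\frac{1}{2}\sqrt{d\cdot\omega(G)}+\sqrt{d\cdot\omega(G)}+1=\frac{1}{2}\omega(G)+\frac{3}{2}\sqrt{d\cdot\omega(G)}+1.\]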
## 7 \(\overleftrightarrow{C_{4}}\)-free super-orientations of chordal graphs
This section is devoted to the proof of Theorems 7 and 8.
**Theorem 7**.: _Let \(D\) be a super-orientation of a chordal graph \(G\). If \(B(D)\) is \(C_{4}\)-free, then_
\[\vec{\chi}(D)\leq\left\lceil\frac{\omega(G)+3}{2}\right\rceil.\]
Proof.: We assume that \(\omega=\omega(G)\) is odd, otherwise we select an independent set \(I\) of \(D\) such that \(D^{\prime}=D-I\) satisfies \(\omega(\operatorname{UG}(D^{\prime}))=\omega-1\), so \(\omega(\operatorname{UG}(D^{\prime}))\) is odd and \(\vec{\chi}(D)\leq\vec{\chi}(D^{\prime})+1\) (the existence of \(I\) is guaranteed because \(G\) is chordal).
Let \((T,\mathcal{X}=(B_{u})_{u\in V(T)})\) be a valid tree-decomposition of \(G\), that is, each bag \(B\in\mathcal{X}\) has size exactly \(\omega\) and, for every two adjacent bags \(B\) and \(B^{\prime}\), \(|B\setminus B^{\prime}|=1\). Recall that the existence of such a tree-decomposition is guaranteed by Lemma 12. We assume that each bag induces a clique on \(G\), otherwise we just add the missing arcs (oriented in an arbitrary direction). Note that this operation does not increase \(\omega\) nor decrease \(\vec{\chi}(D)\), and does not create any \(\overleftrightarrow{C_{4}}\).
Let \(k=\frac{\omega+3}{2}\). A \(k\)-dicolouring \(\varphi\) of \(D\) is _balanced_ if, for each bag \(B\) and colour \(c\in[k]\), \(0\leq|\varphi^{-1}(c)\cap B|\leq 2\). Note that every balanced \(k\)-dicolouring satisfies \(|\varphi^{-1}(c)\cap B|=1\) for either \(1\) or \(3\) colours. Moreover, in the former case, exactly one colour of \([k]\) is missing in \(\varphi(B)\). We will show that \(\vec{\chi}(D)\leq k\) by proving the existence of a balanced \(k\)-dicolouring \(\varphi\) of \(D\) such that, for each bag \(B\), we have:
1. \(|\varphi^{-1}(c)\cap B|=1\) holds for exactly one colour \(c\), or
2. \(|\varphi^{-1}(c_{i})\cap B|=1\) for exactly three distinct colours \(c_{1},c_{2},c_{3}\) and two vertices of \(\{v_{1},v_{2},v_{3}\}\) are connected by a \(\overleftrightarrow{P_{3}}\) in \(D\) (where \(\{v_{i}\}=\varphi^{-1}(c_{i})\cap B\) and a \(\overleftrightarrow{P_{3}}\) is a bidirected path on \(3\) vertices).
We will say that a bag \(B\) is of type (1) or (2), depending if \(\varphi\) satisfies condition (1) or (2) respectively on \(B\).
We show the existence of \(\varphi\) by induction on the number of bags in the tree-decomposition. If \(|V(T)|=1\), let \(\mathcal{X}=\{B\}\), then \(D\) is a semi-complete digraph on \(\omega\) vertices which is \(\overleftrightarrow{C_{4}}\)-free. We construct \(\varphi\) greedily as follows: choose a simple arc \(uv\) such that both \(u\) and \(v\) have not been coloured yet, and use a new colour for them. At the end, there are either one or three uncoloured vertices. If there is only one, we just use a new colour for it and \(B\) is of type (1), otherwise the three remaining vertices induce a bidirected triangle on \(D\) and we can use one new colour for each of them, so \(B\) is of type (2).
Assume now that \(|V(T)|\geq 2\). Let \(x\) be a leaf of \(T\) and \(y\) its only neighbour in \(T\). Let \(\{u\}=B_{y}\setminus B_{x}\) and \(\{v\}=B_{x}\setminus B_{y}\). By induction, with \(D-v\) and \((T-x,\mathcal{X}\setminus B_{x})\) playing the role of \(D\) and \((T,\mathcal{X})\) respectively, there exists a balanced \(k\)-dicolouring \(\varphi\) of \(D-v\) for which each bag is of type (1) or (2). We will show by a case analysis that \(\varphi\) can be extended to \(v\).
* Assume first that \(B_{y}\) is of type (1), and let \(r\) be the only vertex alone in its colour class in \(D\langle B_{y}\rangle\). If \(r=u\), then we set \(\varphi(v)=\varphi(u)\) and \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (1). Henceforth assume \(u\neq r\). Let \(w\) be the neighbour of \(u\) in \(B_{y}\) such that \(\varphi(w)=\varphi(u)\). Since \(u\) and \(v\) are not adjacent, setting \(\varphi(v)=\varphi(u)\) yields a balanced \(k\)-dicolouring of \(D\), with \(B_{x}\) being of type (1), except if \(w\) and \(v\) are linked by a digon. Analogously, setting \(\varphi(v)=\varphi(r)\) yields a balanced \(k\)-dicolouring of \(D\), with \(B_{x}\) being of type (1) since \(|\varphi^{-1}(c)\cap B_{x}|=1\) holds only for \(c=\varphi(w)\), except if \(r\) and \(v\) are linked by a digon. But then, if both \([v,w]\) and \([v,r]\) are digons, we can set \(\varphi(v)\) to the missing colour of \(\varphi(B_{y})\). Then \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (2), since \(|\varphi^{-1}(c)\cap B_{x}|=1\) holds exactly for every \(c\in\{\varphi(w),\varphi(v),\varphi(r)\}\) with \(r,w\) being connected by a \(\overleftrightarrow{P_{3}}\) in \(D\).
* Henceforth assume that \(B_{y}\) is of type (2) and let \(r,s,t\) be the only vertices alone in their colour class in \(D\langle B_{y}\rangle\) such that \(s\) and \(t\) are connected by a \(\overleftrightarrow{P_{3}}\) in \(D-v\). If \(u=r\), then we set \(\varphi(v)=\varphi(u)\) and \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (2).
Assume now that \(u\in\{s,t\}\). Without loss of generality, we assume that \(u=s\). If \(r\) and \(v\) are not linked by a digon, we can set \(\varphi(v)=\varphi(r)\) and \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (1). The same argument holds if \(t\) and \(v\) are not linked by a digon. But if both \([v,r]\) and \([v,t]\) are digons, we can set \(\varphi(v)=\varphi(s)\). Then \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (2), since \(|\varphi^{-1}(c)\cap B_{x}|=1\) holds exactly for every \(c\in\{\varphi(v),\varphi(r),\varphi(t)\}\) with \(r,t\) being connected by a \(\overleftrightarrow{P_{3}}\) in \(D\). Assume finally that \(u\notin\{r,s,t\}\) and let \(w\) be the neighbour of \(u\) in \(B_{y}\) such that \(\varphi(w)=\varphi(u)\). If \(r\) and \(v\) are not linked by a digon, we can set \(\varphi(v)=\varphi(r)\) and \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (2), where \(|\varphi^{-1}(c)\cap B_{x}|=1\) holds exactly for every \(c\in\{\varphi(w),\varphi(s),\varphi(t)\}\) with \(s,t\) being connected by a \(\overleftrightarrow{P_{3}}\) in \(D-v\). The same argument holds if \(v\) and \(w\) are not linked by a digon. Henceforth we assume that both \([v,w]\) and \([v,r]\) are digons. Since \(D\) is \(\overleftrightarrow{C_{4}}\)-free, and because \(s,t\) are connected by a \(\overleftrightarrow{P_{3}}\) in \(D-v\), we know that either \([v,s]\) or \([v,t]\) is not a digon of \(D\). Assume without loss of generality that \([v,s]\) is not a digon; then we set \(\varphi(v)=\varphi(s)\). Then \(\varphi\) is a balanced \(k\)-dicolouring of \(D\) with \(B_{x}\) being of type (2), since \(|\varphi^{-1}(c)\cap B_{x}|=1\) holds exactly for every \(c\in\{\varphi(w),\varphi(r),\varphi(t)\}\) with \(w,r\) being connected by a \(\overleftrightarrow{P_{3}}\) in \(D\).
**Theorem 8**.: _For every fixed \(k\geq 3\) and every \(n\in\mathbb{N}\), there exists a super-orientation \(D_{k,n}\) of a chordal graph \(G_{k,n}\) on at least \(n\) vertices such that \(B(D_{k,n})\) is a disjoint union of paths, \(\omega(G_{k,n})=k\) and \(\vec{\chi}(D_{k,n})=\left\lfloor\frac{k+3}{2}\right\rfloor\)._
Proof.: We only have to prove it for \(k=3\). For larger values of \(k\), we build \(D_{k,n}\) from \(D_{k-1,n}\) or \(D_{k-2,n}\) as in the proof of Theorem 5. The digraph \(D_{3,n}\), depicted in Figure 5, is clearly a super-orientation of a 2-tree. As a consequence of Theorem 13, it has dichromatic number 3, since it is obtained from successive Hajós joins applied on \(\overleftrightarrow{K_{3}}\).
## 8 Further research
In this work, we gave both lower and upper bounds on the dichromatic number of orientations and super-orientations of different classes of chordal graphs and cographs. Many questions arise, and we detail a few of them.
First, we do not know if the bound of Theorem 6 is optimal, and we ask the following.
**Question 16**.: Does there exist a computable function \(f\) such that every super-orientation \(D\) of a chordal graph \(G\) satisfies \(\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+f(\operatorname{Mad}(B(D)))\)?
We also ask if Theorem 7 is true not only for \(\overleftrightarrow{C_{4}}\)-free digraphs but for every \(\overleftrightarrow{C_{\ell}}\)-free digraphs.
**Question 17**.: For every \(\ell\geq 3\), does there exist \(k_{\ell}\in\mathbb{N}\) such that every \(\overleftrightarrow{C_{\ell}}\)-free super-orientation \(D\) of a chordal graph \(G\) satisfies \(\vec{\chi}(D)\leq\frac{1}{2}\omega(G)+k_{\ell}\)?
A famous class of graphs is the class of claw-free graphs (a graph is _claw-free_ if it does not contain \(K_{1,3}\) as an induced subgraph). Line-graphs and proper interval graphs are examples of claw-free graphs. We ask the following.
Figure 5: The digraph \(D_{3,n}\).
**Question 18**.: Let \(\vec{G}\) be an orientation of a claw-free graph \(G\). Is it true that \(\vec{\chi}(\vec{G})=O\left(\frac{\omega(G)}{\log\omega(G)}\right)\)?
A celebrated conjecture of Erdős and Neumann-Lara (see [9]) states that every orientation \(\vec{G}\) of a graph \(G\) satisfies \(\vec{\chi}(\vec{G})=O\left(\frac{\Delta(G)}{\log\Delta(G)}\right)\). Since every claw-free graph \(G\) satisfies \(\Delta(G)\leq 2\omega(G)-2\), the question above is a consequence of Erdős and Neumann-Lara's conjecture.
|
2306.17500 | Empirical Interpretation of the Relationship Between Speech Acoustic
Context and Emotion Recognition | Speech emotion recognition (SER) is vital for obtaining emotional
intelligence and understanding the contextual meaning of speech. Variations of
consonant-vowel (CV) phonemic boundaries can enrich acoustic context with
linguistic cues, which impacts SER. In practice, speech emotions are treated as
single labels over an acoustic segment for a given time duration. However,
phone boundaries within speech are not discrete events, therefore the perceived
emotion state should also be distributed over potentially continuous
time-windows.
This research explores the implication of acoustic context and phone
boundaries on local markers for SER using an attention-based approach. The
benefits of using a distributed approach to speech emotion understanding are
supported by the results of cross-corpora analysis experiments. Experiments
where phones and words are mapped to the attention vectors along with the
fundamental frequency to observe the overlapping distributions and thereby the
relationship between acoustic context and emotion. This work aims to bridge
psycholinguistic theory research with computational modelling for SER. | Anna Ollerenshaw, Md Asif Jalal, Rosanna Milner, Thomas Hain | 2023-06-30T09:21:48Z | http://arxiv.org/abs/2306.17500v1 | # Empirical Interpretation of the Relationship Between Speech Acoustic Context and Emotion Recognition
###### Abstract
Speech emotion recognition (SER) is vital for obtaining emotional intelligence and understanding the contextual meaning of speech. Variations of consonant-vowel (CV) phonemic boundaries can enrich acoustic context with linguistic cues, which impacts SER. In practice, speech emotions are treated as single labels over an acoustic segment for a given time duration. However, phone boundaries within speech are not discrete events, therefore the perceived emotion state should also be distributed over potentially continuous time-windows.
This research explores the implication of acoustic context and phone boundaries on local markers for SER using an attention-based approach. The benefits of using a distributed approach to speech emotion understanding are supported by the results of cross-corpora analysis experiments. Experiments are conducted in which phones and words are mapped to the attention vectors, along with the fundamental frequency, to observe the overlapping distributions and thereby the relationship between acoustic context and emotion. This work aims to bridge psycholinguistic theory research with computational modelling for SER.
emotion recognition, context modelling, speech, attention, computational paralinguistics, acoustic modelling.
## I Introduction
Speech emotion understanding and recognition (SER) is a complex research area, with modelling approaches that aim to adapt to speech variability, while reducing redundancy in acoustic and linguistic perceptual cue recognition. These approaches are particularly challenging to develop because the target labels, or the perceived emotion states, can be considered very subjective and biased by cultural and linguistic perception differences. Speech emotion, within the domain of SER, is typically represented by two approaches: categorical and dimensional. Speech acoustic segments can be treated as a categorical entity consisting of discrete emotions such as _happy_, _sad_, _fear_, etc. [1]. In the categorical approach, annotators label audio segments as emotion categories and use them to model speech emotion. The dimensional approach proposes two fundamental dimensions, valence and arousal, to represent emotion at a given time [2].
Typically, when a speech emotion corpora is created, each audio segment is labelled as a specific emotion category by the annotators, and it is assumed that the whole audio segment signifies that single emotion label [3, 4, 5]. It is theorised that the perceptual cues for phone boundaries and acoustic context are ambiguous as they share information for various emotion states [6, 7]. The acoustic stimuli change in speech segments are distributed events and can therefore overlap. From a psycholinguistic perspective, these distributed, continuous stimuli transitions constitute theories of human perception of SER [6, 7]. The context cues can be of different lengths, and the perceptual acoustic context can be modelled with different length acoustic cues. Work from [8] shows that speech emotion can be modelled with small acoustic cues (200 ms). Therefore, the assumption that each acoustic speech segment is attributed to only one emotion state likely negatively impacts recognition performance. Multiple sub-emotions can be present depending on the contextual variation between different segment regions, as shown in Figure 1. This paper focuses on acoustic perceptual cues and the implication of the length and the distribution of these cues over speech audio segments for SER.
Previous research for SER mainly focuses on modelling generalised emotion with different neural network architectures while adapting to speech variability and reducing redundancy for speaker invariance to improve SER capability [9, 10, 11, 12, 13]. As the focus of current research has shifted towards embedding modelling and left-right context cues, work by [14] proposed a spatial representation learning method with CNNs, to model mid to long-term sequence dependencies. After the advent of the transformer architecture, the SER models focused more on transformer-based and multi-head fusion-based modelling approaches [15, 16].
There remains a gap between the psycholinguistic and cognitive theories regarding speech emotion perceptual cues and the currently developed computational modelling methods. Research focusing on interpretability is still underdeveloped for SER models, particularly where the model's internal intricacies and representations with the corresponding acoustic segments can be explained. This work attempts to find a mutual accord with the theories of speech emotion perception cues across
Fig. 1: An example of distributed emotions where labelling an utterance as a single discrete category could be overlooking other perceived emotions
multiple disciplines and bridge the gap to speech emotion models. By projecting model attention weights across different time frames (based on various acoustic cues) of the acoustic segment, the emotion classification is observed to shift. Several corpora have been considered to demonstrate the task across various types of speech emotion data (acted, natural, elicited).
This manuscript is organised as follows. Section I introduces the research domain and the particular research goal of the work, along with some background information. Section II discusses context modelling and introduces the idea of overlapping context regions and phone units. Section III presents the consonant-vowel (CV) boundaries and phonemic overlapped regions and their significance in speech emotion perception cues and recognition. Section IV explains the underlying SER model for the interpretation framework and attention. Section V describes the cross-corpus data, features, experimental framework, and presents the results and graphs. Section VII discusses the interpretation of the presented results and suggested directions for the development of future work.
## II Context Modelling
Context cues for speech emotion can be described as linguistic and paralinguistic. The linguistic aspects consist of semantic structure of the speech segment and the textual meaning. The nonverbal or paralinguistic aspects provide a rich source of perceptual context cues that facilitates projecting expressiveness in social discourse in both intra-cultural and cross-cultural scenarios [17, 18]. Although verbal comprehension mainly dictates social discourse, perceptual context cues can deliver meaning and emotion independent of the verbal comprehension using the acoustic changes that influence the speech delivery [19, 20]. Work in [6] used psychoacoustic features (such as tempo, prosodic contour, loudness etc.) for modelling emotion and concluded that different emotional states have different perceptual cues and that they are subjective to individual contexts despite having a universal representation of emotion states. Furthermore, the acoustic contexts are not orthogonal, and the shared information/dimensions represents the redundant acoustic stimuli which provide context [6, 7]. Naturally, if the acoustic stimuli changes, the perceptual context cue will also change accordingly. If the acoustic stimuli are redundant for the cues that define emotion states, these stimuli share overlapping regions. Typically, a 'phone' is regarded as one of the smallest units of an acoustic speech sound. To explore the implication of the various stimuli regions, the phone boundaries should be explored. The CV boundaries for context cues are discussed in Section III.
The authors in [21] have presented left context (referred to as "forward effects" by the authors), right context (referred to as "backward effects" by the authors), proximal context and distal acoustic context cues in acoustic events over time. The sensory attention emphasises the change among these acoustic stimuli, which maximises the potential information for facilitating speech perception [22]. The stimuli change at a particular time over left-right time frames, reflecting the emotion state and speech perception cue at that given point in time. Therefore, it can be assumed that emotion is a distributed event in acoustic segments, not a single discrete emotion category. To investigate this hypothesis, a simple computational model of left-right modelling with attention has been applied in Section IV.
## III Linguistic Boundaries
Contextual cues, consisting of phonetic aspects for speech, can be used to aid the determination of the emotional state at a given time. The phonological forms can have similarities and dissimilarities among the phone boundaries. A clear distinction has been found between the clusters of vowel and consonant phone datapoints by work from [23, 24]. The consonant phones play a decisive role in word meaning comprehension, such that removing initial prosodic variations in vowel phones (acoustic reduction) enhances word intelligibility [23]. However, contrasting studies showed that replacing intermittent consonants with noises or change in emphasis on vowels, increases the perceived intelligibility of words and sentences to human listeners [25, 26]. It is argued that vowel phones are more responsible for defining the emotional state of the speech acoustics, and intelligibility due to stressed vowel regions and wide harmonic variations [24, 27, 28].
Furthermore, the harmonic variations and variations in the pitch within vowels, change the CV boundaries over time and contextual cues related to acoustic perception. These continuous perceptual context cues are distributed over CV boundaries in acoustic segments [27]. Thus it may be possible that at different left-right time-frames, different regions from the same acoustic segment may be categorised differently. This can be described as the relationship between perceptual CV cues with the acoustics, which has been referred to as acoustic-phonetic context for speech perception [29, 30]. The aim of this work is to understand the distributed nature of these perceptual acoustic cues which form intra-linguistic determinism between acoustic structure and meaning that humans perceive as emotion. Here, meaning and intelligibility are explored only from acoustic segments as no language model or external multi-modal data has been used.
## IV Model architecture: _BLSTMATT_
The focus of this work is to explore perceptual acoustic cues and their relationship to current speech emotion recognition modelling. Developing and training large-scale SER models is out of the scope of this work, as the approach aims to establish the concept of this relationship. Since the previously discussed theories of speech emotion perception take into account past and future context, this can be modelled as a form of left and right acoustic cues. The chosen modelling approach utilises a bidirectional long short term memory (LSTM) neural network with a subsequent attention layer, referred to as _BLSTMATT_. An overview of the model structure is displayed in Figure 2.
LSTM networks are unable to exploit the future context and instead solely follow the temporal order of the sequence, whereas bidirectional LSTMs [31] comprise an additional layer of hidden connections which allows temporal information to pass in the opposite direction in order to exploit both future and past contextual information [32]. The hidden connections \(\mathbf{h}^{n}\) are iteratively compiled:
\[h_{t}^{n}=\mathcal{H}(\mathcal{W}_{h^{n-1}h^{n}}h_{t}^{n-1}+\mathcal{W}_{h^{n}h^{ n}}h_{t-1}^{n}+b_{h}^{n}) \tag{1}\]
where \(\mathcal{W}\) defines the weight matrices, \(\mathcal{H}\) represents the hidden layer function and \(b\) refers to the bias vector. Using this approach, temporal feature distribution over the sequence can be obtained, which is more effective for SER tasks [33].
The attention mechanism enables computation of longer-term inter-sequence dependencies. The additive method for computing attention from [34] is applied in this approach. Utilising the global mean, the attention mechanism enables the network to attend to specific parts of itself, which in turn captures global information. The non-linearity \(tanh\) is applied when multiplying the global mean over the whole temporal vector, which computes the positional dependency of each element. Let \(\mathcal{H}_{s}\) denote the matrix of output vectors from the LSTM layer for view \(s\) of the context; by averaging \(\mathcal{H}\) over time across contextual modalities and repeating the result until it matches the dimension of \(\mathcal{H}_{s}\), the shared memory matrix \(\mathcal{M}\) is formed. With \(\tau\) indexing iterations, \(\mathcal{V}\) denotes the parameters controlling the influence within the view and from the shared memory:
\[\gamma^{(\tau)}=tanh(\mathcal{V}_{s1}^{(\tau)}tanh(\mathcal{H}_{s1}))\cdot tanh (\mathcal{V}_{s2}^{(\tau)}\mathcal{M}^{(\tau)}) \tag{2}\]
\[\alpha_{s}^{(\tau)}=\mathcal{V}_{s3}^{(\tau)T}\gamma^{(\tau)} \tag{3}\]
\(\mathcal{V}_{s1}\), \(\mathcal{V}_{s2}\) and \(\mathcal{V}_{s3}\) are parameters used to compute the attention strength \(\alpha\).
The _BLSTMATT_ model setup consists of 2 x 512 dimension hidden layers feeding into an attention layer, which computes a 128 dimension context vector. For classification, the network uses a fully-connected linear layer which projects the attention output. In order to classify over the number of emotions in the target, the output is normalised with a \(softmax\) layer before the loss is computed.
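For concreteness, the following is a minimal PyTorch sketch of a model along the lines of the _BLSTMATT_ setup described above. The layer sizes (2 x 512 bidirectional hidden units, a 128-dimensional attention space, six output classes) follow the text; the single-query additive attention is a simplification of the multi-view mechanism of Eqs. (2)-(3), and all other details (feature dimension, variable names) are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class BLSTMAttSketch(nn.Module):
    """Minimal sketch of a BLSTM-with-attention SER model (not the authors' code)."""
    def __init__(self, n_feats=23, hidden=512, att_dim=128, n_classes=6):
        super().__init__()
        # 2 stacked bidirectional LSTM layers -> per-frame outputs of size 2*hidden = 1024
        self.blstm = nn.LSTM(n_feats, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.att_proj = nn.Linear(2 * hidden, att_dim)       # 1024 -> 128
        self.att_query = nn.Linear(att_dim, 1, bias=False)   # frame-level scores
        self.out_proj = nn.Linear(att_dim, 2 * hidden)       # 128 -> 1024
        self.classifier = nn.Linear(2 * hidden, n_classes)   # 1024 -> 6 emotions

    def forward(self, x):                           # x: (batch, frames, n_feats)
        h, _ = self.blstm(x)                        # (batch, frames, 1024)
        v = torch.tanh(self.att_proj(h))            # (batch, frames, 128)
        alpha = torch.softmax(self.att_query(v), dim=1)   # attention over frames
        context = (alpha * v).sum(dim=1)            # 128-dim context vector
        logits = self.classifier(torch.tanh(self.out_proj(context)))
        return logits, alpha.squeeze(-1)            # weights kept for interpretation

# usage: logits over 6 emotions plus per-frame attention for a 300-frame batch
model = BLSTMAttSketch()
logits, alpha = model(torch.randn(4, 300, 23))
```

Returning the attention weights alongside the logits mirrors how the paper later maps attention onto aligned words and phones for interpretation.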
## V Experiments
### _Data_
The scope of these experiments covers English-speaking adult datasets of three types: one acted dataset, eNTERFACE [3], one natural dataset, MOSEI [5], and one elicited dataset, IEMOCAP [4]. The emotion classifications represented in each dataset are described briefly below. For each dataset, the big-six emotions [1] are considered in training and testing: _happy_, _sad_, _anger_, _surprise_, _disgust_ and _fear_.
eNTERFACE (ENT) consists of roughly 1 hour of acted English utterances [3]. The training set comprises 38 speakers and the testing set contains the remaining 5 speakers. The data covers 8 female speakers and 35 male speakers from 14 different nations.
IEMOCAP (IEM6) comprises over 12 hours of US-English utterances from 10 speakers (5 female and 5 male) [4]. There are five dyadic sessions (between two speakers) which are specifically scripted or contrived to elicit certain emotions. The training data consists of the first 4 sessions (8 speakers) and the last session is split for the test set (2 speakers). It is common for IEMOCAP to be evaluated as four classes: _happy_, _sad_, _anger_ and _neutral_ (where _excitement_ is combined with _happy_). This test set will be referred to as IEM4.
MOSEI (MOS) is the largest sentiment and emotion dataset with approximately 65 hours of data and more than 1000 speakers [5]. Data is collected from YouTube and the videos are not specifically designed as an emotion dataset so the emotional speech is seen as natural. The official training, validation and test splits for the ACL 2018 conference have been considered, where the training and validation sets are combined for training. These can be found at [https://github.com/A2Zadeh/CMU-MultimodalSDK/blob/master/mmsdk/mmdatasdk/dataset/standard_datasets/CMU_MOSEI/cmu_mosei_std_folds.py](https://github.com/A2Zadeh/CMU-MultimodalSDK/blob/master/mmsdk/mmdatasdk/dataset/standard_datasets/CMU_MOSEI/cmu_mosei_std_folds.py).
### _Features_
Experiments from [35] showed that sequence-based SER systems performed best in terms of unweighted and weighted accuracy with 23-dimensional log-Mel filterbank features; these features are therefore adopted here.
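A hedged sketch of one common way to extract such features is given below; the paper does not state its exact front-end, so the sampling rate, window and hop sizes, and the librosa toolchain are all assumptions.

```python
import librosa
import numpy as np

def log_mel_features(wav_path, n_mels=23, sr=16000):
    """23-dim log-Mel filterbank features; 25 ms windows with a 10 ms hop are assumed."""
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=int(0.025 * sr), hop_length=int(0.010 * sr), n_mels=n_mels)
    return np.log(mel + 1e-6).T    # (frames, 23); log compresses the dynamic range

# feats = log_mel_features("utterance.wav")   # hypothetical file name
```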
### _Implementation_
The _BLSTMATT_ model contains two hidden layers of 512 nodes each. The output layer (size 1024) is passed into the attention mechanism computing a context vector (size 128), which is projected to 1024 nodes. This is then fed into the emotion classifier which linearly projects to the 6 classes. The cross-entropy loss function is applied, preceded by a \(softmax\) layer. The _BLSTMATT_ produces a variable-length attention vector based on the input segment length, as mentioned in Section IV. The attention vectors have been extracted and mapped to the phones and words in the input segments in order to interpret the acoustic attention.
Fig. 2: The _BLSTMATT_ model pipeline consists of 2 bidirectional-LSTM layers, with an attention layer and linear classifier
### _Evaluation_
Unweighted accuracy (UA) and weighted accuracy (WA) are the metrics typically applied for SER evaluation. The UA calculates accuracy in terms of the total correct predictions divided by the total samples, giving the same weight to each sample:
\[UA=\frac{TP+TN}{P+N} \tag{4}\]
where \(P\) is the number of positive instances (equivalent to \(TP+FN\)) and \(N\) is the number of negative instances (equivalent to \(TN+FP\)). As some of the datasets are imbalanced across the emotion classes, see Tables I, II and III, the WA is also calculated, which gives each class equal weight irrespective of the number of samples in that class:
\[WA=\frac{1}{2}(\frac{TP}{P}+\frac{TN}{N}) \tag{5}\]
Further details regarding the implementation of the scoring scripts can be found in [34].
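As a self-contained illustration, the following sketch computes the two scores exactly as written in Eqs. (4) and (5) for a binary positive/negative split; the example counts are invented and the function is not the scoring script of [34].

```python
def ua_wa(tp, tn, fp, fn):
    """Accuracies exactly as in Eqs. (4) and (5) for a binary split.
    UA = (TP + TN) / (P + N);  WA = 0.5 * (TP/P + TN/N)."""
    p, n = tp + fn, tn + fp            # total positive / negative instances
    ua = (tp + tn) / (p + n)
    wa = 0.5 * (tp / p + tn / n)
    return ua, wa

# e.g., 80 of 100 positives and 45 of 50 negatives correct:
# ua_wa(80, 45, 5, 20) -> (0.8333..., 0.85)
```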
### _Acoustic Context_
As discussed in Sections II and III, the recognition of speech emotion is hypothesised to be influenced by overlapping perceptual acoustic cues consisting of variation in the phone boundaries. So, in theory, if the phone boundaries are shifted, the emotion classification may differ from the previously predicted emotion state that considered the whole segment. To further explore this hypothesis, the acoustic context is changed in the following series of experiments.
Experiments are performed removing frames from the end and beginning of the original, whole test segments. In Tables I, II and III, this is listed in the first column labelled 'skip frames (left-right)', where a number of frames are skipped, or removed, from the left and right (left and right context) of each test segment. For example, 20-200 means 20 frames have been removed from the left context of each test segment and 200 frames have been removed from the right context of each test segment. Table I shows the results where right frames are skipped, Table II shows results where only left frames are skipped and Table III shows results where both left and right frames are skipped. If the length of a test utterance is less than the length of context frames, the test utterance remains unchanged. Therefore, when the skipped context frames become longer, such as 200-100 (meaning a total of 300 frames to be removed), only the test segments with more than 300 frames are used. The percentage of each test corpus that is modified with the context is also reported. For example, in the SEGS% column, 91.3% means that 8.7% of the test segments from the corresponding corpus remain the same due to shorter segment length and 91.3% of the test segments are modified with the corresponding context. The weighted and unweighted accuracy are reported along with the change in the context length.
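A minimal sketch of this frame-skipping protocol, assuming features are stored as a (frames, dimensions) matrix, is:

```python
import numpy as np

def skip_context(feats, left, right):
    """Drop `left` frames from the start and `right` frames from the end of a
    (frames, dims) feature matrix; segments shorter than the requested context
    are returned unchanged, as in the protocol above."""
    if feats.shape[0] <= left + right:
        return feats
    return feats[left:feats.shape[0] - right]

feats = np.zeros((400, 23))
trimmed = skip_context(feats, 20, 200)   # the 20-200 condition: 180 frames remain
```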
As the experiments consider context length variations, the baseline for this work is the result when no left or right context is removed. This is the first line in all Tables with context 0-0. It is the emotion modelling baseline where one emotion is given for each complete test utterance. For further details about the validity of the _BLSTMATT_ model, please see work in [35] and [8].
## VI Results
The experimental results in Tables I, II and III suggest that the SER results change when either the left, right or both contexts are changed. For example, the model tested on MOS has a UA of 73.3% without changing the context length, but upon skipping 100 right context frames (skip frames 0-100 in Table I) the UA degrades. The same UA degradation occurs when skipping left frames or both left and right frames, while there is a slight improvement in UA when skipping 30 right frames. In the case of skip frames 0-30, removing 30 frames from the end of each segment modifies 100% of the segments across all the testsets. The results for ENT and IEM6 are worse for both UA and WA, but for MOS the performance improves. For IEM4, the UA degrades whereas the WA improves. The majority of the results across all the datasets degrade upon varying the context length, because the target label supplied with those segments is a fixed discrete emotion category. This finding corroborates the initial hypotheses that speech emotion is not a fixed entity that remains the same over the whole audio segment, and that it is instead distributed over different overlapping shorter context cues.
To observe the relationship between the SER results and the hypotheses regarding the acoustic segments in more detail, the attention weights were extracted for each test utterance and mapped to the aligned words and phones. Additionally, the pitch contour was calculated to understand the pitch correlation with respect to the prosodic utterance using the algorithm found at [https://github.com/google/REAPER](https://github.com/google/REAPER). The attention maps for a sample of the test utterances are presented in Figures 3 and 4: the former from the MOSEI corpus and the latter from the IEMOCAP corpus. Figure 3 shows that the attention projection drifts while changing the phone boundaries of the same audio segment and therefore the emotion state also changes. With context 0-0, the model incorrectly predicts the emotion _sad_ (attention weights in Figure 3 indicated by the red line), whereas removing 20 left frames helps the model correctly predict the emotion _happy_ (attention weights in Figure 3 indicated by the blue line). The attention weights focus more strongly on different portions of the test utterance. Similar behaviour can be seen in Figure 4, where skipping 100 right frames allows the model to make the correct prediction.
| Skip frames (left-right) | UA% ENT | UA% IEM6 | UA% IEM4 | UA% MOS | WA% ENT | WA% IEM6 | WA% IEM4 | WA% MOS | SEGS% ENT | SEGS% IEM6 | SEGS% IEM4 | SEGS% MOS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0-0 | 93.33 | 69.06 | 88.79 | 73.30 | 88.00 | 64.57 | 63.81 | 54.29 | - | - | - | - |
| 0-30 | 86.89 | 68.73 | 88.28 | 73.55 | 76.40 | 63.82 | 64.44 | 54.76 | 100.0 | 100.0 | 100.0 | 100.0 |
| 0-100 | 82.22 | 68.73 | 87.94 | 72.81 | 68.00 | 63.45 | 61.21 | 54.70 | 91.3 | 99.3 | 99.5 | 98.0 |
| 0-200 | 86.22 | 68.09 | 86.63 | 71.53 | 75.20 | 62.08 | 61.32 | 53.63 | 40.0 | 77.1 | 80.2 | 89.4 |

TABLE I: Cross-corpora results with variable context length, where right frames are skipped (SEGS% is the percentage of test segments with modified context).

TABLE II: Cross-corpora results with variable context length, where left frames are skipped.
Fig. 3: A _happy_ MOS utterance, mislabelled as _sad_ with no context removed but correctly labelled as _happy_ with 20 left frames removed, along with the pitch contour
## VII Discussion

The emotion classes in these corpora rely on the corresponding labels given by the annotators, which could have introduced bias factors and added to recognition uncertainty. To attempt to mitigate the inherent biases and to generalise the model perception cues for these experiments, the model is trained with four different corpora consisting of acted, natural and elicited emotions. Consequently, it is argued that the results corroborate the argument that a continuous approach to emotion recognition is the optimal strategy, based on the observed acoustic stimuli shift. This work is an attempt to bridge the gap that current SER models have, by explaining the SER model's internal intricacies and how the representations correspond with acoustic segments.
Figures 3 and 4 show the attention weights' propensity towards the vowel-based regions. This corroborates the claims from the linguistic and cognitive theories about speech emotion recognition and CV boundaries, discussed in Section III, that consonants play a decisive role in word meaning but vowels are more responsible for the emotion perception cues as a result of harmonic variations and stressed regions. The vowels are observed to change the CV boundaries and the context cues for emotion perception, rendering many hard boundaries redundant. This suggests the cues for phone boundaries and acoustic context can share information relative to the perceived emotion state.
For the IEM6 dataset, there was a slight improvement in UA or WA when left context frames were skipped, while recognition on the MOS and IEM4 datasets improved slightly when right context frames were skipped. These results highlight that, where context cues vary in length, it is possible for the acoustic segments to contain more than one distinct emotion state. As the UA and WA vary positively and negatively according to context lengths, this suggests overlapping regions where the acoustic stimuli are more or less informative regarding the emotion state. As future speech emotion datasets are compiled and annotated, if the labels for emotion classes were adjusted to allow for overlapping categories, this could potentially aid the recognition performance of current and future models. These results and insights can also be used to modify computational models and mechanisms that are able to adapt and recognise emotion from various speech domains to be more in line with the psycholinguistic theories.
In the current trend of SER models, emotion labels are treated as discrete labels attributed over a whole segment. The problem explored in this research suggests that this approach assumes that an utterance's global attributes correlate with the local characteristics over different time frames in the same segment for learning one discrete emotion category. This is observed not to be the case most of the time. Vowel-consonant envelopes rapidly change over time, giving rise to different acoustic contexts. Hence the paralinguistic cue also changes with the acoustic context. The results listed in Section VI demonstrate this argument. Moreover, treating acoustic segments and emotion correspondence as a context-oriented continuous relationship should aid emotion recognition models across languages and dialects, due to the distribution of acoustic boundaries across models trained on various emotion data. As a result, it could be possible to learn the variability of acoustic context in speech emotions rather than the variability of acoustic segments in speech emotions.
Fig. 4: A _happy_ IEM4 utterance, mislabelled as _anger_ with no context removed but correctly labelled as _happy_ with 100 right frames removed, along with the pitch contour

Future development of this framework will enable improved emotion modelling by understanding the intermediate representations and relating audio data with the computational models. Furthermore, it will help create more accurate annotations for emotion labels, improving SER corpora generation.
This work argues that discrete categorical emotion classification should not be the preferred approach for developing future SER models, as emotion cues have been observed to present as a distributed event, corroborating cognitive linguistic theory that emotion recognition is likewise continuous. Finding a suitable approach for accurate modelling of emotion states should be the aim of future research.
|
2307.16523 | Human Preferences and Robot Constraints Aware Shared Control for Smooth
Follower Motion Execution | With the continuous advancement of robot teleoperation technology, shared
control is used to reduce the physical and mental load of the operator in
teleoperation system. This paper proposes an alternating shared control
framework for object grasping that considers both the operator's preferences
through their manual manipulation and the constraints of the follower robot.
The switching between manual mode and automatic mode enables the operator to
intervene in the task according to their wishes. The generation of the grasping
pose takes into account the current state of the operator's hand pose, as well
as the manipulability of the robot. The object grasping experiment indicates
that the use of the proposed grasping pose selection strategy leads to smoother
follower movements when switching from manual mode to automatic mode. | Qibin Chen, Yaonan Zhu, Kay Hansel, Tadayoshi Aoyama, Yasuhisa Hasegawa | 2023-07-31T09:45:42Z | http://arxiv.org/abs/2307.16523v1 | # Human Preferences and Robot Constraints Aware Shared Control
###### Abstract
With the continuous advancement of robot teleoperation technology, shared control is used to reduce the physical and mental load of the operator in teleoperation systems. This paper proposes an alternating shared control framework for object grasping that considers both the operator's preferences, expressed through their manual manipulation, and the constraints of the follower robot. The switching between manual mode and automatic mode enables the operator to intervene in the task according to their wishes. The generation of the grasping pose takes into account the current state of the operator's hand pose, as well as the manipulability of the robot. The object grasping experiment indicates that the use of the proposed grasping pose selection strategy leads to smoother follower movements when switching from manual mode to automatic mode.
## 1 Introduction
Shared control, which employs automation to support human operation, has been widely used in teleoperation systems, since it effectively improves performance while reducing the physical and mental strain on users. Object grasping is one of the most important teleoperation tasks and is the subject of this paper. To address this task, previous work [1] developed a shared control system and demonstrated that the system improved grasping performance and reduced operator fatigue. This shared control framework can guide the operator to an appropriate grasping pose with the best manipulability of the robot, by dynamically and continuously blending user intention and automation assistance. This paper aims to address the problem that such shared control systems do not consider human choices. An alternating shared control system is proposed, which segregates the automation and manual aspects, empowering users to intervene at any time. Conventional continuous shared control, on the other hand, maintains the combination of manual input and automation throughout the entire process. Maeda's work [2] demonstrated the advantages of alternating shared control: the alternating approach scored higher for ease of use in subjective ratings while matching the performance of the continuous method. In addition, to achieve a smooth follower motion execution, the target grasping pose is selected from all candidates by taking into account not only the manipulability of the robot, but also the preferences of the operator.
## 2 Shared Control System with Human Preferences
### Pose Remapping in Shared Control System
The alternating shared control is realized by setting a status-switching trigger on the VR controller, and processing the positional and directional gaps between the two states. In our system, the pure teleoperation part establishes a transformation from the VR coordinate system (\(p_{hand,0}^{htc}\), \(p_{hand,k}^{htc}\), where subscript 0 denotes the initial step and \(k\) the \(k\)-th sampling step) to the robot base coordinate system (\(p_{e,k}^{b}\), \(p_{e,0}^{b}\)), by computing the relative position: \(p_{e,k}^{b}=p_{e,0}^{b}+(p_{hand,k}^{htc}-p_{hand,0}^{htc})\). For the rotational motion mapping, the absolute orientation relative to the robot base frame is used: \(R_{e,k}^{b}=R_{htc}^{b}\cdot R_{hand,k}^{htc}\). When the status change is triggered, the position remapping is as follows.
\[p_{e,0}^{b}=p_{e,k}^{b} \tag{1}\]
\[p_{hand,0}^{htc}=p_{hand,k}^{htc} \tag{2}\]
Meanwhile, the new absolute orientation is filtered by spherical linear interpolation (SLERP).
\[R_{e,k}^{b}=SLERP(R_{e,k-1}^{b},R_{htc}^{b}\cdot R_{hand,k}^{htc},\alpha) \tag{3}\]
where \(\alpha\) is the interpolation parameter between 0 and 1, representing the position along the interpolation path from \(R_{e,k-1}^{b}\) to \(R_{htc}^{b}\cdot R_{hand,k}^{htc}\).
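A minimal sketch of the remapping and SLERP filtering with SciPy is shown below; the filter gain \(\alpha = 0.2\), the use of scipy.spatial.transform, and the assumption that positions are NumPy arrays are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def remap_on_switch(p_e_k, p_hand_k):
    """Eqs. (1)-(2): the poses at the switch instant become the new references,
    so the relative position mapping restarts without a jump."""
    return p_e_k.copy(), p_hand_k.copy()              # new p^b_e,0 and p^htc_hand,0

def filtered_orientation(R_prev, R_htc_b, R_hand_k, alpha=0.2):
    """Eq. (3): SLERP between the previous end-effector orientation and the
    newly remapped absolute target; alpha = 0.2 is an assumed filter gain."""
    target = R_htc_b * R_hand_k                       # R^b_htc . R^htc_hand,k
    keyframes = Rotation.from_quat(
        np.vstack([R_prev.as_quat(), target.as_quat()]))
    return Slerp([0.0, 1.0], keyframes)([alpha])[0]   # interpolated R^b_e,k

# usage with placeholder rotations
R_new = filtered_orientation(Rotation.identity(),
                             Rotation.identity(),
                             Rotation.from_euler("z", 30, degrees=True))
```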
### Grasping Pose Selection Based on Human Preferences and Robot Constraints
In [1], the target grasping poses were detected from multiple directions using template-matching-based object point cloud compensation. However, the final selection of the grasping pose was solely based on the best robot manipulability among the library of generated poses. As a result, when utilized in the alternating shared control, the user's preferences were not taken into account. This led to a noticeable jump in robot motion when transitioning from manual mode to automatic mode.
To enable the system to provide automatic assistance based on human manipulation outcomes, this paper proposes a feasible solution. The solution follows these steps: Firstly, for each object, 150 reliable generated grasping poses are stored when the object is not visually occluded by the manipulator.
Secondly, it successively narrows the candidates to those with the closest directional and then positional distances. Thirdly, it selects the target grasping pose with the best robot operability among them. Storing only reliable poses avoids wrong grasping poses being included in the grasping pose library. Given the current quaternion of the end-effector \(q_{ee}\) and the candidates in the grasping pose list \(q_{l}=[q_{1},q_{2},\ldots,q_{i}]\), the system calculates the absolute angular distance to each candidate.
\[d_{a,i}=min(\|q_{ee}+q_{i}\|,\|q_{ee}-q_{i}\|) \tag{4}\]
The distance is the chord length of the shortest path that connects the two quaternions. The top 30 grasping poses closest in orientation are retained in the updated candidate list. Subsequently, the linear distance is calculated between the gripper position \(p_{ee}\) and each candidate \(p_{i}\) from the updated list.
\[d_{l,i}=\|p_{ee}-p_{i}\| \tag{5}\]
The top 6 grasping poses with the shortest distances are collected into a new list. Then, the robot manipulabilities are calculated and the candidate with the highest value is chosen as the target grasping pose. The penalized manipulability, which considers singular configurations \(S(\theta)\) and joint limits \(L(\theta)\) via \(M(\theta)=S(\theta)L(\theta)\), is presented in [1]. The automatic approach towards the target is achieved through interpolation from the current pose to the target pose. Hence, the grasping pose changes according to the user's manipulation and the robot's constraints (Fig. 1(a)(b)).
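The selection pipeline can be sketched as follows; the candidate arrays and the top-30/top-6 cut-offs come from the text, while the `manipulability` callback standing in for the penalized measure \(M(\theta)\) of [1] and everything beyond the two equations are assumptions.

```python
import numpy as np

def select_grasp(q_ee, p_ee, cand_quats, cand_pos, manipulability):
    """Sketch of the described selection: orientation filter (top 30 by Eq. (4)),
    position filter (top 6 by Eq. (5)), then the best penalized manipulability."""
    d_a = np.minimum(np.linalg.norm(cand_quats + q_ee, axis=1),
                     np.linalg.norm(cand_quats - q_ee, axis=1))      # Eq. (4)
    near_rot = np.argsort(d_a)[:30]                                  # top 30 in orientation
    d_l = np.linalg.norm(cand_pos[near_rot] - p_ee, axis=1)          # Eq. (5)
    near_both = near_rot[np.argsort(d_l)[:6]]                        # top 6 in position
    return max(near_both, key=manipulability)                        # index of target pose

# usage with 150 random candidates and a placeholder manipulability measure
rng = np.random.default_rng(0)
quats = rng.normal(size=(150, 4))
quats /= np.linalg.norm(quats, axis=1, keepdims=True)
target = select_grasp(np.array([0.0, 0.0, 0.0, 1.0]), np.zeros(3),
                      quats, rng.normal(size=(150, 3)), lambda i: rng.random())
```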
## 3 Experiment and Result
As the improvement of manipulability on the follower side was evaluated in our previous work [1], here we examine the level of motion smoothness when the user's preferences are carried into the automatic mode. The object grasping experiment is conducted in two modes: one considering only the robot constraints, and the proposed method, which considers both the robot manipulability and the robot's current pose. In each mode, the system first automatically moves to a fixed preparation pose to simulate user operation, and then completes the grasping task in automatic mode. Fig. 1(c) shows the transformation of the robot's end-effector pose from the ready pose to the selected grasping pose. It shows that the proposed grasping pose selection strategy reduces gripper movements, making them smoother, and achieves a shorter completion time since the execution speed is the same (\(0.1m/s\)).
## 4 Conclusions
In this paper, we have applied an alternating shared control that enables human intervention, enhances the comprehensibility of the robot's motion, and smooths the gap between control mode switching. Positional remapping and directional spherical linear interpolation are employed to realize the intervention-capable alternating shared control. The proposed grasping pose selection takes into account both manual operation and robot manipulability; the two factors reflect human preferences and robot constraints, respectively. The result of an object grasping experiment shows that the shared control system carries out the operator's wishes and keeps the follower's motion smooth when switching control modes. Future work includes giving different degrees of freedom different weights in the cost function of grasping pose selection, and enabling people to fine-tune the final grasping pose.
|
2306.17514 | A behaviouristic approach to representing processes and procedures in
the OASIS 2 ontology | Foundational ontologies devoted to the effective representation of processes
and procedures are not widely investigated at present, thereby limiting the
practical adoption of semantic approaches in real scenarios where the precise
instructions to follow must be considered. Also, the representation ought to
include how agents should carry out the actions associated with the process,
whether or not agents are able to perform those actions, the possible roles
played as well as the related events.
The OASIS ontology provides an established model to capture agents and their
interactions but lacks means for representing processes and procedures carried
out by agents. This motivates the research presented in this article, which
delivers an extension of the OASIS 2 ontology to combine the capabilities for
representing agents and their behaviours with the full conceptualization of
processes and procedures. The overarching goal is to deliver a foundational OWL
ontology that deals with agent planning, reaching a balance between generality
and applicability, which is known to be an open challenge. | Giampaolo Bella, Gianpietro Castiglione, Daniele Francesco Santamaria | 2023-06-30T10:01:20Z | http://arxiv.org/abs/2306.17514v1 | # A behaviourist approach to representing processes and procedures in the OASIS 2 ontology
###### Abstract
Foundational ontologies devoted to the effective representation of processes and procedures are not widely investigated at present, thereby limiting the practical adoption of semantic approaches in real scenarios where the precise instructions to follow must be considered. Also, the representation ought to include how agents should carry out the actions associated with the process, whether or not agents are able to perform those actions, the possible roles played as well as the related events.
The OASIS 2 ontology [1, 2] provides an established model to capture agents and their interactions but lacks means for representing processes and procedures carried out by agents. This motivates the research presented in this article, which delivers an extension of the OASIS 2 ontology to combine the capabilities for representing agents and their behaviours with the full conceptualization of processes and procedures. The overarching goal is to deliver a foundational OWL ontology that deals with agent planning, reaching a balance between generality and applicability, which is known to be an open challenge.
Keywords: Semantic Web, Ontology, OWL, Agent, Process, Procedure, Event.

7th Workshop on Foundational Ontology (FOUST) co-located with FOIS 2023, 19-20 July, 2023, Sherbrooke, Quebec, Canada. Corresponding author: D. F. Santamaria. [email protected] (G. Bella); [email protected] (G. Castiglione); [email protected] (D. F. Santamaria)
[https://www.dmi.unict.it/gianp/](https://www.dmi.unict.it/gianp/) (G. Bella); [https://www.dmi.unict.it/santamaria/](https://www.dmi.unict.it/santamaria/) (D. F. Santamaria)
## 1 Introduction
The notions of process and procedure are broadly known in the literature, even outside the computer science field, although they bear subtle differences. According to ISO 9001-2015 definitions, a process is _a set of correlated or interactive activities that transform inputs into outputs_ describing _the specified way of carrying out an activity_, whereas a procedure describes _how to carry out all or part of a process_. Hence, we can argue that a process may be composed of many procedures.
Representing processes and procedures through foundational _Web Ontology Language_ (OWL) ontologies is necessary both for easing the exchange of process descriptions among systems and to provide a machine-understandable interpretation of the related activities. One of the main benefits of using Semantic Web technologies in such a context is the reasoning capability that permits the inference of new facts from existing data and the verification of the knowledge base. For example, in the context of operating systems, _race conditions_[3] could lead to problems related to the lock of resources by concurrent processes. A formalization of the processes,
procedures and agents, together with semantic reasoning, could reduce the ambiguity that may occur when agents attempt to perform multiple operations at the same time.
Defining an ontological architecture for processes and procedures aims at improving so-called "automated planning and acting" [4]. One of the most remarkable attempts to standardize the planning problem is the PDDL (Planning Domain Definition Language) [5], which separates the planning problem into a) domain description and b) problem description. For instance, the PDDL in its current version introduces _derived predicates_ for dependency modelling of properties between objects. Therefore, processes and procedures benefit from the adoption of ontological representations because ontologies can fully address both planning sub-problems.
Although many ontological approaches are available in the literature, they suffer from the lack of a complete and general approach to effectively represent processes and procedures, especially one that a) combines the representation of processes and procedures with agents and their commitments, b) models the events generated during the executions of processes and c) accounts for the roles that are played by the committer agents.
The current paper is motivated by the delivery of such a model. We start by considering the ontological foundations for the domain of multi-agent systems. Specifically, we take into account OASIS 2 [1, 2], a foundational OWL 2 ontology that leverages the behavioural approach derived from the _Theory of Agents_ and the related mentalistic notions. The behaviourist approach is an effective way of semantically describing agents by characterizing their capabilities. Agents are enabled to report the set of activities that they can perform, the types of data required to execute those activities as well as the expected outputs, through the description of their behaviours. Agents' implementation details are abstracted away to make the discovery of agents transparent, automatic, and independent of the underlying technical infrastructure. In consequence, agent commitments are clearly described, and the entire evolution of its environment is unambiguously represented, searchable, and accessible: agents may join a collaborative environment in a plug-and-play fashion, as there is no more need for third-party interventions.
Our work rests on the observation that OASIS 2 has lacked a specific characterization of processes and procedures so far, even though it models so-called _plans_, which are ways of depicting a sequence of (planned) actions to be tackled once. Therefore, plans cannot be applied again once they are performed by agents. In this paper, we extend OASIS 2 to deal with general specifications of processes and procedures that can be consumed by agents, including events and the played role, that can be practically leveraged in real scenarios such as the one concerning race conditions.
The paper is organized as follows. Section 2 presents the related work through a comparison with the OASIS 2 approach; Section 3 introduces the model of OASIS 2 devoted to the representation of agents and their behaviours. Section 4 presents the novel extension of OASIS 2 that deals with the notions of processes and procedures. Section 5 closes the paper with some hints for future outcomes.
## 2 Related Work
In DOLCE[6], the concepts of processes and events are presented as special types of perdurants, whereas functions and roles are formalized in some extensions of the ontology [7, 8]. Aware of
the general definitions provided by DOLCE, OASIS 2 focuses on the behaviourist approach, which is leveraged to provide agents with novel means for representing complex planning and related actions, dealing also with events. Specifically, roles are conceived by OASIS 2 as ways of enabling agents with additional behavioural capabilities, whereas events are represented so as to be aligned with the definition of agent behaviours. From this point of view, OASIS 2 introduces a different conception of events and roles, since it describes agents in terms of their capabilities. A full mapping of OASIS 2 into DOLCE is feasible and is one of our planned future works.
Wang et al. introduce a model for processes related to water quality monitoring [9]. The model is strictly focused on the description of observational processes concerning water pollution monitoring, and is thus of limited applicability outside the domain.
Within CIDOC-CRM [10], there is a work-in-progress extension called CRMact [11] that defines the classes and properties for planning future activities and events. However, CRMact is mainly focused on cultural heritage and documentation records and hence is not generally applicable.
Concerning business processes, it is worth mentioning several works. Thomas et al. propose an ontological approach for representing business processes together with a system architecture prototype exploiting the proposed model [12]. Greco et al. use ontologies as facilities within a framework for assisting designers in the realization and analysis of complex processes [13]. Corea et al. present an approach to verify whether a business process is compliant with given business rules by combining logic programming and ontologies [14], while Calvanese et al. propose an approach for using Ontology-Based Data Access (OBDA) to govern data-aware processes, in particular those executed over a relational database that issue calls to external services to acquire new information and update data [15]. Finally, comparisons among ontologies related to business processes, covering task monitoring, measurement and evaluation strategies, and the modelling of process information, have been carried out [16, 17, 18].
However, the downside of limiting ontological models to business processes lies in the absence of relationships between agents and their commitments, which instead represent the core of agent-oriented representation. This implies the inability to find agents/services with specific capabilities, to invoke them, and to enable their interoperability. Notably, the benefits of process-oriented representations, such as the facilitation mechanisms for the search and selection of process models, are also offered by agent-oriented ones as long as they are sufficiently general yet flexible.
## 3 Preliminaries on OASIS 2
The first version of OASIS [19] is a foundational ontology that leverages the behaviourist approach to characterize agents in terms of the actions they are able to perform, including purposes, goals, responsibilities, information about the world they observe and maintain, and their internal and external interactions. It models the executions and assignments of tasks, restrictions and constraints used to establish agent responsibilities and authorizations.
In the recent past, OASIS has been extended and applied to deal with so-called _Ontological Smart Contracts_[20] and with the ontological models for smart contracts on the blockchain [21]. OASIS is also part of the POC4COMMERCE project [22], funded by the NGI-ONTOCHAIN
consortium [23].
The last version of OASIS is OASIS 2 [24, 1]1, which extends OASIS with some new features, such as the entrustment of agents, and reshapes the model adopted for the representation of agents and their commitments.
Footnote 1: The OASIS 2 ontology can be reached at [2]
Inspired by the _Tropos_ methodology [25], which derives from Agent Oriented Programming (AOP), OASIS 2 represents agents through three essential and publicly shared mental states, namely (expected) _behaviours_, _goals_ and _tasks_. Behaviours represent the mental state of the agent associated with its ability to modify its environment or, in general, act or do something. Goals describe mental attitudes representing preferred progressions of a particular system that the agent has chosen to put effort into bringing about [26]. Tasks depict how to carry out such progressions and describe atomic operations that agents perform.
Agents and their interactions are represented by carrying out three main steps, namely: a) an optional step that consists of modelling descriptions of general abstract behaviours, called _templates_, conceptual characterizations of behaviours from which concrete agent behaviours are drawn; b) modelling concrete agent behaviours, possibly drawn from agent templates; c) modelling actions and associating them with the corresponding behaviours. The first step, which is not mandatory, consists in defining the agent's behaviour template, namely a higher-level description of the behaviour of abstract agents that can be leveraged to define concrete behaviours of real agents; for example, a template may describe the abstract behaviour consisting in obtaining and releasing locks on resources. Additionally, templates are useful to guide developers in the definition of the behaviours of their specific agents. To describe an abstract agent's capabilities to perform actions, an agent template comprises three main elements, namely behaviour, goal and task. The latter constitutes the simplest (atomic) operations that agents are able to actually perform including, possibly, the input and output parameters required to accomplish them. The second step consists of representing concrete agent behaviours either by relying on a template or by defining them from scratch. In both cases, concrete behaviours are modelled analogously to those of templates, where the models of outstanding features are replaced with actual characteristics. Behaviours drawn from shared templates are associated with them in order to depict the behaviour inheritance relationship. In the last step, actions performed by agents are described as direct consequences of some behaviours and are associated with the behaviours of the agent that performed them. To describe such an association, OASIS 2 introduces _plan executions_. Plan executions describe the actions performed by an agent, associating them with one of its behaviours. Associations are carried out by connecting the description of the performed action to the behaviour from which the action has been drawn: actions are hence described by suitable graphs that retrace the model of the agent's behaviour.
Plans can be additionally either submitted to agents as requests for performing some actions or they can be assigned by specific agents called _entruster agents_.
In OASIS 2, agent templates are defined according to the UML class diagram in Fig. 1. To consider how both abstract and concrete agents perform actions, the description of agents comprises three main elements, namely _behaviour_, _goal_, and _task_. Agent tasks, in their turn, describe atomic operations that agents perform, including possibly input and output parameters required to accomplish them. Those elements in OASIS 2 are introduced by way of the following
OWL classes:
* _Agent_: This class comprises all the individuals representing agents. Instances of such a class are connected with one or more instances of the class _Behaviour_ using the OWL object-property _hasBehaviour_.
* _Behaviour_: Behaviours can be seen as collectors comprising all the goals that an agent may achieve. Instances of _Behaviour_ are connected with one or more instances of the class _GoalDescription_ by means of the object-property _consistsOfGoalDescription_.
* _GoalDescription_: Goals represent containers embedding all the tasks that the agent can achieve. Instances of _GoalDescription_ comprised by a behaviour may also satisfy dependency relationships introduced by the object-property _dependsOn_. Goals are connected with the tasks that form them and are represented by instances of the class _TaskDescription_ through the object-property _consistsOfTaskDescription_.
* _TaskDescription_: This class describes atomic operations that agents perform. Atomic operations are the most simple actions that agents are able to execute and, hence, they represent what agents can do within their environment. Atomic operations may depend on other atomic operations when the object-property _dependsOn_ is specified. Atomic operations whose dependencies are not explicitly expressed are intended to be performed in any order.
The core of agent behaviour revolves around the description of atomic operations represented by instances of the class _TaskDescription_ that characterizes the mental state corresponding to commitments. In their turn, instances of the class _TaskDescription_ are related to the following five elements that identify the operation:
Figure 1: Diagram of agent templates in OASIS 2
* An instance of the class _TaskOperator_, characterizing the mental state corresponding to the action to be performed. Instances of _TaskOperator_ are connected either by means of the object-property _refersExactlyTo_ or _refersAsNewTo_ to instances of the class _Action_. The latter class describes physical actions represented by means of entity names in the form of infinitive verbs (e.g., _produce_, _sell_). Specifically, the object-property _refersExactlyTo_ is used to connect the task operator with a precise action having a specific IRI, whereas _refersAsNewTo_ is used to connect a task operator with an entity representing an action of which only a general abstract description is given (for example, an action for which only the type is known). In the latter case, the entity representing the action is also defined as an instance of _TemplateThing_: such instances are used to define entities that represent templates for the referred element and that describe the characteristics that such an element should satisfy. _TemplateThing_ is the class used to characterize all the individuals involved in the definition of behaviour templates and to distinguish them from the entities representing concrete behaviours, plans or actions, thus eliminating the need of having separate models for those aspects. In order to specify the classes of which the entity must be an instance, it is optionally possible to connect such an entity by means of the object-property _refersAsInstanceOf_ with the individual instances of the desired classes.
* Possibly, an instance of the class _TaskOperatorArgument_, connected using the object-property _hasTaskOperatorArgument_ and representing additional specifications or subordinate characteristics of task operators (e.g., _on_, _off_, _left_, _right_). Instances of _TaskOperatorArgument_ are referred to the operator argument by using either the object-property _refersAsNewTo_ or _refersExactlyTo_.
* An instance of the class _TaskObject_, connected by means of the object-property _hasTaskObject_ and representing the template of the object recipient of the action performed by the agent (e.g., _price_). Instances of _TaskObject_ are referred to the action recipient by specifying either the object-property _refersAsNewTo_ or _refersExactlyTo_.
* Input parameters and output parameters are introduced by instances of the classes _TaskInputParameter_ and _TaskOutputParameter_, respectively. Instances of _TaskDescription_ are related to instances of the classes _TaskInputParameter_ and _TaskOutputParameter_ by means of the object-properties _hasTaskInputParameter_ and _hasTaskOutputParameter_, respectively. Instances of _TaskInputParameter_ and of _TaskOutputParameter_ are referred to the parameter by specifying either the object-property _refersAsNewTo_ or _refersExactlyTo_. Moreover, the classes _TaskInputParameter_ and _TaskOutputParameter_ are also subclasses of _TaskParameter_.
Finally, in the case of agent behaviour templates, instances of _Agent_, _Behaviour_, _GoalDescription_, _TaskDescription_, _TaskOperator_, _TaskOperatorArgument_, _TaskObject_, _TaskInputParameter_, and _TaskOutputParameter_ are also instances of _TemplateThing_.
Fig. 2 illustrates the case study mentioned above where a lock on a resource can be obtained and released by agents. Specifically, Fig. 2 presents a template describing an abstract agent that is able to request a lock on a resource. The agent template comprises a single behaviour,
constituted by a single goal that in its turn comprises a single task (a minimal machine-readable sketch of this template is given after the list below). The task, which represents the ability to request a lock, provides four elements:
* _request_lock_task_operator_, representing the mental state associated with the behaviour's action (the task operator), which in its turn is associated with the individual _request_, the latter describing the capability of requesting something.
* _request_lock_task_operator_argument_, introducing an additional feature (the operator argument) associated with the action and represented by the individual _lock_. The argument describes the fact that the request action refers to a lock. The task operator and its argument together describe the capability of requesting a lock;
* _request_lock_task_object_template_, representing the recipient of the operation, which is related to an instance of the class _Lock_ by means of the object-property _refersAsNewTo_. Such an instance comprises all the features that the recipient of the adopting action should have: the concrete actions implementing the behaviour template for requesting a lock are supposed to effectively request a lock with the desired features;
* _request_lock_task_output_template_, representing the input of the operation, namely a resource on which the lock is requested.
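To make the pattern tangible, here is a minimal Python sketch with rdflib that asserts a few of the triples behind Fig. 2. The namespace IRIs and the property `hasTaskOperator` (assumed by analogy with `hasTaskObject`, since the text does not name the operator link) are illustrative assumptions, not the published OASIS 2 vocabulary.

```python
from rdflib import Graph, Namespace, RDF

OASIS = Namespace("http://example.org/oasis#")   # hypothetical IRI for OASIS 2
EX = Namespace("http://example.org/locks#")      # hypothetical IRI for the example

g = Graph()
# template individuals: each is typed both by its OASIS class and as TemplateThing
for ind, cls in [(EX.request_lock_task, OASIS.TaskDescription),
                 (EX.request_lock_task_operator, OASIS.TaskOperator),
                 (EX.request_lock_task_object_template, OASIS.TaskObject)]:
    g.add((ind, RDF.type, cls))
    g.add((ind, RDF.type, OASIS.TemplateThing))

# hasTaskOperator is assumed by analogy with hasTaskObject (not named in the text)
g.add((EX.request_lock_task, OASIS.hasTaskOperator, EX.request_lock_task_operator))
g.add((EX.request_lock_task, OASIS.hasTaskObject, EX.request_lock_task_object_template))
# the operator points to the concrete action "request"; the object to a new Lock
g.add((EX.request_lock_task_operator, OASIS.refersExactlyTo, EX.request))
g.add((EX.request_lock_task_object_template, OASIS.refersAsNewTo, EX.a_lock))
g.add((EX.a_lock, RDF.type, EX.Lock))

print(g.serialize(format="turtle"))
```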
In the second step, concrete agent behaviours are defined either by instantiating one or more templates or from scratch. In OASIS 2, the modelling pattern of concrete behaviours has a structure analogous to that of the behaviour templates illustrated above, with the difference that the individuals used to define a concrete behaviour, instead of being instances of the class _TemplateThing_, are instances of the class _BehaviourThing_. The latter class is devoted to describing all the mental states associated with concrete behaviours of real agents that induce actions.
Concrete behaviours may be connected with the template they are drawn from. In order to describe the fact that concrete agents inherit their behaviours from a commonly shared template, the instances related to the concrete behaviours are connected with the instances of the template through the sub-properties of the object-property _overloads_ as follows. The association is carried out by connecting the instances of the classes:
Figure 2: Example of an OASIS 2 agent template
* _Behaviour_, by means of _overloadsBehaviour_;
* _GoalDescription_, by means of _overloadsGoalDescription_;
* _TaskDescription_, by means of _overloadsTaskDescription_;
* _TaskObject_, by means of _overloadsTaskObject_;
* _TaskOperator_, by means of _overloadsTaskOperator_;
* _TaskInputParameter_, by means of _overloadsTaskInputParameter_;
* _TaskOutputParameter_, by means of the object-property _overloadsTaskOutputParameter_.
As the last step, agent commitments devised from behaviours are introduced to describe agent actions. In OASIS 2, commitments are represented by adopting the same pattern presented for abstract behaviours, with the difference that instances of the class _TemplateThing_ are instead modelled as instances of the class _ExecutionThing_, and the agent responsible for the execution of the action is related to the plan representing the commitment by means of the object-property _performsPlanExecution_, subproperty of _performs_. The class _ExecutionThing_ is introduced to characterize all the entities involved in the definition of concrete and already performed actions and to distinguish them from the ones introduced for templates, behaviours and plans.
In order to relate agent commitments with the behaviour from which they are drawn, subproperties of the object-property _drawnBy_ are introduced. Specifically, _planExecutionDrawnBy_ connects the instance of _Behaviour_ of the agent action to its analogue of the agent behaviour; much in the same way, _goalExecutionDrawnBy_ connects the instance of the class _GoalDescription_ of the commitment with its analogue, while _taskExecutionDrawnBy_, _taskObjectDrawnBy_, _taskOperatorDrawnBy_, _taskInputParameterDrawnBy_, and _taskOutputParameterDrawnBy_ are introduced for _TaskDescription_, _TaskObject_, _TaskOperator_, _TaskInputParameter_, and _TaskOutputParameter_, respectively.
Usually, agents proposing plans identify the behaviours responsible for their realization beforehand, in such a way as to completely describe and trace how agent intentions are realized. In this case, the entities representing the submitted plan are related to the entities describing the responsible behaviour by means of suitable subproperties of the object-property _submittedTo_, relating instances of _PlanningThing_ with instances of _BehaviourThing_ as follows: a) _planDescriptionSubmittedTo_, for instances of _Behaviour_; b) _goalDescriptionSubmittedTo_, for instances of _GoalDescription_; c) _taskDescriptionSubmittedTo_, for instances of _TaskDescription_; d) _taskObjectSubmittedTo_, for instances of _TaskObject_; e) _taskOperatorSubmittedTo_, for instances of _TaskOperator_; f) _taskInputParameterSubmittedTo_, for instances of _TaskInputParameter_; g) _taskOutputParameterSubmittedTo_, for instances of _TaskOutputParameter_.
In a similar way, plans are also related to the agent's actions realizing them. For this purpose, the subproperties of the object-property _hasExecution_ are introduced, namely _hasPlanExecution_, _hasGoalExecution_, _hasTaskExecution_, _hasTaskObjectExecution_, _hasTaskOperatorExecution_, _hasTaskInputParameterExecution_, and _hasTaskOutputParameterExecution_. Analogously, actions are connected with the behaviour responsible for their execution by means of suitable subproperties of the object-property _executionDrawnBy_.
## 4 Processes and procedures in OASIS 2
Since its first version, OASIS has been capable of describing how agent activities are carried out by suitably combining behaviours, plans and actions. Plans, in particular, can be used to describe in detail how inputs are processed to be turned into outputs, but they are not sufficient to describe general complex processes. Plans require the presence of a committer agent beforehand and can be applied only once. This is because the mental state related to the desire or wish to perform actions does not abstract from the committer agent. Plans can be leveraged to represent single applications of procedures, but they must be combined properly. Then, plans can be associated with the behaviours responsible for their executions, thus carrying out the actions required to accomplish the procedures. The approach is illustrated in Fig. 3. Following the ISO definition, processes defined by agents are constituted by procedures. In the behaviouristic vision, the procedure states are associated with plans that are executed, thus leading to actions. Actions derive from agent behaviours that in their turn can rely on specific templates. Agents performing actions are provided with behaviours either intrinsically or through a played role. Finally, actions can lead to events either incidentally or as foreseen by procedures. In the first case, events are conceived as noteworthy and extraordinary happenings that have not been previously planned; in the second case, as recurrent situations.
For example, agents may play the role of resource manager; hence they are able to request and release locks on resources according to that role. A process describing the steps required to request and release locks is formalized, so that agents can tackle a plan for each step. Plans are executed so that the resource is locked, modified, and released. Actions are performed thanks to the agent behaviours provided by the resource manager's role. During the execution of one of those actions, the system may exceptionally send a message to the committer agent. We will partially see this scenario together with the presentation of the model.
In light of the above considerations, OASIS 2 introduces the following novel OWL classes:
Figure 3: Process and procedure in OASIS 2
* _Process_, which encompasses the procedures describing how activities are carried out and;
* _Procedure_, the subclass of the class _Activity_, which introduces the set of plans required to accomplish the procedure itself and to be realized through actions. In OASIS 2 procedures are constituted by one or more _ProcedureState_, each one connected to a specific plan describing how the activity is carried out. Specifically, we identify two types of _ProcedureState_: a) _TerminatingProcedureState_, which includes _InitialProcedureState_ and _FinalProcedureState_, and b) _NonTerminatingProcedureState_.
Procedures are constituted by _procedure states_ that describe single steps of the procedure. _Initial procedure states_ describe the beginning of the procedure, while _final procedure states_ its termination. Moreover, the initial procedure state coincides with the final procedure state in the case of a single-step procedure. Finally, _non-terminating procedure states_ describe the intermediate steps to be performed, including all the steps between the initial state and the final state. The schema for processes and procedures is illustrated in Fig. 4. The subproperties of the object-property _procedureConsistsOfProcedureState_ are used to suitably connect a procedure with its procedure states. Specifically, the object-properties _procedureConsistsOfInitialProcedureState_ and _procedureConsistsOfFinalProcedureState_ (both subproperty of _procedureConsistsOfTerminatingProcedureState_) connect the procedure with its initial and final procedure state, respectively, while the object-property _procedureConsistsOfNonTerminatingProcedureState_ connects the procedure with its non-terminating states. The initial procedure state is connected with the subsequent non-terminating procedure states by means of the object-property _hasNextNonTerminatingProcedureState_. The latter property is also used to connect non-terminating procedure states with their subsequent non-terminating procedure states. Whenever the next procedure state is the final state, the object-property _hasFinalProcedureState_ is adopted in its place. Both _hasNextNonTerminatingProcedureState_ and _hasFinalProcedureState_ are defined as subproperty of the object-property _hasNext_.
Finally, since the intended meaning of processes and related procedures is to describe how activities should be carried out, the instances introduced so far are also instances of the class _PlanningThing_. To complete the representation of processes, it is now sufficient to connect an instance of the class _Process_ with the instances of the class _Procedure_ that model the process activities. In case a process is constituted by more than one procedure, these can be ordered by connecting them through the object-property _hasNextProcedure_.
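To make the schema concrete, the following is a minimal Python sketch using rdflib that assembles the lock-management procedure of the case study with the classes and object-properties introduced above; the namespace URI and the individual names are placeholders rather than part of the official OASIS 2 vocabulary.

```python
# Minimal sketch: a three-step procedure wired together with the OASIS 2
# classes and object-properties named in the text (namespace is a placeholder).
from rdflib import Graph, Namespace, RDF

OASIS = Namespace("http://example.org/oasis2#")
g = Graph()

proc, s0, s1, s2 = (OASIS[n] for n in
                    ("lockProcedure", "requestLock", "modifyResource", "releaseLock"))

g.add((proc, RDF.type, OASIS.Procedure))
g.add((s0, RDF.type, OASIS.InitialProcedureState))
g.add((s1, RDF.type, OASIS.NonTerminatingProcedureState))
g.add((s2, RDF.type, OASIS.FinalProcedureState))

# Connect the procedure to its states and order the states.
g.add((proc, OASIS.procedureConsistsOfInitialProcedureState, s0))
g.add((proc, OASIS.procedureConsistsOfNonTerminatingProcedureState, s1))
g.add((proc, OASIS.procedureConsistsOfFinalProcedureState, s2))
g.add((s0, OASIS.hasNextNonTerminatingProcedureState, s1))
g.add((s1, OASIS.hasFinalProcedureState, s2))

print(g.serialize(format="turtle"))
```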
To describe how process activities should be carried out, we introduce a behaviour as in Fig. 1, where the instances of _TemplateThing_ are instead defined as instances of _PlanningThing_. The choice is motivated by the fact that templates describe abstract behaviours that are instantiated to introduce concrete behaviours, whereas plans describe actions that agents wish to perform or see accomplished: this is exactly the case of processes, where a set of actions must be tackled in order to achieve the desired end. Additionally, plans allow one to describe in detail what actions are required to accomplish the activity associated with the procedure state, the input parameters, and the expected output, without the need of selecting an agent responsible for those actions beforehand. To associate a procedure state with the planned behaviour, OASIS 2 introduces the object-property _isDescribedBy_.
The process concerning the case study considered at the beginning of the section is depicted in Fig. 5. The process describes the modification of a resource which requires the acquisition
and release of an exclusive lock before and after the modification, respectively.
The process consists of a single procedure that, in its turn, consists of three distinct states, an initial state, a non-terminating state, and a final state. The initial state is associated with a behaviour describing how to request a lock on a resource. The behaviour follows the template in Fig. 2, where instances of the class _TemplateThing_ are replaced with instances of the class _PlanningThing_ as stated in Section 3. Analogously, the non-terminating state is associated with a behaviour describing how to modify the resource, while the final state is associated with the behaviour describing how to release the lock.
Once the process is fully described, it is possible to model its application. To do so, a model retracing the structure of a process (see Fig. 4) is introduced, where instances of the class _PlanningThing_ are instead defined as instances of the class _ExecutionThing_. In a similar way to _TemplateThing_ and _PlanningThing_, which are introduced to represent descriptions of abstract behaviours and planned actions, respectively, _ExecutionThing_ is conceived for those mental states that represent actions already committed. In the case of processes, the class provides a means for characterizing the realization of the process, the achievement of the related activities, and the commitment of the planned actions. Moreover, to uniquely relate the elements of the process with the ones of its realization, two subproperties of the object-property _drawnBy_ are introduced. Specifically, _processDrawnBy_ and _procedureDrawnBy_ are introduced to connect the instances of _Process_ and _Procedure_, respectively. Hence, it is clear how to trace back to the agent behaviour responsible for the execution of a process, thus unambiguously identifying the actors and actions of arbitrarily complex environments.

Figure 4: The OASIS 2 model for processes and procedures

Figure 5: Example of process in OASIS 2
For instance, the process of the example in Fig. 5 can be realized as partially depicted in Fig. 6, which illustrates the modification process on a resource. The schema retraces the one introduced for the process, where instances of the class _PlanningThing_ are suitably replaced by instances of the class _ExecutionThing_. Specifically, the process realization introduces three behaviour executions, one for each plan of the process, namely: a) _requestLockBehaviourExec_, representing the execution of the _requestLockPlan_ plan; b) _modifyResourceBehaviourExec_, representing the execution of the _modifyResourcePlan_ plan; c) _releaseLockBehaviourExec_, representing the execution of the _releaseLockPlan_ plan.
Moreover, each behaviour of the process is connected with its analogue in the process realization by means of the object-property _hasPlanExecution_. Finally, each plan execution can be connected with the responsible behaviour by means of the object-property _planExecutionDrawnBy_ (a subproperty of _executionDrawnBy_, see Section 3).
Figure 6: Example of process realization in OASIS 2
As stated above, in OASIS 2 specific events can be associated with procedure states. Events are semantically represented by instances of the class _Event_, while the procedure state is related to the triggered event by means of the object-property _triggersEvent_. The phenomenon associated with the event is introduced by means of OASIS 2 actions, which are in fact sufficiently general and powerful to cover any type of phenomenon springing from an event, fully within the vision of a behaviouristic approach.
In our case study, when the system is triggered to send a message to selected users upon the modification of a resource, for example because an error occurred, an action such as the one in Fig. 7 is introduced. Other information, such as the type of event, its duration, or the time window in which it happens, can be additionally specified.
Finally, agents may perform procedures exclusively according to the specific role that they play. Roles permit separating the concerns of behaviours intrinsically owned by agents from the ones temporarily at their disposal to execute a process. This implies that an agent is able to perform an action only when the corresponding role is played. Hence, the behaviour responsible for performing those actions is not strictly associated with the agent but with the role. To represent the scenario described above and to model the playing of roles, OASIS 2 introduces roles as depicted in Fig. 8.
In OASIS 2, roles, introduced as instances of the class _Role_, provide agents with behaviours by means of the object-property _providesBehaviour_. In its turn, a role is associated with the agent playing it by means of the object-property _playRole_. Whenever agents cease to play a role, the associated behaviours are no longer available. The end of a role is specified by making the instance of _Role_ deprecated, hence an instance of the class _DeprecatedThing_. This implies that actions are associated with role-oriented behaviours until the corresponding role is deprecated. Concerning the case study, an agent playing the role of requesting a lock on a resource is depicted in Fig. 9. The role is introduced as a fresh instance of the class _Role_, specifically provided for the agent's representational needs. Role types, introduced as instances of the class _RoleType_, are defined according to the domain to be described, hence they are outside the scope of OASIS 2: in our case study, a specific role type, called _resource_consumer_role_, is introduced to describe the roles dedicated to consuming resources.

Figure 7: Example of event in OASIS 2

Figure 8: Roles in OASIS 2
## 5 Conclusions
This paper introduced an extension of the OASIS 2 ontology, which therefore inherits the behaviouristic approach to semantically representing agents and their commitments through the formalization of their mental states, namely _behaviour_, _goal_, and _task_. In particular, the proposed extension of OASIS 2 deals with the modelling of processes and procedures, thereby providing a general yet practical way of representing processes and procedures to be tackled by agents through their behaviours. Now that such a foundational ontology is available, our semantic representation capabilities are substantially enhanced. As a consequence, all application scenarios where specific instructions must be followed fall into scope.
For example, how agents reach a consensus, together with the modelling of their behaviours, is one of the future challenges. An application of OASIS 2 is foreseen for the characterisation of security directives, aiming at a structural solution for translating security documents to a mathematically-driven world. The approach targets the NIS 2 directive, but other similar directives can be addressed. In addition, we intend to apply OASIS 2 to represent security constraints for cybersecurity threat contexts, in particular for the purpose of semantically representing authentication and confidentiality properties for agents.
Also, we shall consider how to integrate OASIS 2 with PDDL and with the main agent frameworks such as JADE [27], all with the aim of automatically generating agents and artefacts. Similarly, an integration with _CArtAgO_ [28], a framework for building shared computational worlds, appears to be valuable. The horizons of the semantic representation of agents have substantially expanded but retain vast potential for yet more expansion in the near future.
Figure 9: Example of playing roles in OASIS 2
## Acknowledgments
Gianpietro Castiglione acknowledges a studentship by Intrapresa S.r.l. and the Italian "Ministero dell'Università e della Ricerca" (D.M. n. 352/2022).
|
2309.07182 | Sleep Stage Classification Using a Pre-trained Deep Learning Model | One of the common human diseases is sleep disorders. The classification of
sleep stages plays a fundamental role in diagnosing sleep disorders, monitoring
treatment effectiveness, and understanding the relationship between sleep
stages and various health conditions. A precise and efficient classification of
these stages can significantly enhance our understanding of sleep-related
phenomena and ultimately lead to improved health outcomes and disease
treatment.
Models proposed by others are often time-consuming and lack sufficient accuracy,
especially in stage N1. The main objective of this research is to present a
machine-learning model called "EEGMobile". This model utilizes pre-trained
models and learns from electroencephalogram (EEG) spectrograms of brain
signals. The model achieved an accuracy of 86.97% on a publicly available
dataset named "Sleep-EDF20", outperforming other models proposed by different
researchers. Moreover, it recorded an accuracy of 56.4% in stage N1, which is
better than other models. These findings demonstrate that this model has the
potential to achieve better results for the treatment of this disease. | Hassan Ardeshir, Mohammad Araghi | 2023-09-12T23:02:19Z | http://arxiv.org/abs/2309.07182v2 | # Sleep Stage Classification Using a Pre-trained Deep Learning Model
###### Abstract
One of the common human diseases is sleep disorders. The classification of sleep stages plays a fundamental role in diagnosing sleep disorders, monitoring treatment effectiveness, and understanding the relationship between sleep stages and various health conditions. A precise and efficient classification of these stages can significantly enhance our understanding of sleep-related phenomena and ultimately lead to improved health outcomes and disease treatment.
Models proposed by others are often time-consuming and lack sufficient accuracy, especially in stage N1. The main objective of this research is to present a machine-learning model called "EEGMobile". This model utilizes pre-trained models and learns from electroencephalogram (EEG) spectrograms of brain signals. The model achieved an accuracy of 86.97% on a publicly available dataset named "Sleep-EDF20", outperforming other models proposed by different researchers. Moreover, it recorded an accuracy of 56.4% in stage N1, which is better than other models. These findings demonstrate that this model has the potential to achieve better results for the treatment of this disease.
sleep stage classification, EEG signals, deep learning, pre-trained model, signal spectrogram.
## 1 Introduction
### Sleep
Sleep is a reversible state in which the eyes are closed and most parts of the body are inactive, allowing the individual to become unconscious and providing an opportunity for the body to recover energy and alleviate fatigue and anxiety. Sleep is a fundamental human function, comprising about one-third of our lives.
Research indicates that sleep is vital for strengthening learning and memory, as the brain forms and reinforces new learning pathways during sleep. Additionally, adequate sleep enhances problem-solving abilities and improves creativity. Dreams during sleep also play a significant role in memory consolidation and brain processing [1].
The behavior and decisions of individuals during the day are dependent on the duration and quality of their sleep. Chronic sleep deprivation can lead to various mental problems, such as cognitive disorders, stress, and depression, significantly impacting a person's life. Moreover, insufficient sleep can have broad consequences on physical health, including increased risks of obesity, diabetes, high blood pressure, cancer, and cardiovascular diseases [2].
Sleep consists of distinct stages that individuals go through during a night's rest. These stages are characterized by specific patterns of brain activity, eye movements, and muscle activity. The two main categories of sleep stages are Rapid Eye Movement (REM) and Non-Rapid Eye Movement (NREM), with NREM further divided into N1, N2, and N3 stages.
In general, each night of human sleep includes multiple sleep cycles, each containing these stages. Some sleep cycles may not include all the stages; for example, a cycle may consist only of the N1 and N2 stages [3].
### Eeg
Brain signals, also known as neural signals or brainwaves, are electrical activities generated by brain neurons. These signals are produced by the communication between different regions of the brain. Measuring brain signals is crucial for understanding brain function and cognition, and for investigating various neurological conditions. The brain, as the controller of the body and of the nervous system in particular, is the most essential part of the human body, and studying it can aid in understanding and treating various physical and mental illnesses.
The presence of electrical currents and waves in the brain was discovered in 1875 by the English physician Richard Caton, and over the past century, significant advancements have been made in studying these brainwaves. One of the primary techniques used to measure brain signals is electroencephalography (EEG). EEG is a non-invasive medical imaging technique that records the brain's electrical activity from the scalp using metal electrodes and conductive gels [4].
### EEG & Sleep Stages
As mentioned earlier, sleep disorder treatment centers identify the root cause of sleep problems by examining brain function and initiating appropriate measures for resolution. One of the methods used to assess brain function is EEG. Doctors record an individual's brain activity using EEG during a sleep cycle and analyze the stages and cycles of sleep. Based on the differences observed compared to the normal state, they provide their diagnosis and prescribe medications or necessary interventions for patients.
Therefore, the most crucial aspect of treating sleep disorders lies in accurately diagnosing the sleep stages during an EEG test.
### Spectrograms
Spectrography has significant applications in sound analysis. In essence, an audio signal is represented as a waveform that indicates changes in amplitude over time. However, a spectrogram illustrates the changes in frequency of the waveform over time, with amplitude represented as the third dimension using color. Thus, the vertical axis represents frequency in Hertz, and the horizontal axis represents time.
Not all spectrograms are created equal; an algorithm called the Fast Fourier Transform (FFT) is commonly used to compute them. In the FFT, a parameter called the size (the number of data points involved) is variable, leading to different outcomes. Generally, larger FFT sizes provide finer frequency detail, known as frequency resolution, while smaller FFT sizes provide finer time detail, known as temporal resolution.
For instance, if you want to identify microphone noise, a higher FFT size is helpful. On the other hand, if you want to detect a high-frequency event, you should opt for a smaller FFT size. Hence, spectrograms can be used to eliminate noise or unwanted sounds.
Indeed, as mentioned earlier, the output of a spectrogram is a color image. In recent years, models working on audio, particularly systems converting speech to text, first transform the audio signal into its spectrogram representation. They then operate on the spectrogram using methods like CNNs or pre-trained models that excel at image processing [5].
## 2 Transfer Learning
### Pre-trained models
Image processing plays a vital role within the broader scope of image analysis and computer vision systems. The outcomes of image processing exert significant influence over subsequent high-level tasks, facilitating the recognition and understanding of image data. Deep learning has emerged as a potent tool for tackling low-level vision tasks, such as image super-resolution, inpainting, deraining, and colorization, in recent times. While these image-processing tasks share commonalities, there has been limited exploration of pretraining models across these domains.
The application of pretraining holds promise for addressing two key challenges in image processing. First, task-specific datasets can be constrained, especially in scenarios involving sensitive or costly data. For instance, this is evident in medical image processing, which includes tasks ranging from the segmentation of specific anatomical structures and the detection of lesions to the differentiation between pathological and healthy tissue in various organs [10]. Another example is the use of satellite images for assessing the quality of resulting images in urban areas [11].
Additionally, factors like varying camera parameters, lighting conditions, and weather can introduce significant variations in training data distributions. Second, the specific image processing tasks required are often unknown until the test image is presented. Consequently, a suite of image processing modules must be prepared, each with distinct objectives but potential for shared underlying operations.
The concept of pretraining has already gained traction in natural language processing and computer vision. For instance, many object detection models utilize pre-trained backbones from ImageNet classification [14].
A wealth of well-established networks can be found online, with AlexNet setting the foundation for convolutional neural networks [16], VGGNet known for its uniform depth [17], and ResNet's groundbreaking skip connections that have transformed deep learning, notably improving computer vision tasks [18].
In the realm of natural language processing, Transformer-based models have revolutionized tasks like translation and question-answering. Their success hinges on pretraining these models on vast text corpora, followed by fine-tuning them on task-specific datasets.
Efforts have been made to extend the triumph of Transformers into the domain of computer vision, marking an exciting intersection of these two fields.
### Medical Application
Medical imaging plays an important role in medicine and is a powerful tool for diagnosis. With the development of computer technology such as machine learning, computer-aided diagnosis has become a popular and promising direction. Note that medical images are generated by special medical equipment, and their labeling often relies on experienced doctors. Therefore, in many cases, it is expensive and hard to collect sufficient training data. Transfer learning can be utilized for medical imaging analysis. A commonly used transfer learning approach is to pre-train a neural network on a source domain (e.g., ImageNet, a large-scale ontology of images built upon the backbone of the WordNet structure and encompassing a total of 3.2 million images, which is significantly more accurate than other image datasets and is useful for three simple applications: object recognition, image classification, and automatic object clustering [9]) and then fine-tune it based on instances from the target domain.
### MobileNetV3
MobileNetV3 encompasses two distinct models, MobileNetV3-Large and MobileNetV3-Small, designed to cater to high and low resource usage scenarios, respectively. These models are the result of platform-aware Neural Architecture Search (NAS) and NetAdapt techniques, which refine and optimize network architectures for improved performance [24].
In the realm of computer vision, recent advancements have introduced convolutional neural network architectures that prioritize both speed and size efficiency. These pivotal computer vision architectures, including NASNet [25], MobileNets [26, 27], EfficientNet [28], MnasNet [29], and ShuffleNets [30], are acclaimed for their swift training processes. NASNet automates architecture search for tailored image solutions, while MobileNets prioritize mobility and efficiency, excelling in real-time image analysis. EfficientNet balances size and depth for versatile image applications, MnasNet combines architecture search with mobile-friendly design, and ShuffleNets reduce computation overhead, ideal for real-time video processing and edge computing, collectively advancing diverse computer vision domains [31, 34].
These networks implement depthwise convolutions, a technique in which a separate convolutional kernel is applied to each input channel, enhancing the extraction of spatial information while optimizing model efficiency and lowering computational costs. It is important to note that learning the size of these depthwise convolutional kernels can pose challenges, potentially increasing the intricacy of training.
One of the noteworthy recent contributions in this domain is the MobileNetV3 architecture, which has demonstrated significant advancements in computer vision tasks [35].
MobileNetV3, an evolution of MobileNetV1 and MobileNetV2,
represents a significant leap in mobile-friendly neural network architectures. Howard et al. introduced this version using Network Architecture Search (NAS) with the NetAdapt algorithm, aiming to optimize MobileNet for low-resource hardware platforms in terms of size, performance, and latency. The architecture enhancements in MobileNetV3, depicted in Figure 1, draw inspiration from its predecessors.
A notable addition to MobileNetV3 is a novel nonlinearity known as "hard swish" (h-swish), a variant of the swish function in which the sigmoid is replaced with a cheap piecewise-linear approximation [36]. The h-swish nonlinearity reduces the computational cost of the activation, thereby reducing model complexity and latency.
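For reference, h-swish has the simple closed form h-swish(x) = x · ReLU6(x + 3) / 6, which the following minimal TensorFlow sketch implements:

```python
# Minimal sketch of the h-swish nonlinearity from MobileNetV3.
import tensorflow as tf

def h_swish(x):
    return x * tf.nn.relu6(x + 3.0) / 6.0

print(h_swish(tf.constant([-4.0, 0.0, 4.0])).numpy())  # values: 0.0, 0.0, 4.0
```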
Within the MobileNetV3 block, a core component emerges--the inverted residual block. This block combines a depthwise separable convolution block with a squeeze-and-excitation block [29], drawing inspiration from bottleneck blocks [37]. The inverted residual connection links input and output features within the same channels, enhancing feature representations while conserving memory.
The depthwise separable convolutional operation comprises a depthwise convolutional kernel applied to each channel, followed by a 1 x 1 pointwise convolutional kernel with batch normalization (BN) and ReLU or h-swish activation functions. This transformation replaces the traditional convolutional block and reduces the parameter count and computational cost.
To further optimize training, MobileNetV3 incorporates a squeeze-and-excitation (SE) block, which selectively emphasizes relevant features in each channel [38].
## 3 Materials and Methods
### Dataset
In this section, we describe the Sleep-EDF dataset relevant to this work, the proposed method, and the methods proposed by others, all of which are trained on this dataset.
This dataset is accessible through the PhysioNet website, and the article related to how this dataset was collected has also been reviewed.
This dataset consists of 197 recorded files of Polysomnographic (PSG) data related to the entire night's sleep of both healthy and diseased individuals (those with sleep disorders). Each PSG includes EEG signals in two channels, Pz-Oz and Fpz-Cz, as well as EOG signals from individuals' brains, with a recording frequency of 100 Hz. Additionally, the sleep stage is determined every 30 seconds. The dataset is labeled with "ST" for patients with sleep disorders and "SC" for healthy individuals. It should be noted that a maximum of two files have been recorded for each individual, corresponding to two different nights.
In this dataset, an alternative naming convention for sleep stages has been used, consisting of a cycle of Wake, S1, S2, S3, S4, and REM. Essentially, a 6-stage cycle is considered, and in this new classification, S3 and S4 correspond to N3. Therefore, in most other works, including the method presented in this report, the naming convention is initially changed as follows:
\[S1\to N1,\quad S2\to N2,\quad S3,S4\to N3 \tag{1}\]
In works that have been done to solve this problem, three subsets of this dataset are considered:
\(\ast\)**Sleep-EDF8:** This subset, published by the PhysioNet website in 2013, includes only 8 recorded files related to 4 healthy individuals and 4 patients. As a result, the number of data samples (30-second segments) is 15188.
\(\ast\)**Sleep-EDF20:** In this subset, 39 files related to 20 healthy individuals (initial 20 healthy individuals of the original dataset) are selected, resulting in a total of 42308 data samples.
\(\ast\)**Sleep-EDF78:** In this subset, all 153 files related to 78 healthy individuals (all healthy individuals from the original dataset) are considered, resulting in a total of 195479 data samples.
It is evident that in all subsets the number of samples per stage is not balanced, which can affect the training outcome, causing the designed model to predict the stage with more samples more frequently.
Spectrograms can reflect the activity of a specific frequency. When the EEG signal is stronger, the color of its corresponding spectrogram is brighter (yellow), and the weaker the signal is, the darker the color (blue) [6].
Figure 1: The structure of MobileNetV3 blocks and components [38].
Figure 2: EEG signal with a frequency of 100 Hz and corresponding spectrograms
With the improved visualization of frequency in Figure 2, we can now enhance our ability to define sleep stages using non-linear measures:
\(\ast\)**Wake:** During the wakeful stage, you are fully conscious and alert. Your eyes are open, and your brain activity, as measured by EEG, is characterized by rapid fluctuations. In this stage, there is prominent beta activity, which is characterized by a frequency range of 13-26 Hz and a low voltage of 10-30 \(\mu\)V. Additionally, there may be some alpha activity, ranging from 8-12 Hz, with a higher voltage of 20-40 \(\mu\)V. This is the stage where you are actively engaged with the world around you, and your thoughts are typically alert and clear.
\(\ast\)**N1 (Drowsiness):** As you transition from wakefulness to sleep, you enter the drowsiness stage. During this phase, your eye movements may slow down, and you may experience slow movements of eye-rolling. Notably, the alpha waves, which were present during wakefulness, start to disappear. In their place, theta waves emerge, typically in the range of 4-7 Hz. These theta waves are indicative of a transitional state between wakefulness and light sleep. You may start to feel more relaxed, and your thoughts might become less coherent as you drift towards sleep.
\(\ast\)**N2 (Light Sleep):** In the stage of light sleep, your eye movements cease, and you become more detached from your surroundings. This stage is characterized by distinctive patterns on the EEG. Bursts of brain activity are visible, and you may see the emergence of sleep spindles, which are short bursts of oscillatory brain waves in the range of 11-15 Hz, as well as K-complexes. These features are superimposed on a background of theta waves. Light sleep serves as a transitional phase, where your body and mind begin to relax further in preparation for deeper sleep.
\(\ast\)**N3 (Deep Sleep):** Deep sleep is a crucial phase for physical and mental restoration. Delta waves make their appearance slowly in this stage, and they are characterized by a low frequency of 1-3 Hz and a high EEG amplitude exceeding 75 \(\mu\)V. Sleep spindles and K-complexes may still be present but are less prominent compared to earlier stages. Deep sleep is vital for physical healing, immune function, and memory consolidation. It is often challenging to wake someone from a deep sleep, and if you are awakened, you may feel disoriented initially.
\(\ast\)**REM Sleep:** REM sleep is a unique stage characterized by rapid eye movements, as the name suggests. During REM sleep, your muscles are temporarily paralyzed to prevent you from acting out your dreams. The EEG activity during REM sleep is mixed in frequency, and it is often associated with low voltage. Occasional bursts of sawtooth waves may appear on the EEG. This stage is where most vivid dreaming occurs, and brain activity resembles wakefulness in some ways, despite being associated with muscle atonia. REM sleep is vital for emotional processing, memory consolidation, and overall cognitive function [15].
### Method
#### 3.2.1 Overview
In alignment with the study conducted by [6], we focus exclusively on the _Fpz-Cz_ channel of the _EEG_ signals in the _Sleep-EDF_ dataset. These samples are transformed into spectrograms, after which we construct a novel model built on the pre-existing _MobileNetV3Large_ architecture. We then fine-tune specific layers of this model, resulting in _EEGMobile_. This newly devised model is trained on the generated spectrograms. Given the prior training of the _MobileNetV3Large_ component on images and the inherent resemblance between spectrograms and images, we attain promising results in this domain, which are analyzed in detail in the following sections.
#### 3.2.2 Preprocess
We store the spectrograms of the data samples under examination, and to calculate these spectrograms, we employ the scipy library in Python.
Specifically, the spectrogram function (Figure 3) of the aforementioned library is configured with five input parameters, as documented in [7] (a minimal usage sketch follows the list):

* **x** The input signal, represented as a time series of measured values.
* **fs** The recording (sampling) frequency of the input time series which, as mentioned earlier, should be set to 100 for our dataset.
* **nperseg** The length of each segment, set to 30 as each signal is 30 seconds long.
* **noverlap** The number of points that overlap between segments; this is discussed with an example in the spectrogram section, and it is set to 16. However, changing this value might yield better results, as it significantly affects the visual characteristics.
* **nfft** The length of the FFT used, set to 1024. It could have been lower, but in our case a smaller value did not yield good results for the model we used: the resulting image did not resemble an image but merely separate colored points.
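A minimal sketch of this step, applying the parameter values above to a placeholder epoch (the random signal merely stands in for a real 30-second Fpz-Cz segment):

```python
# Minimal sketch: one 30-s EEG epoch -> saved spectrogram image.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

fs = 100                            # Sleep-EDF sampling frequency (Hz)
epoch = np.random.randn(30 * fs)    # placeholder for a real Fpz-Cz epoch

f, t, Sxx = spectrogram(epoch, fs=fs, nperseg=30, noverlap=16, nfft=1024)

fig, ax = plt.subplots()
ax.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
ax.set_axis_off()                   # keep only the image content
fig.savefig("epoch_spectrogram.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```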
It is noteworthy that, in contrast to the referenced study [6], there is no need to crop the generated spectrogram images: the code automatically eliminates graph-related elements when saving the image, resulting in an output image of 497 by 369 pixels.
Furthermore, given the scale of processing 195,479 images and the fact that this process is I/O-bound, the read-and-write operations to the hard drive can substantially impede program performance. To mitigate this issue, we leverage Python's ThreadPoolExecutor to handle these tasks concurrently for multiple images, resulting in an eight-fold acceleration compared to the non-concurrent approach.
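A minimal, self-contained sketch of this parallelization, in which make_epochs and the saving logic are placeholders for the real spectrogram pipeline:

```python
# Minimal sketch: saving many epochs concurrently with a thread pool.
import os
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def make_epochs(n, fs=100, seconds=30):
    return [np.random.randn(seconds * fs) for _ in range(n)]

def save_epoch(job):
    idx, epoch = job
    # In the real pipeline this computes and writes the spectrogram image
    # (see the previous sketch); here we simply persist the raw epoch.
    np.save(os.path.join("spectrograms", f"{idx}.npy"), epoch)

os.makedirs("spectrograms", exist_ok=True)
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(save_epoch, enumerate(make_epochs(64))))
```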
To eliminate the need for generating spectrograms from signals during subsequent experiments and to have consistent inputs, these images are stored.
Additionally, when reading the images back, since we plan to use 20-fold cross-validation, we group the samples related to a single subject into one fold. For example, in the Sleep-EDF20 dataset, we ultimately have 20 folds, together containing 52609 data points.
Furthermore, because the constructed network performs better on images of size 224 x 224, the images are resized as they are read using the resize command from the cv2 library in Python [8].
Figure 3: Converting a signal to its corresponding image
#### 3.2.3 EEGMobile
To construct this artificial network, the Keras library has been employed. We now provide a detailed description of the constructed network.
In this network, we begin with _MobileNetV3Large_, an artificial network pre-trained on images, which we introduced earlier. The output of this network is a 4-dimensional tensor.
Following that, we have the fine-tuning section, which allows us to adapt the output of the pre-trained network to our problem; this section requires further training and modification. Here, an initial _GlobalAveragePooling2D_ layer transforms the aforementioned 4-dimensional output into a 2-dimensional one by averaging over the spatial dimensions.
Subsequently, a _Dense_ layer with 224 units and a _relu_ activation function is placed, serving to consolidate the previous information.
Further down sits a _Dense_ layer with 5 units and a _softmax_ activation function. These units correspond to the classes present in the dataset: _(Awake, N1, N2, N3, REM)_. The _softmax_ function ensures that the output represents the class with the highest probability.
Now, we can configure our neural network in a way that some of its layers won't undergo training. This reduces the number of trainable parameters and consequently increases the training speed for each iteration. For instance, we can make the initial layers of the pre-trained model untrainable. This significantly speeds up the training process as a whole. However, according to the experiments conducted, which we will elaborate on later, it's better to allow these layers to be trained at least for a few epochs initially.
Consequently, we compile the aforementioned network using the _adam_ optimizer and the _sparse categorical crossentropy_ loss function, which is suitable for models with multiple classes. Subsequently, we can proceed to train this network.
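Putting these pieces together, a minimal Keras sketch of the architecture described above (the input size and class count follow the text; everything else uses standard defaults):

```python
# Minimal sketch of the EEGMobile architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.MobileNetV3Large(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

model = models.Sequential([
    backbone,                               # pre-trained feature extractor
    layers.GlobalAveragePooling2D(),        # 4-D feature maps -> 2-D vector
    layers.Dense(224, activation="relu"),   # fine-tuning layer
    layers.Dense(5, activation="softmax"),  # Awake, N1, N2, N3, REM
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```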
## 4 Training Experiments
### Details
In this experiment, only the _Sleep-EDF20_ dataset is considered. Therefore, as in other related articles on this topic, we utilize _20-fold cross-validation_. In each training stage, the data of one individual is designated as the validation set, and a new model is constructed and trained using the data from the remaining 19 individuals. The model's performance is then evaluated on the validation set, and this process is repeated for all 20 individuals. Finally, the results are averaged to provide the final output.
It's important to mention that the TensorFlow library is employed to process data concurrently with a graphics card, significantly enhancing the speed of model training.
As a result, with a batch size of 16 and only 20 epochs, we were able to effectively train our neural network. However, it's worth noting that the initial 5 epochs involved training the entire network, including all layers of both the pre-trained model and the added fine-tuning layers. In the subsequent 15 epochs, only the last layers (the final 15 layers of the pre-trained model and the fine-tuning layers) were trained. This approach greatly accelerated the training process, and within a very short time, using a _GTX 1060 6GB_ graphics card, we achieved the desired outcome. This result was expected, as in recent years, pre-trained models have consistently outperformed self-constructed models.
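Continuing from the model sketch above, the two-phase schedule could look as follows; the random arrays are placeholders for one fold of resized spectrograms and their labels:

```python
import numpy as np

# Placeholder data standing in for a training/validation fold.
x_train = np.random.rand(32, 224, 224, 3); y_train = np.random.randint(0, 5, 32)
x_val   = np.random.rand(8, 224, 224, 3);  y_val   = np.random.randint(0, 5, 8)

# Phase 1: all layers trainable for 5 epochs.
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=5, batch_size=16)

# Phase 2: freeze all but the last 15 backbone layers, re-compile so the
# change takes effect, and train for 15 more epochs.
for layer in backbone.layers[:-15]:
    layer.trainable = False
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=15, batch_size=16)
```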
Also, we tested three different methods of reading the data used to train the model:
\(\ast\)**HDD (Hard Disk Drive):** Spectrograms are stored on the HDD, which offers sufficient storage capacity but slower read/write speeds. The Keras module processes images on the GPU, which accelerates training. However, each batch loads data into RAM, introducing latency compared to faster storage.
\(\ast\)**SSD (Solid-State Drive):** We use SSDs for improved speed, as they offer significantly faster read/write speeds than HDDs. SSDs are a good choice for GPU-based training, though they are more expensive. They are cost-effective for datasets like sleep-edf8.
\(\ast\)**RAM (Random Access Memory):** Storing the entire dataset in RAM provides the fastest training speeds. RAM's speed surpasses HDDs and SSDs. Importantly, it eliminates the need to read from slower storage during each epoch, speeding up training. RAM doesn't require a capacity equivalent to the dataset size. Moreover, initial data loading into RAM can utilize multi-threading for even faster data access.
### Result
The model achieved an average accuracy of 86.97%. The comparison of our model's performance with other experiments is presented in Table 1.
The proposed model performs even better than EEGSNet [6], especially in the N1 class. It is worth mentioning that the number of trainable parameters is 3.2 million when all layers are trainable. However, when only the final fifteen layers of the pre-trained model, along with the fine-tuning layers, are trainable, the count reduces to 525,829, which is fewer than EEGSNet's parameter count. Also, EEGSNet's experiments were performed on a server with four NVIDIA Tesla V100-DGXS GPUs, reporting the best performance out of 3 runs, whereas we ran our experiment on a single GTX 1060 6GB graphics card, only once.
Additionally, we can observe the varying results obtained through different reading methods:
\(\ast\)**HDD:** On HDD, each epoch took 9 minutes, and with 20 epochs, one fold was completed in 180 minutes. The entire project yielded results after 60 hours.
\(\ast\)**SSD:** With SSD, each epoch took 4 minutes, and with 20 epochs, one fold was completed in 80 minutes. The entire project yielded results after 26 hours.
\(\ast\)**RAM:** In contrast, RAM only required 5 minutes to store all data initially. Subsequently, each epoch was completed in just 1 minute. Thus, each fold was done in 20 minutes, and the entire training process concluded in 400 minutes.
In summary, RAM significantly enhances the training speed, evident from the swift completion of each epoch and fold. Furthermore, the speed benefits become even more apparent with faster RAM, allowing for the possibility of training the model with additional epochs. With the availability of greater RAM capacity, the project becomes scalable, making it feasible to tackle larger datasets efficiently.

| Methods | Param | Epochs | ACC | MF1 | Kappa | W | N1 | N2 | N3 | REM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepSleepNet [9] | 24.7 M | 41950 | 82 | 76.9 | 0.76 | 84.7 | 66.6 | 85.9 | 84.8 | 82.4 |
| TinySleepNet [40] | 1.3 M | 42308 | 85.4 | 80.5 | 0.80 | 90.1 | 51.4 | 88.5 | 88.3 | 84.3 |
| EEGSNet [6] | 0.6 M | 42308 | 86.82 | 81.57 | 0.82 | 90.76 | 52.41 | 88.78 | 87.0 | 87.89 |
| EEGMobile | 3.2 M | 20 | 86.97 | 81.42 | 0.81 | 95.4 | 56.4 | 88.86 | 82.82 | 83.64 |

Table 1: Final results of _EEGMobile_ on the _Sleep-EDF20_ dataset. ACC, MF1, and Kappa are overall metrics (%); W, N1, N2, N3, and REM are per-class F1 scores (%).
## 5 Conclusions
The results of the study and experimentation on the proposed model have shown promising performance, with a higher accuracy rate than the works of others on the Sleep-EDF20 dataset, and significantly better speed. Nevertheless, for further validation and generalization of our findings, we recommend extending the evaluation to datasets such as Sleep-EDF8, Sleep-EDF78, and SHHS. This will allow comparing the experimental results on different datasets with other works.
Furthermore, to uncover the full potential of the model, conducting experiments with an increased number of training iterations (similar to other works with around 1000 iterations) is suggested. Undoubtedly, increasing the number of iterations can make the model more accurate and help it converge to a better optimum.
In conclusion, in this report, we conducted a comprehensive study on sleep stage classification using the Fpz-Cz channel in EEG signals. Our goal was to design a robust and efficient model capable of accurately classifying different sleep stages using the American Academy of Sleep Medicine classification system. We presented a model in which a pre-trained network is fine-tuned and augmented with carefully selected layers. The proposed model has shown promising results in sleep stage classification tasks.
|
2306.00068 | A Candidate Dual QSO at Cosmic Noon | We report the discovery of a candidate dual QSO at z=1.889, a redshift that
is in the era known as "cosmic noon" where most of the Universe's black hole
and stellar mass growth occurred. The source was identified in Hubble Space
Telescope WFC3/IR images of a dust-reddened QSO that showed two
closely-separated point sources at a projected distance of 0.26", or 2.2 kpc.
This red QSO was targeted for imaging to explore whether red QSOs are hosted by
merging galaxies. We subsequently obtained a spatially-resolved STIS spectrum
of the system, covering the visible spectral range, and verifying the presence
of two distinct QSO components. We also obtained high-resolution radio
continuum observations with the VLBA at 1.4 GHz (21-cm L band) and found two
sources coincident with the optical positions. The sources have similar black
hole masses, bolometric luminosities, and radio loudness parameters. However,
their colors and reddenings differ significantly. The redder QSO has a higher
Eddington ratio, consistent with previous findings. We consider the possibility
of gravitational lensing and find that it would require extreme and
unlikely conditions. If confirmed as a bona-fide dual QSO, this system would
link dust-reddening to galaxy and supermassive black hole mergers, opening up a
new population in which to search for samples of dual AGN. | Eilat Glikman, Rachel Langgin, Makoto A. Johnstone, Ilsang Yoon, Julia M. Comerford, Brooke D. Simmons, Hannah Stacey, Mark Lacy, John M. O'Meara | 2023-05-31T18:00:06Z | http://arxiv.org/abs/2306.00068v1 | # A Candidate Dual QSO at Cosmic Noon
###### Abstract
We report the discovery of a candidate dual QSO at z=1.889, a redshift that is in the era known as "cosmic noon" where most of the Universe's black hole and stellar mass growth occurred. The source was identified in _Hubble_ Space Telescope WFC3/IR images of a dust-reddened QSO that showed two closely-separated point sources at a projected distance of 0\(\farcs\)26, or 2.2 kpc. This red QSO was targeted for imaging to explore whether red QSOs are hosted by merging galaxies. We subsequently obtained a spatially-resolved STIS spectrum of the system, covering the visible spectral range, and verifying the presence of two distinct QSO components. We also obtained high-resolution radio continuum observations with the VLBA at 1.4 GHz (21-cm L band) and found two sources coincident with the optical positions. The sources have similar black hole masses, bolometric luminosities, and radio loudness parameters. However, their colors and reddenings differ significantly. The redder QSO has a higher Eddington ratio, consistent with previous findings. We consider the possibility of gravitational lensing and find that it would require extreme and unlikely conditions. If confirmed as a bona-fide dual QSO, this system would link dust-reddening to galaxy and supermassive black hole mergers, opening up a new population in which to search for samples of dual AGN.
Quasars (1319), Double quasars (406)
## 1 Introduction
The next generation gravitational wave experiment, LISA, will detect the signal from the coalescence of supermassive black holes (SMBHs) in the \(10^{5}-10^{7}M_{\odot}\) range. Since every large galaxy hosts a nuclear SMBH, understanding the black hole merger process into the supermassive regime is essential for a full picture of galaxy evolution. Galaxy mergers have also been invoked to explain the many scaling relations seen between galaxies and their nuclear supermassive black holes (SMBHs) suggesting a co-evolution between the two systems (Magorrian et al., 1998; Gebhardt et al., 2000; Marconi & Hunt, 2003). In addition, gas-rich (i.e., "wet") mergers are understood to trigger the most luminous QSOs through the funneling of gas and dust into to the nucleus fueling accretion onto the SMBHs, which are also being brought together by the merger (Sanders et al., 1988). It is expected, therefore, that at some point during this process both SMBHs will be simultaneously active and therefore discoverable as a pair of active galactic nuclei (AGNs).
While theoretical investigations into the physics of SMBH binaries (e.g., mass ratio, coalescence time scale, AGN activity) have been making steady progress, observational constraints are still lacking due to the small numbers of confirmed dual AGNs. These simulations do find that late-stage major mergers are the most likely to produce dual AGNs (i.e., separations \(\leq 10\) kpc; Van Wassenhove et al., 2012; Blecha et al., 2013; Steinborn et al., 2016) suggesting that those are the best systems in which to search.
Dust-reddened (or red) QSOs represent a short-lived phase of QSO evolution driven by the "wet" merger scenario described above. During such a merger, much of
the black hole growth occurs in a heavily enshrouded environment followed by a relatively brief transitional phase in which the obscuring dust is cleared by outflows and radiation-driven winds and is seen as a moderately reddened, Type 1, luminous QSO. After feedback processes clear the dust, the canonical blue QSO shines through and dominates (Sanders et al., 1988; Hopkins et al., 2005, 2008). Objects in the transitional phase, i.e, moderately obscured, red QSOs, are farther along the merger timeline, and are thus ideal systems for finding dual AGNs.
Samples of red quasars1 have been identified through radio plus near-infrared selection (e.g., the FIRST-2MASS, or F2M, red quasar survey; Glikman et al., 2004, 2007, 2012) and, more recently, mid- plus near-infrared selection (e.g., the WISE-2MASS, or W2M, red QSO survey; Glikman et al., 2018, 2022). These red QSO samples span a broad range of redshifts (\(0.1<z<3\)) and reddenings (\(0.2\lesssim E(B-V)\lesssim 1.5\)); have very high accretion rates (\(L/L_{\rm Edd}>0.1\); Kim et al., 2015), sufficient to blow out the obscuring material (Glikman, 2017). Their spectra often show broad absorption lines (BALs) that are associated with outflows and feedback (Urrutia et al., 2009). Crucially, _Hubble_ Space Telescope (HST) imaging at \(z\simeq 0.7\) and \(z\simeq 2\) reveals that \(\gtrsim 80\%\) of F2M red quasars are hosted by merging galaxies (Urrutia et al., 2008; Glikman et al., 2015) making them more likely to harbor dual AGNs (or, more luminous, dual QSOs).
Footnote 1: In this letter, we adopt the canonical nomenclature that distinguishes quasars, radio-detected luminous AGN whose radio emission is essential to their selection, from QSOs, the overall class of luminous AGN.
In this paper, we present the discovery of a candidate dual QSO in HST imaging of a sample of W2M red QSOs from Glikman et al. (2022). The QSO's redshift of \(z\sim 1.9\) probes the epoch of peak AGN and star formation in the universe. Throughout this work we quote magnitudes on the AB system, unless explicitly stated otherwise. When analyzing spectra for extinction properties, we first correct them for Galactic extinction, using the Fitzpatrick (1999) extinction curve. When computing luminosities and any other cosmology-dependent quantities, we use the \(\Lambda\)CDM concordance cosmology: \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.30\), and \(\Omega_{\Lambda}=0.70\).
## 2 Source Characteristics
A Cycle 24 HST program imaged the host galaxies of 11 W2M red QSOs (5 QSOs at \(z\sim 0.7\) and 6 QSOs at \(z\sim 2\)) to compare with the F2M imaging studies of Urrutia et al. (2008) and Glikman et al. (2015) that focused on those same redshifts (13 and 11 objects, respectively), using ACS and WFC3/IR, respectively (PID 14706, PI Glikman). The images were observed with a four-point box dither pattern and were reduced using the Astrodrizzle package with a final pixel scale of 0\(\farcs\)06. One source, J122016.9+112627.092, appeared as two closely separated point sources (left and middle panels of Figure 1) in both the F105W and F160W filters. The WFC3/IR observations were designed to be identical to those in Glikman et al. (2015), with W2M J1220 having exposure times of 797 s and 1597 s and reaching \(3\sigma\) surface brightness limits of 23.67 mag arcsec\({}^{-2}\) and 23.87 mag arcsec\({}^{-2}\) in F105W and F160W, respectively. From the ground, this source appears as a single object with \(r=18.13\) in the Sloan Digital Sky Survey (SDSS) and \(H=16.01\) in 2MASS.
Footnote 2: Hereafter, W2M J1220
This source possesses an optical spectrum in SDSS and was assigned a redshift of \(z=1.871\) (see §3 for details on the corrected redshift), shown in the right panel of Figure 1. The spectrum is well-fit by a QSO composite spectrum, constructed by combining the UV template of Telfer et al. (2002) with the optical-to-near-infrared template from Glikman et al. (2006) and reddened with the SMC dust law of Gordon & Clayton (1998), by \(E(B-V)=0.246\). We also obtained a near-infrared spectrum with the TripleSpec near-infrared spectrograph (Wilson et al., 2004) on the 200-inch Hale telescope at the Palomar Observatory, also shown in Figure 1. The Balmer lines are shifted into the atmospheric absorption bands and cannot be studied from the ground. Due to the seeing-limited resolution of \(\sim 1^{\prime\prime}\), this optical-through-near-infrared spectrum represents the combined light of both components and is therefore not well-fit by a single reddened QSO across the full wavelength range.
W2M J1220 is also detected in the FIRST survey with an integrated flux density of \(F_{\rm int,20cm}=2.33\) mJy (\(F_{\rm pk,20cm}=1.50\) mJy/beam), which corresponds to a total radio power of \(P_{\rm 1.4GHz}=5.7\times 10^{25}\) W Hz\({}^{-1}\). Table 1 lists the optical through near-infrared photometry for this source.
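As a consistency check, the quoted radio power follows from the FIRST flux density and the cosmology adopted above; a minimal astropy sketch (no K-correction applied, which accounts for differences at the ten-percent level):

```python
# Minimal sketch: rest-frame 1.4 GHz power from the FIRST flux density.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.30)
z = 1.889
S_int = 2.33 * u.mJy                 # FIRST integrated flux density

d_L = cosmo.luminosity_distance(z)
P = (4 * np.pi * d_L**2 * S_int).to(u.W / u.Hz)
print(P)   # ~5.9e25 W/Hz, consistent with the quoted 5.7e25 W/Hz
```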
### Morphological modeling
To determine the separation of the two sources and measure their respective magnitudes we modeled the WFC3/IR images in both filters using Galfit (Peng et al., 2002). We used a point spread function (PSF) constructed by combining a few dozen bright stars in each HST filter from observations that used the same dither pattern following the same procedure as in Glikman
et al. (2015), using stars whose images were obtained within 12 months of W2M J1220. The stars used to construct the PSF were chosen to lie in the central region of the WFC3/IR detector to minimize distortion effects. All archival observations were re-reduced using the same Astrodrizzle parameters as for W2M J1220. When fitting, all parameters were allowed to be free in both filter images, which produced the best fits (i.e., smallest \(\chi^{2}_{\nu}\)) and the cleanest residuals. Attempts at fixing the positions of the F160W and F105W components to each other resulted in poorer fits and yielded residuals with strong negative/positive flux asymmetries.
We first fit a model consisting of two PSFs and a background sky component. While both sources are consistent with a point spread function, the residual image showed excess flux in need of additional model components. We added a Sersic component to the model resulting in a better fit, verified by an F-test whose probability was consistent with 0, strongly suggesting that we can reject the null hypothesis. However, the best-fit Sersic component, situated next to the southern PSF, had an effective radius, \(R_{e}\), of 0.06 pixels, which is not physically meaningful. Although the addition of this Sersic component improved the fit statistic and accounted for flux not captured by the PSFs, it is unclear how much of this added component was accounting for PSF mismatches3. Because the added Sersic did not model the extended emission seen in the residual image to the east of the two PSFs, we added a second Sersic component, which does account for this excess flux and whose inclusion is supported by an F-test with probability consistent with 0.
Footnote 3: Extensive investigation into the PSF subtraction did not reveal any systematic effects when fit to archival point sources located at the same pixel position as W2M 1220. However, we find that the PSF is not able to capture all the flux from very bright point sources resulting in significant, yet symmetric residuals.
The best-fit Galfit model is therefore composed of two PSFs and two Sersic components. The locations of the PSFs, in both filters, indicate a projected separation of \(0.2680\pm 0.0003^{\prime\prime}\), which corresponds to \(\sim 2.2\) kpc at the QSO's redshift. Figure 2 shows the residual images from this fit, where different model components have been subtracted from the data. We mark in the rightmost panel the positions of the four model components. Table 2 lists the best-fit parameters for this model, noting that the first Sersic component may not represent a physically meaningful model.

| Band | AB mag |
| --- | --- |
| g | 19.07\(\pm\)0.01 |
| r | 18.130\(\pm\)0.008 |
| i | 17.452\(\pm\)0.007 |
| z | 17.12\(\pm\)0.01 |
| J | 16.44\(\pm\)0.07 |
| H | 16.01\(\pm\)0.06 |
| K | 15.67\(\pm\)0.06 |
| _Northern component_ | |
| F105W | 18.020\(\pm\)0.003 |
| F160W | 16.9739\(\pm\)0.0002 |
| _Southern component_ | |
| F105W | 17.411\(\pm\)0.002 |
| F160W | 16.8656\(\pm\)0.0004 |

Table 1: Photometric properties of W2M J1220

Figure 1: _Left_ – Color combined WFC3/IR image showing the presence of two closely-separated central peaks. The red layer is the F160W image, the green layer is an average of the F160W and F105W images, and the blue layer is the F105W image. _Middle_ – Surface plot of the image counts in the F160W image, where two distinct sources are visible. _Right_ – Optical through near-infrared spectrum of W2M1220+1126 (black line). A reddened QSO template, made out of the UV composite QSO template of Telfer et al. (2002) combined with the optical-to-near-infrared composite spectrum from Glikman et al. (2006), with \(E(B-V)=0.246\), is overplotted with a red line, and an unreddened QSO template is shown in blue. We see that the Balmer lines are shifted into the atmospheric absorption bands. The STIS G750L transmission curve used in this work is shown with a gray dot-dash curve.
### HST follow-up with STIS
We obtained STIS spatially-resolved spectroscopy in the G750L mode covering a wavelength range from 5240-10270 A (dot-dash line in Figure 1; Cycle 29, PID 16794). The \(52\arcsec\times 0\farcs 2\) slit was oriented at a position angle of \(177.286^{\circ}\) to capture both components in a single observation. The STIS CCD has a plate scale of \(0\farcs 05078\)/pixel, such that the two components are separated by \(\sim 5\) pixels. The standard STIS reduction pipeline was used to remove detector signatures, and the STIStools defringe.defringe command was used to remove the fringing pattern.
We use the x1d routine to extract each spectrum, adjusting the parameters to minimize overlap between the two. We constrain the search region for finding a peak in the extraction profile by setting MAXSRCH to 1.5 for each source and A2CENTER to 506 and 511 pixels, respectively. We set the extraction box size to 3 pixels and use a 10 pixel offset from the peak, which is far from both source profiles, for the background subtraction region. Two distinct spectra were extracted at 511.383 pixels and 506.324 pixels, respectively.
To evaluate the impact of blending on our spectral extraction, we sum the cosmic-ray cleaned, normalized, and defringed science spectrum along the wavelength axis and plot the spatial profile of the two spectra in Figure 3. We fit two Gaussian distributions to this summed profile, keeping the width of the Gaussians tied to each other, and fixing the centers to the positions found by x1d. The extracted region for each spectrum is shown in light and dark shaded pink. Using the best-fit \(\sigma\) of 1.04 pixels, we calculate that the southern spectrum overlaps the northern spectrum by \(\sim 0.6\%\), ensuring that the individual spectra are not contaminated by blending. We also determine that the 3-pixel aperture loses 14.8% of the total flux. We correct the fluxes of our spectra by this amount.
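The blending estimate amounts to integrating one Gaussian profile over the other's extraction box; a minimal sketch, assuming the best-fit centers and \(\sigma\) quoted above (the exact overlap convention used in the text may differ from this one):

```python
# Model each trace as a Gaussian with the tied best-fit sigma of
# 1.04 pixels and integrate the southern profile over the northern
# 3-pixel extraction box.
from scipy.stats import norm

sigma = 1.04                               # tied Gaussian width (pixels)
south_center, north_center = 506.324, 511.383
half_box = 1.5                             # half of the 3-pixel box

south = norm(loc=south_center, scale=sigma)
frac = south.cdf(north_center + half_box) - south.cdf(north_center - half_box)
print(f"southern flux in northern box: {100 * frac:.3f}%")
```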
Figure 4, left, shows the resultant extracted spectra for both QSO components. The Mg ii line that is seen in the SDSS spectrum (Fig. 1) is visible in both the southern and northern components at 2800A. However, the C iii] line at \(\sim 2000\)A is only seen in the southern component, where the signal-to-noise ratio is sufficiently high. The two spectra have different continuum shapes and the redder color of the northern component seen in the WFC3/IR image is apparent in the spectrum as well.
### VLBA Imaging
W2M J1220 is detected in the FIRST catalog (Becker et al., 2003) with a 20 cm integrated flux density of 2.33 mJy. The peak flux density is 1.50 mJy/beam with an rms of 0.146 mJy/beam. The source's deconvolved major and minor axes are \(5\farcs 57\) and \(2\farcs 26\), respectively, indicating that the image is slightly resolved4.
Footnote 4: The FIRST survey has an angular resolution of \(5\arcsec\).
Aiming to detect two distinct radio components at their optical positions, we obtained 257 minutes of on-source integration with the Very Long Baseline Array (VLBA) split into two equal-length dual polarization observations on 19 August 2021 and 03 December 2021 in the L-band (1.4 GHz or 20 cm). We used J1218+1105 as a phase calibrator, which we measure to have an integrated flux density of 0.177 Jy in L-band located only 0.58 deg away from our target.
The observations were flagged, calibrated, cleaned, and imaged with the Common Astronomy Software Applications (CASA; CASA Team et al., 2022) package Version 6.5, following the approach described in VLBA Science Memo #38 (Linford, 2022). Fort Davis (FD) was selected as the reference antenna. To ensure that the amplitude scaling accounted for the wide bandpass, the task ACCOR was run twice: first for the initial calibrations and again after the bandpass correction. A phase-referenced (Stokes I) image of the target was produced by applying the TCLEAN task with natural weighting. The final calibrated image spans 320 pixels of \(0\farcs 001\) along each axis with an rms noise level of 0.017 mJy and is shown in Figure 5.
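The imaging step can be expressed as a tclean call of roughly the following form; the measurement-set and image names are placeholders.

```python
# Sketch of the CASA imaging call (run inside a CASA session or with
# the casatasks package). The measurement set name is a placeholder.
from casatasks import tclean

tclean(vis="W2MJ1220_Lband.ms",
       imagename="W2MJ1220_Lband_natural",
       imsize=320,               # 320 x 320 pixels
       cell="0.001arcsec",       # 1 mas pixels
       stokes="I",
       specmode="mfs",
       deconvolver="hogbom",
       weighting="natural",
       niter=1000)
```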
The calibrated VLBA image shows two distinct point sources oriented at a position angle of \(172.205^{\circ}\) with a separation of \(0\farcs 26\) (2.2 kpc). We performed 2D Gaussian fitting on each source with CASAviewer and found that the northern source has an integrated flux density of 0.502 \(\pm\) 0.066 mJy and a peak flux density of 0.165 \(\pm\) 0.016 mJy/beam. The deconvolved major and minor axes are \(0\farcs 0234\) and \(0\farcs 0087\). The southern source has an integrated flux density of 0.330 \(\pm\) 0.044 mJy, a peak flux density of 0.146 \(\pm\) 0.014 mJy/beam, and deconvolved major and minor axes of \(0\farcs 0166\) and \(0\farcs 0072\). In both sources, the major axis is slightly larger than the CLEAN beam which has FWHM of \(0\farcs 01\) along both axes.
We overlay in contours the HST WFC3/IR F160W fluxes. The two VLBA point sources and their position angles are consistent with the HST position, though outside the \(0\farcs 03\) Gaia-based astrometric errors, confirming them as the two components of W2M J1220.
## 3 Results
With our near-infrared imaging, spatially-resolved optical spectroscopy, and 20 cm radio imaging in hand, we are able to analyze the properties of the individual QSO components. We noticed that the SDSS-assigned redshift of \(z=1.871\) was based on the C iv line, which did not align with the Mg ii line in the STIS spectra. Since C iv is known to be blueshifted relative to QSOs' systemic redshifts (Richards et al., 2011), we update the source redshift to \(z=1.889\) based on the Mg ii line center derived from these new observations. We fit a reddened QSO template to each spectrum and find that the southern source has \(E(B-V)=0.179\pm 0.001\) while the northern source has \(E(B-V)=0.458\pm 0.009\). These fits are shown with a pink line in Figure 4. We correct the observed F160W photometry, corresponding to rest-frame 5320Å, by these extinction values and, applying a bolometric correction of 9.2 (Richards et al., 2006), compute \(L_{\rm bol,south}=3.06\times 10^{44}\) erg s\({}^{-1}\) and \(L_{\rm bol,north}=4.84\times 10^{44}\) erg s\({}^{-1}\). This means that the northern source, which appears fainter, is intrinsically more luminous after correcting for its substantially higher amount of extinction. We note that the intrinsically more luminous component coincides with the brighter radio source.
We fit a Gaussian profile to the Mg ii line in both spectra to measure the \(v_{\rm FWHM}\) values as \(2830\pm 650\) km s\({}^{-1}\) and \(3920\pm 260\) km s\({}^{-1}\) in the northern and southern sources, respectively. The errors are computed by perturbing the best fit model using the spectrum's error array and re-fitting 10,000 times. We compute the standard deviation of the Gaussian \(\sigma\) parameter found in each fit iteration, shown in the right panels of Figure 4.
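This perturb-and-refit procedure can be sketched as follows; the wavelength, flux, and error arrays below are synthetic placeholders standing in for one of the STIS spectra.

```python
# Sketch of the Monte Carlo uncertainty estimate for the Mg II fit:
# perturb the spectrum by its error array and re-fit 10,000 times.
import numpy as np
from astropy.modeling import models, fitting

rng = np.random.default_rng(0)
wave = np.linspace(2700.0, 2900.0, 200)                    # rest wavelength
flux = 1.0 + np.exp(-0.5 * ((wave - 2800.0) / 15.0) ** 2)  # fake line + continuum
err = np.full_like(wave, 0.05)

fitter = fitting.LevMarLSQFitter()
init = (models.Gaussian1D(amplitude=1.0, mean=2800.0, stddev=15.0)
        + models.Const1D(amplitude=1.0))

stddevs = []
for _ in range(10_000):                                    # reduce for a quick test
    best = fitter(init, wave, flux + rng.normal(0.0, err))
    stddevs.append(best.stddev_0.value)

# FWHM = 2 sqrt(2 ln 2) sigma, so the spread in sigma maps directly
# onto the quoted FWHM uncertainty.
print(2.0 * np.sqrt(2.0 * np.log(2.0)) * np.std(stddevs))
```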
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
 & \multicolumn{5}{c}{F160W (\(\chi^{2}_{\nu}=58.2\))} & \multicolumn{5}{c}{F105W (\(\chi^{2}_{\nu}=28.8\))} \\
Component & R.A.\({}^{a}\) & Decl.\({}^{a}\) & Mag & \(n\) & \(R_{e}\) & R.A.\({}^{a}\) & Decl.\({}^{a}\) & Mag & \(n\) & \(R_{e}\) \\
 & (J2000) & (J2000) & (mag) & & (kpc) & (J2000) & (J2000) & (mag) & & (kpc) \\ \hline
North PSF & +0.8800 & +0.2805 & 16.97 & \(\cdots\) & \(\cdots\) & +0.8783 & +0.2875 & 18.02 & \(\cdots\) & \(\cdots\) \\
South PSF & +0.8785 & +0.0097 & 16.87 & \(\cdots\) & \(\cdots\) & +0.8781 & +0.0232 & 17.41 & \(\cdots\) & \(\cdots\) \\
Sérsic (central) & +0.8922 & +0.0668 & 20.31 & \(1.32\pm 6.21\) & \(0.03\pm 0.23\) & +0.8926 & +0.0471 & 21.00 & 1.76\({}^{b}\) & 0.02\({}^{b}\) \\
Sérsic (eastern) & +0.9379 & +0.1756 & 20.89 & 1.94\({}^{b}\) & 1.5\({}^{b}\) & +0.9383 & +0.1469 & 20.31 & \(1.50\pm 0.37\) & \(1.01\pm 0.04\) \\ \hline
\end{tabular}
\end{table}
Table 2: Galfit parameters
Figure 2: Color combined residual images from the best-fit Galfit model, as described in Table 2. _Left_ – HST WFC3/IR image with just the two PSF components subtracted. _Middle_ – Both PSF components and central Sérsic component subtracted; bright extended emission is seen to the east. _Right_ – Full residual with all model components subtracted. In this frame, the best-fit model parameters are marked. White circles are at the PSF positions. The cyan cross is the central Sérsic component located slightly to the east of the southern PSF. The cyan circle is the position of the Sérsic component that best fits the extended emission farther to the east.
For the two QSOs, we apply the single-epoch virial black hole mass estimator (\(M_{\rm BH}\)) following the formalism of Shen and Liu (2012),
\[\log\left(\frac{M_{\rm BH,vir}}{M_{\odot}}\right)=a+b\log\left(\frac{L_{3000}}{10 ^{44}{\rm erg/s}}\right)+c\log\left(\frac{v_{\rm FWHM}}{\rm km/s}\right), \tag{1}\]
adopting the values \(a=0.740\), \(b=0.620\), \(c=2.00\) for single-epoch measurements of FWHM\({}_{\rm MgII}\) and \(L_{3000}\), based on the calibration of Shen et al. (2011). For this calculation, we estimate \(L_{3000}\) two different ways. We measure it directly from the STIS spectra by applying an aperture correction to the observed flux, and de-reddening each spectrum. We then apply an artificial 30 A-wide box-car filter centered on 3000 A to measure the source flux, from which we determine luminosity. The second method starts with the F160W source magnitudes (Table 1), which are far less sensitive to uncertainties in \(E(B-V)\). We de-redden these magnitudes and use spectrophotometry to scale a QSO composite template to match the F160W flux. From the scaled template, we measure the 3000 A flux using the box-car filter as in the previous method. This results in a range of black hole masses, listed in Table 3. The BH masses differ for each method by \(\lesssim 0.5\) dex but are extremely similar (\(\lesssim 0.2\) dex) between the two components. These masses are at the high end of the range accessible to LISA.
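A direct implementation of Eq. (1), together with the Eddington-ratio estimate used in the next paragraph, might look like the following; the \(L_{3000}\) inputs are placeholders standing in for the two measurement methods described above.

```python
# Virial black-hole mass from Eq. (1) with (a, b, c) = (0.740, 0.620, 2.00),
# plus the Eddington ratio. The L3000 values are illustrative placeholders.
import numpy as np

def log_mbh_virial(L3000_cgs, fwhm_kms, a=0.740, b=0.620, c=2.00):
    return a + b * np.log10(L3000_cgs / 1e44) + c * np.log10(fwhm_kms)

def eddington_ratio(L_bol_cgs, log_mbh):
    L_edd = 1.26e38 * 10 ** log_mbh    # erg/s for M_BH in solar masses
    return L_bol_cgs / L_edd

for name, L3000, fwhm, L_bol in [("north", 2.0e44, 3140.0, 10 ** 44.76),
                                 ("south", 1.5e44, 3800.0, 10 ** 44.46)]:
    lm = log_mbh_virial(L3000, fwhm)
    print(f"{name}: log M_BH = {lm:.2f}, L/L_Edd = {eddington_ratio(L_bol, lm):.2f}")
```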
Combining \(M_{BH}\) and \(L_{\rm bol}\) allows us to estimate the Eddington ratio. We find that the more obscured northern source has \(L/L_{\rm Edd}\simeq 0.1-0.3\), while the less obscured, southern source has \(L/L_{\rm Edd}\simeq 0.04-0.1\). This is consistent with findings that red QSOs have higher accretion rates than their unobscured counterparts (Urrutia et al., 2012; Kim et al., 2015).
From the definition, \(R\equiv f(1.4\,{\rm GHz})/f(B)\), we calculate the radio loudness parameter of the two sources. The optical flux is determined by the method described above, where the QSO template, scaled to the de-reddened F160W flux, is passed through a Johnson \(B\) filter curve. The radio flux is not K-corrected, given that the radio spectral index for each source is not known. We find that both sources have nearly identical \(R\) values, at the boundary of the radio-quiet regime5, with \(R_{\rm north}\approx 0.46\) and \(R_{\rm south}\approx 0.48\). Table 3 lists all the derived properties for the two QSOs in this dual system.
Footnote 5: Objects with \(R>2\) are categorized as “radio-loud”, while objects are generally considered “radio-quiet” when \(R<0.5\). Radio-intermediate sources are those that fit neither category (\(0.5<R<2\); Stocke et al., 1992).
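The radio-loudness computation itself is a simple flux-density ratio; in the sketch below, the B-band flux density is a placeholder for the de-reddened, template-based value.

```python
# Radio loudness R = f(1.4 GHz) / f(B). The 20 cm integrated flux
# density is from the VLBA fit; the B-band flux density is a
# placeholder chosen to illustrate R_north ~ 0.46.
import astropy.units as u

def radio_loudness(f_radio, f_B):
    return (f_radio / f_B).to(u.dimensionless_unscaled)

f_radio_north = 0.502 * u.mJy
f_B_north = 1.09 * u.mJy          # placeholder de-reddened B-band flux
print(f"R_north = {radio_loudness(f_radio_north, f_B_north):.2f}")
```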
## 4 Discussion
The confirmation of two distinct QSO spectra, at the same redshift, separated by \(0\farcs 26\), and coincident with two compact radio sources provides strong evidence that W2M J1220 is a dual QSO. Most of the known and confirmed dual AGNs are at low redshifts (\(z<0.7\); e.g. Koss et al., 2012; Comerford et al., 2012; Muller-Sanchez et al., 2015; Fu et al., 2015; Rubinur et al., 2019), which does not yet probe the epoch of peak QSO activity in the universe (\(z\simeq 2\); Madau and Dickinson, 2014), when merger rates were significantly higher (Conselice et al., 2003; Rodriguez-Gomez et al., 2015). Therefore, the identification of a dual QSO system at this epoch is noteworthy, especially given that red QSOs are predominantly found in merging hosts.
W2M J1220 is comparable to LBQS 0103\(-\)2753 (Shields et al., 2012), which is a confirmed dual QSO, separated by \(0\farcs 3\), at \(z=0.858\). LBQS 0103\(-\)2753 was identified in HST imaging and verified with a STIS spectrum, similar to W2M J1220. Deep HST imaging of LBQS 0103\(-\)2753 reveals tidal features and morphological evidence of a recent merger. The WFC3/IR imaging for W2M J1220 is not deep enough to show these features. The two-component spectra of LBQS 0103\(-\)2753 are quite distinct, with one of the components showing BAL features indicative of outflows. There is also a velocity offset of \(\sim 1500\) km s\({}^{-1}\) between the two components. Although the black hole masses of LBQS 0103\(-\)2753 are \(\sim 1-1.5\) orders of magnitude higher than W2M J1220, they are similar to each other (both have \(M_{\rm BH}\sim 10^{8.5-9}\ M_{\odot}\)).
In the cosmic noon era (\(z\sim 2-3\)), Shen et al. (2021) report two dual QSO candidates using the novel
Figure 3: Spatial profile of the source spectrum collapsed along the x-direction. Black points are the summed counts at each spatial pixel position, and the red line is a double Gaussian fit to the data. The shaded areas represent our 3-pixel extraction regions, chosen to minimize blending. Two distinct peaks are shown with the southern spectrum overlapping the northern spectrum by \(\sim 0.6\%\).
technique of 'varstrometry', which identifies sources with high astrometric variability in Gaia suggestive of two distinct, closely-spaced sources with randomly varying fluxes (Hwang et al., 2020). One source, J0841+4825, is at \(z=2.95\) and is separated by \(0\farcs 46\), though its ground-based spatially-resolved spectroscopy shows highly similar spectra which could be explained by gravitational lensing. Both components of J0749+2255, at \(z=2.17\) and also separated by \(0\farcs 46\), are detected in VLBA observations at 15 GHz; the system also has a spatially resolved STIS spectrum and HST imaging showing merger signatures in the host, putting it on solid footing as a dual QSO (Chen et al., 2023). There are 45 additional varstrometry-selected dual QSO candidates extending out to \(z\simeq 3\) awaiting confirmation (Chen et al., 2022).
The gravitational lens PSJ1721+8842, initially thought to be a quadruple lens at \(z=2.37\), was analyzed by Mangat et al. (2021) to re-interpret the system as two QSOs that are lensed to form four point source images based on HST optical and IR observations as well as VLA observations.
Yue et al. (2021) report a candidate QSO pair at \(z=5.66\) separated by \(1\farcs 24\), or 7.3 kpc. Spatially resolved spectroscopy reveals two spectra with similar line characteristics but different reddenings, as in W2M J1220.
### Lensing considerations
Some of the properties between the two components of W2M J1220, such as the derived black hole masses, radio loudness parameters, as well as near-identical emission line centers and profile shapes, may be explained by gravitational lensing. Here we consider that possibility.
Figure 6 shows the ratio of the northern to southern spectrum, with the ratio of the best-fit reddened templates (pink curves in Figure 4) over-plotted. While
Figure 4: _Left – Individual spectra of the two QSO components plotted at rest wavelengths. The pink curves represent the best-fit reddened QSO template. The northern source (gray line) is reddened by \(E(B-V)=0.432\) while the southern source (black line) is reddened by \(E(B-V)=0.184\). Mg ii and C iii lines are labeled. Right – Gaussian fits to the Mg ii emission line in the southern (top) and northern (bottom) spectra showing 10,000 iterations determined by perturbing the best-fit line using the error arrays. The range of fits reflects the uncertainty in the derived Gaussian parameters._
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
Source & R.A.\({}^{a}\) & Decl.\({}^{a}\) & \(E(B-V)\) & \(v_{\rm FWHM}\) & \(\log M_{\rm BH}\)\({}^{b}\) & \(\log L_{\rm bol}\) & \(L/L_{\rm Edd}\) & \(S_{\rm pk,20cm}\) & \(R\) \\
 & (J2000) & (J2000) & (mag) & (km s\({}^{-1}\)) & (\(M_{\odot}\)) & (erg s\({}^{-1}\)) & & (mJy) & \\ \hline
North & 12:20:16.87176 & +11:26:28.344 & \(0.458\pm 0.009\) & \(3140\pm 800\) & \((7.2-7.7)\pm 0.2\) & 44.76 & \(0.08-0.22\) & \(0.502\pm 0.066\) & 0.460 \\
South & 12:20:16.87420 & +11:26:28.082 & \(0.179\pm 0.001\) & \(3800\pm 230\) & \((7.40-7.70)\pm 0.04\) & 44.46 & \(0.05-0.1\) & \(0.330\pm 0.044\) & 0.478 \\ \hline
\end{tabular}
\end{table}
Table 3: Individual QSO characteristics
the C iv and Mg ii emission lines, marked by vertical dashed lines, disappear, the shape of the ratio is broadly consistent with the difference in reddening. The undulating features that deviate from a QSO template seen in both the ratio spectrum and in the southern component are not consistent with known QSO spectral features, such as iron emission in the ultraviolet (UV; Vestergaard and Wilkes, 2001).
With a separation of 0\(\farcs\)26, corresponding to an Einstein radius of 0\(\farcs\)13, W2M J1220 would be the most closely-separated lensed QSO known6. Such small-separation lenses (on the order of \(\sim 100\) milli-arcseconds, so-called milli-lenses; Frey et al., 2010; Spingola et al., 2019; Casadio et al., 2021) are rare and have been difficult to find, but probe supermassive (\(10^{6}-10^{9}\ M_{\odot}\)) compact objects as putative lenses. Recent systematic searches using VLBA data have resulted in few viable candidates. This is because a lens with an unusually high surface density is needed to yield such small separations.
Footnote 6: J0439+1634 has a separation of 0\(\farcs\)2 (Fan et al., 2019). This source is of the rare class of “naked” cusp lenses involving three images, which is ruled out for our system by the VLBA image. B0218+357, with a separation of 0\(\farcs\)34 is currently the next most closely-separated lens (Patnaik et al., 1993).
We explore the range of possible lens masses that could result in a separation of 0\(\farcs\)26 as a function of redshift to determine the plausibility of lensing in W2M J1220. We employ the relation among distances, lens mass, and source separation, first for a point-mass lens,
\[M(\theta_{E})=\left(\frac{c^{2}}{4G}\right)\theta_{E}^{2}\left(\frac{D_{d}D_{s} }{D_{ds}}\right), \tag{2}\]
and for a singular isothermal sphere (SIS) mass model,
\[M(\theta_{E})=\frac{\sigma_{v}^{2}R}{G}\ \ \mathrm{with}\ \ \sigma_{\mathrm{v}}= \mathrm{c}\sqrt{\frac{\theta_{E}}{4\pi}\frac{D_{s}}{D_{ds}}}. \tag{3}\]
Here, \(M(\theta_{E})\) is the mass enclosed within the angular radius \(\theta_{E}\), \(D_{d}\) and \(D_{s}\) are the angular diameter distances to the lens and the source, respectively, and \(D_{ds}\) is the angular diameter distance between the lens and source. Given the redshift of W2M J1220, \(D_{s}\) is known, so we can compute \(M(\theta_{E})\) as a function of lens redshift. We find that even the smallest lens masses, for lenses residing at \(z\sim 0.4-0.6\), would require \(>10^{9}\)\(M_{\odot}\) to be confined within the innermost kpc\({}^{2}\).
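Equation (2) can be evaluated as a function of lens redshift with standard cosmology utilities; the sketch below assumes a flat Planck-like cosmology, which is an assumption since the text does not state the adopted parameters.

```python
# Point-mass lens mass enclosed within theta_E (Eq. 2) as a function of
# lens redshift, for theta_E = 0.13" and a source at z_s = 1.889.
import numpy as np
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import Planck18 as cosmo   # assumed cosmology

theta_E = (0.13 * u.arcsec).to(u.rad).value
z_s = 1.889
D_s = cosmo.angular_diameter_distance(z_s)

for z_d in np.arange(0.2, 1.8, 0.2):
    D_d = cosmo.angular_diameter_distance(z_d)
    D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
    M = (c**2 / (4 * G)) * theta_E**2 * D_d * D_s / D_ds
    print(f"z_d = {z_d:.1f}: M(theta_E) = {M.to(u.Msun):.3e}")
```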
Assuming lensing as a possibility, we model the system with GRAVLENS (Keeton, 2001) and determine the lens position that would yield the two VLBA positions finding that a lens would need to be situated 0\(\farcs\)021695 to the west and 0\(\farcs\)159543 to the south of the northern source. An attempt to fix a Sersic component at this position relative to fixed PSF components in our morphological fitting (SS2) resulted in failure of Galfit to converge. In fact, the central Sersic component found by Galfit, while possessing unphysical properties, is situated to the _east_ of both PSF components.
Identifying intrinsic differences, such as in the spectral slope, between the two QSOs would rule out lensing. The \(E(B-V)\) values that we find are determined with respect to a composite QSO spectrum that has a spectral slope of \(\alpha_{\nu}=-0.47\) at \(\lambda<5000\) Å. We vary the slope of the QSO template (following Section
Figure 5: Calibrated and cleaned image of VLBA L-band observations of W2M J1220 produced with CASA. Overplotted contours indicate the flux from the WFC3/IR F160W image at 5\(\sigma\) levels. Two point sources are detected at the HST position with a separation of 0\(\farcs\)26 and a position angle of 172.205\({}^{\circ}\). Beam size of 0\(\farcs\)005 is shown in red in the bottom left.
Figure 6: Flux ratio of the northern to southern spectra showing the disappearance of the C iv and Mg ii emission lines (vertical dashed lines), motivating an exploration of gravitational lensing as the cause for the pair of QSO images. The pink line represents a ratio of the best-fit reddening curves shown in Figure 4.
5.3 of Glikman et al., 2007) and recompute \(E(B-V)\) for a template with \(\alpha_{\nu}=-0.25\) (a bluer slope) and \(\alpha_{\nu}=-0.76\) (a redder slope), which represent the range intrinsic to unreddened QSOs (Richards et al., 2003). While the bluer template does yield slightly higher values by \(\Delta E(B-V)\sim 0.02\) (and vice-versa for the redder template, yielding lower values by \(\Delta E(B-V)\sim 0.03\)), the differences are not sufficient to account for the difference in \(E(B-V)\) between the two sources as seen in the continuum fits (Figure 4), the flux ratios (Figure 6), and the \(F105W-F160W\) colors. We therefore cannot attribute the different reddenings to intrinsic differences in spectral slopes between the two spectra, and the best explanation for the \(E(B-V)\) values is different amounts of dust reddening along the two lines of sight.
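The template-fitting step behind these \(E(B-V)\) values reduces to a one-parameter minimization; a generic sketch is given below, in which the extinction curve \(k(\lambda)\) and the spectra are placeholders, since the adopted reddening law is not specified here.

```python
# One-parameter E(B-V) fit: redden a template as
#   f_red = f_template * 10**(-0.4 * E(B-V) * k(lambda))
# and minimize chi-square against the observed spectrum. k_curve is a
# placeholder extinction curve sampled on the same wavelength grid.
import numpy as np

def fit_ebv(obs, obs_err, template, k_curve, grid=np.linspace(0.0, 1.0, 1001)):
    best_ebv, best_chi2 = None, np.inf
    for ebv in grid:
        model = template * 10 ** (-0.4 * ebv * k_curve)
        # free amplitude rescaling before comparison
        scale = np.sum(obs * model / obs_err**2) / np.sum(model**2 / obs_err**2)
        chi2 = np.sum(((obs - scale * model) / obs_err) ** 2)
        if chi2 < best_chi2:
            best_ebv, best_chi2 = ebv, chi2
    return best_ebv, best_chi2
# Repeating the fit with k_curve built from templates of different
# intrinsic slope reproduces the Delta E(B-V) test described above.
```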
To achieve the observed amount of reddening would require a lensing galaxy with a significant amount of gas and dust. While we cannot rule out lensing with this spectral slope investigation, it does rule out the exotic possibility that the lens might be a freely floating SMBH.
The similarities between the black hole properties of the two components of W2M J1220 are also seen in previously confirmed dual QSOs, such as LBQS 0103\(-\)2753 (Shields et al., 2012). Simulations of SMBH binaries predict that major mergers more often produce dual AGNs with similar BH masses (Blecha et al., 2013; Steinborn et al., 2016). And, since red QSOs are known to be associated with major mergers, this discovery may reflect a selection effect towards similar-mass BHs.
The similarity in radio loudness may reflect the enhanced low-level radio emission seen in W2M red QSOs, which has been interpreted as coming from either a dusty wind or nascent jets (Glikman et al., 2022). Likewise, the differences between the two QSOs, such as the higher accretion rate seen in the redder source, are consistent with what has been seen in red QSOs elsewhere (Urrutia et al., 2012; Kim et al., 2015).
Finally, as some gravitational lenses show intervening absorption features in the individual component spectra (e.g., Rubin et al., 2018), which may reveal the putative lens redshift, we explored the absorption features in the STIS spectra and could not identify any evidence for such coherent features. This further weakens the possibility of a gas rich galaxy as a gravitational lens. Therefore, although we cannot definitively rule it out, we consider lensing to be the less likely explanation for this system.
One way to rule out lensing would be to obtain deeper imaging with HST that may reveal evidence of a merging system, as is seen in the dual QSO found by Chen et al. (2023). Another way would be to measure the flux densities at other radio frequencies and compare their radio spectral indices which, if different for each source, would also rule out lensing. And, if a lens is responsible, its highly compact nature and extreme dust gradients would make it worthy of study in its own right.
### Dual statistics for red QSOs
The serendipitous discovery of a QSO pair in HST imaging of a red QSO raises the question of their frequency compared to unreddened QSOs. Shen et al. (2023) investigate statistically the incidence of QSO pairs using Gaia detections of known SDSS QSOs with \(L_{\rm bol}>10^{45.8}\) erg s\({}^{-1}\) at \(1.5<z<3.5\) with separations of \(0\farcs 4-3\arcsec\) and find an integrated pair fraction of \(\sim 6\times 10^{-4}\). Assuming this fraction is constant at the \(0\farcs 26\) separation of W2M J1220 and is a factor of \(\sim 10\) higher given its lower luminosity (\(L_{\rm bol}\sim 10^{44.8}\) erg s\({}^{-1}\)), the pair fraction is estimated to be \(\sim 10^{-3}\)(Shen et al., 2023). However, only 17 red QSOs have been imaged with HST at \(z\sim 2\) (11 F2M quasars reported in Glikman et al., 2015, and 6 W2M QSOs, which include W2M J1220). This fraction of 0.06 (1/17) would be an order of magnitude higher than that found for the luminous, unobscured QSOs investigated in Shen et al. (2023). Given that red QSOs are known to be hosted by major mergers, this population may be the most likely for finding dual QSOs although with only a single system, we cannot draw broad conclusions.
## 5 Conclusions
We report the discovery of a dual QSO candidate, separated by \(0\farcs 26\) corresponding to 2.2 kpc at \(z=1.889\). The sources are confirmed as QSOs with a spatially-resolved STIS spectrum and high-resolution VLBA imaging at 1.4 GHz which reveal two point sources consistent with the positions in the HST images. The two components are reddened by different amounts of dust extinction. When corrected for this extinction, the properties of the QSOs are similar, including black hole masses \(\sim 10^{7.5}M_{\odot}\) and radio loudness of \(\sim 0.5\) (though their Eddington ratios differ). These similarities mean we cannot rule out gravitational lensing, though the lens is not detected in the imaging and extended features seen in the HST imaging may indicate merging hosts. The features of these two QSOs are consistent with previous findings in dual AGNs.
A dual QSO discovered at cosmic noon in a survey for red QSOs, which is a population known to be hosted by major mergers, can provide a unique population in which to search for such systems where both black holes are active at the same time. Given that only \(\sim 30\) red quasars have been observed with HST, finding a candidate dual QSO in such a small sample suggests an elevated incidence of dual activity in red QSOs. Because W2M J1220 was found serendipitously, a targeted high resolution imaging effort of red QSOs at \(z=2-3\) may be the most fruitful place to find dual quasars during a crucial phase of SMBH/Galaxy co-evolution.
We thank Marianne Vestergaard for sharing the Fe UV template which we used to look for features in our spectra. E.G. acknowledges the generous support of the Cottrell Scholar Award through the Research Corporation for Science Advancement. E.G. is grateful to the Mittelman Family Foundation for their generous support. We gratefully acknowledge the National Science Foundation's support of the Keck Northeast Astronomy Consortium's REU program through grant AST-1950797. BDS acknowledges support through a UK Research and Innovation Future Leaders Fellowship [grant number MR/T044136/1]. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via 10.17909/5ydb-ex84 and 10.17909/s2sz-4252. This research is based on observations made with the NASA/ESA _Hubble_ Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program PID 16794. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work made use of the Swinburne University of Technology software correlator (Deller et al., 2011), developed as part of the Australian Major National Research Facilities Programme and operated under licence. Facilities: HST (STIS), VLBA, SDSS, Palomar (TripleSpec). Software: astroconda, CASA.
|
2309.14698 | A Toeplitz-like operator with rational matrix symbol having poles on the
unit circle: Invertibility and Riccati equations | This paper is a continuation of the work on unbounded Toeplitz-like operators
$T_\Om$ with rational matrix symbol $\Om$ initiated in Groenewald et. al
(Complex Anal. Oper. Theory 15, 1(2021)), where a Wiener-Hopf type
factorization of $\Om$ is obtained and used to determine when $T_\Om$ is
Fredholm and compute the Fredholm index in case $T_\Om$ is Fredholm. Due to the
high level of non-uniqueness and complicated form of the Wiener-Hopf type
factorization, it does not appear useful in determining when $T_\Om$ is
invertible. In the present paper we use state space methods to characterize
invertibility of $T_\Om$ in terms of the existence of a stabilizing solution of
an associated nonsymmetric discrete algebraic Riccati equation, which in turn
leads to a pseudo-canonical factorization of $\Om$ and concrete formulas of
$T_\Om^{-1}$. | G. J. Groenewald, S. ter Horst, J. Jaftha, A. C. M. Ran | 2023-09-26T06:20:06Z | http://arxiv.org/abs/2309.14698v1 | A Toeplitz-like operator with rational matrix symbol having poles on the unit circle: Invertibility and Riccati equations
###### Abstract
This paper is a continuation of the work on unbounded Toeplitz-like operators \(T_{\Omega}\) with rational matrix symbol \(\Omega\) initiated in Groenewald et al. (Complex Anal. Oper. Theory 15, 1 (2021)), where a Wiener-Hopf type factorization of \(\Omega\) is obtained and used to determine when \(T_{\Omega}\) is Fredholm and compute the Fredholm index in case \(T_{\Omega}\) is Fredholm. Due to the high level of non-uniqueness and complicated form of the Wiener-Hopf type factorization, it does not appear useful in determining when \(T_{\Omega}\) is invertible. In the present paper we use state space methods to characterize invertibility of \(T_{\Omega}\) in terms of the existence of a stabilizing solution of an associated nonsymmetric discrete algebraic Riccati equation, which in turn leads to a pseudo-canonical factorization of \(\Omega\) and concrete formulas of \(T_{\Omega}^{-1}\).
keywords: Toeplitz operators, unbounded operators, invertibility, Riccati equations, pseudo-canonical factorization. MSC: Primary 47B35, 47A53; Secondary 47A68
## 1 Introduction
In two recent papers [10; 11], we explored the matrix analogue of an unbounded Toeplitz-like operator that was investigated in [7; 8; 9] for scalar rational symbols with poles on the unit circle \(\mathbb{T}\). While many of the classical
operator theory topics, like Fredholmness, invertibility and spectrum, are well understood in the scalar case, the case of matrix symbols appears to be more intricate. In [10] a Wiener-Hopf type factorization was obtained, from which the Fredholm index can be determined, in case the symbol has no zeroes on \(\mathbb{T}\). However, this Wiener-Hopf type factorization has a high level of non-uniqueness and, unlike in the classical case, generally does not lead to a diagonalization of the symbol. As a result, although some further Fredholm characteristics can be determined from the Wiener-Hopf type factorization [11], it does not seem to be an adequate tool to compute the dimensions of the kernel and the cokernel, nor does it seem to give a clear characterization of invertibility. In the present paper we take a different approach to the question of invertibility of this Toeplitz-like operator, using Riccati equations and pseudo-canonical factorization.
Unbounded Toeplitz operators appeared first in a paper of Hartman and Wintner [12] in 1950, but only became an active topic with the seminal paper of Sarason [16] in connection to truncated Toeplitz operators; see [15] for how unbounded Toeplitz operators with matrix symbols come into play. More recently, kernels of unbounded Toeplitz operators appeared in the study of nearly backward shift invariant subspaces and Toeplitz inverses [3, 4].
Next we introduce some notation, which is required to define our Toeplitz-like operator and state our main results. We write Rat for the space of rational functions, \(\mathrm{Rat}(\mathbb{T})\) for the functions in Rat that only have poles on the unit circle \(\mathbb{T}\), \(\mathrm{Rat}_{0}(\mathbb{T})\) for the strictly proper functions in \(\mathrm{Rat}(\mathbb{T})\), \(\mathcal{P}\) for the space of polynomials and for positive integers \(k\) we indicate the polynomials of degree at most \(k\) by \(\mathcal{P}_{k}\), extending it to all integers by setting \(\mathcal{P}_{k}=\{0\}\) in case \(k\leq 0\). For positive integers \(m\) and \(n\), we indicate the spaces of \(m\times n\) matrices with entries from these function spaces by \(\mathrm{Rat}^{m\times n}\), \(\mathrm{Rat}_{0}(\mathbb{T})^{m\times n}\), etc. In the case of vector functions, when \(n=1\), we will just write \(m\) instead of \(m\times 1\). For \(1<p<\infty\), \(L^{p}\) and \(H^{p}\) denote the Lebesgue space and Hardy space, respectively, and \(K^{p}\) is the standard complement of \(H^{p}\) in \(L^{p}\). With \(L^{p}_{m}\), \(H^{p}_{m}\) and \(K^{p}_{m}\) we indicate the spaces of vectors of length \(m\) with entries from \(L^{p}\) and \(H^{p}\), respectively.
Let \(\Omega\in\mathrm{Rat}^{m\times m}\) with possibly poles on \(\mathbb{T}\) and \(\det\Omega\not\equiv 0\), and let \(1<p<\infty\). We then define the Toeplitz-like operator \(T_{\Omega}\left(H^{p}_{m}\to H^{p}_{m}\right)\) by
\[\begin{split}&\operatorname{Dom}(T_{\Omega})=\left\{f\in H^{p}_{m}\colon\ \Omega f=h+\eta\text{ where }h\in L^{p}_{m}(\mathbb{T})\text{ and }\eta\in\operatorname{Rat}^{m}_{0}(\mathbb{T})\right\},\\ &T_{\Omega}f=\mathbb{P}h\text{ with }\mathbb{P}\text{ the Riesz projection of }L^{p}_{m}(\mathbb{T})\text{ onto }H^{p}_{m}.\end{split} \tag{1.1}\]
By the Riesz projection, \(\mathbb{P}\), we mean the projection of \(L^{p}_{m}\) onto \(H^{p}_{m}\), as discussed in [14, pages 149-153].
As usual, \(\Omega\in\mathrm{Rat}^{m\times m}\) has a pole at \(z_{0}\in\mathbb{C}\cup\{\infty\}\), if any of its entries has a pole at \(z_{0}\). In case \(\det\Omega\not\equiv 0\), a zero of \(\Omega\) is a pole of its inverse \(\Omega^{-1}(z):=\Omega(z)^{-1}\). It is not necessarily the case that the zeroes of \(\Omega\) correspond to the zeroes of \(\det\Omega\), and \(\Omega\) can have both a pole and a zero at the same point \(z_{0}\in\mathbb{C}\cup\{\infty\}\). It was proved in [10] that \(T_{\Omega}\) is Fredholm if and only if \(\Omega\) has no zeroes on \(\mathbb{T}\). In particular, for \(T_{\Omega}\) to be invertible, that is, bijective from \(\mathrm{Dom}(T_{\Omega})\) onto \(H^{p}_{m}\), it is necessary that \(\Omega\) has no zeroes on \(\mathbb{T}\).
In the classical case, invertibility of Toeplitz operators with rational symbols can be studied via Riccati equations and canonical factorizations associated with state space realizations of the symbol, cf., [1, 2]. In the present paper, we follow the approach of [6]. Let \(\Omega\in\operatorname{Rat}^{m\times m}\) and assume \(\Omega\) is given by a minimal state space realization of the form
\[\Omega(z)=R_{0}+zC(I-zA)^{-1}B+\gamma(zI-\alpha)^{-1}\beta, \tag{1.2}\]
with \(R_{0}\in\mathbb{C}^{m\times m}\) and with \(A,B,C\) and \(\alpha,\beta,\gamma\) matrices of appropriate sizes such that \(A\) has all its eigenvalues in the open unit disc \(\mathbb{D}\) and \(\alpha\) has all its eigenvalues in the closed unit disc \(\overline{\mathbb{D}}\), that is, \(A\) is stable and \(\alpha\) is semi-stable (in [6] \(\alpha\) is also stable). Minimality in this setting means that one cannot find a representation for \(\Omega\) of this form with \(A\) and \(\alpha\) matrices of smaller size; equivalently, the triples \((C,A,B)\) and \((\gamma,\alpha,\beta)\) both provide observable and controllable discrete-time linear systems, cf. [13]. Despite the fact that \(\Omega\) has poles on \(\mathbb{T}\), so that \(\alpha\) has eigenvalues on \(\mathbb{T}\), there is a fairly direct analogue of the canonical factorization result of [6].
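For a numerical sanity check, the realization (1.2) can be evaluated pointwise; a minimal sketch with placeholder state-space matrices:

```python
# Evaluate Omega(z) = R0 + z C (I - z A)^{-1} B + gamma (z I - alpha)^{-1} beta
# from (1.2). All matrices are placeholders of compatible sizes
# (A stable, alpha semi-stable).
import numpy as np

def omega(z, R0, A, B, C, alpha, beta, gamma):
    s, t = A.shape[0], alpha.shape[0]
    return (R0
            + z * C @ np.linalg.solve(np.eye(s) - z * A, B)
            + gamma @ np.linalg.solve(z * np.eye(t) - alpha, beta))
```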
**Theorem 1.1**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and no zeroes on \(\mathbb{T}\) and assume that \(\Omega\) is given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable. Then the following are equivalent:_
* _There exists a matrix_ \(Q\) _such that_ \(R_{0}-\gamma QB\) _is invertible,_ \(Q\) _satisfies the nonsymmetric discrete algebraic Riccati equation:_ \[Q=\alpha QA+(\beta-\alpha QB)\left(R_{0}-\gamma QB\right)^{-1}(C-\gamma QA),\] (1.3) _and such that the matrices_ \[\begin{split} A_{\circ}&:=A-B(R_{0}-\gamma QB)^{- 1}(C-\gamma QA),\\ \alpha_{\circ}&:=\alpha-\left(\beta-\alpha QB\right) \left(R_{0}-\gamma QB\right)^{-1}\gamma\end{split}\] (1.4) _are both stable._
* \(\Omega\) _has a right pseudo-canonical factorization_ \(\Omega(z)=\Psi(z)\Theta(z)\)_, i.e.,_ \(\Psi\) _and_ \(\Theta\) _are_ \(m\times m\) _rational matrix functions with_ \(\det\Psi\not\equiv 0\) _and_ \(\det\Theta\not\equiv 0\) _and such that_ \(\Theta\) _has poles only outside or on the unit circle_ \(\mathbb{T}\)_,_ \(\Theta^{-1}\) _has poles only outside_ \(\mathbb{T}\)_,_ \(\Psi\) _has poles only inside or on the unit circle_ \(\mathbb{T}\)_, and_ \(\Psi^{-1}\) _has poles only inside the unit circle_ \(\mathbb{T}\)_._
_Moreover, if \(Q\) is a solution to (1.3) such that \(A_{\circ}\) and \(\alpha_{\circ}\) are stable, then a right pseudo-canonical factorization is obtained as follows: Let \(\delta\) and \(D\) be invertible matrices such that \(\delta D=R_{0}-\gamma QB\) and set_
\[C_{\circ}=\delta^{-1}(C-\gamma QA)\quad\text{and}\quad\beta_{\circ}=(\beta- \alpha QB)D^{-1}. \tag{1.5}\]
_Then \(\Omega(z)=\Psi(z)\Theta(z)\) holds with \(\Psi\) and \(\Theta\) defined as_
\[\Theta(z)=D+zC_{\circ}(I-zA)^{-1}B\quad\text{and}\quad\Psi(z)=\delta+\gamma( zI-\alpha)^{-1}\beta_{\circ}, \tag{1.6}\]
_and the inverses of \(\Psi\) and \(\Theta\) are given by_
\[\begin{split}\Theta^{-1}(z)&=D^{-1}-zD^{-1}C_{\circ}(I -zA_{\circ})^{-1}BD^{-1},\\ \Psi^{-1}(z)&=\delta^{-1}-\delta^{-1}\gamma(zI- \alpha_{\circ})^{-1}\beta_{\circ}\delta^{-1}.\end{split} \tag{1.7}\]
_Furthermore, the solution \(Q\) of the Riccati equation (1.3) so that \(A_{\circ}\) and \(\alpha_{\circ}\) are stable is unique. Finally, the realizations in (1.6) and (1.7) for \(\Theta\), \(\Psi\), \(\Theta^{-1}\) and \(\Psi^{-1}\) are minimal._
The above result is proved in Section 2 and is essentially obtained by specifying Theorem 1.1 of [6] for the function \(\Omega_{r}\) defined by
\[\Omega_{r}(z)=\Omega(rz) \tag{1.8}\]
for \(r>1\) small enough, so that \(\Omega_{r}\) does not have poles or zeroes on \(\mathbb{T}\). More generally, for any function \(f\), scalar-, vector- or matrix-valued, and scalar \(r>0\) we write \(f_{r}\) for the function \(f_{r}(z)=f(rz)\) defined for \(z\in\mathbb{C}\) for which \(rz\) is in the domain of \(f\).
In order to characterize invertibility of \(T_{\Omega}\), more is required than what is in [6]: we want to compare invertibility of the unbounded operator \(T_{\Omega}\) with invertibility of the bounded Toeplitz operator \(T_{\Omega_{r}}\). Note that invertibility in both cases means bijectivity from the domain of definition onto \(H^{p}_{m}\). Hence, the inverse of \(T_{\Omega}\) will be bounded. For \(r>1\), we define the annulus
\[\mathfrak{A}_{r}:=\{z\in\mathbb{C}\colon r^{-1}<|z|<r\}. \tag{1.9}\]
It turns out that \(T_{\Omega}\) and \(T_{\Omega_{r}}\) can be compared, not only with respect to invertibility, but even with respect to their Fredholm properties, in case they are Fredholm.
**Theorem 1.2**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and assume that \(\Omega\) has no zeroes on \(\mathbb{T}\). Define \(\Omega_{r}\) by (1.8). Let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). Then \(T_{\Omega}\) is Fredholm and for each \(1<r<r_{0}\), \(T_{\Omega_{r}}\) is bounded and Fredholm and we have_
\[\dim\operatorname{Ker}T_{\Omega}=\dim\operatorname{Ker}T_{\Omega_{r}}\quad \text{and}\quad\operatorname{codim}\operatorname{Ran}T_{\Omega}=\operatorname {codim}\operatorname{Ran}T_{\Omega_{r}}.\]
_In particular, \(T_{\Omega}\) is invertible if and only if \(T_{\Omega_{r}}\) is invertible for some (and hence all) \(1<r<r_{0}\)._
We shall prove Theorem 1.2 in Section 3. Our second main result, together with [6], shows that invertibility of \(T_{\Omega}\) is equivalent to items (i) and (ii) in Theorem 1.1. With some further work we derive, in Section 4 below, formulas for the inverse of \(T_{\Omega}\), as given in the following result.
**Theorem 1.3**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and no zeroes on \(\mathbb{T}\) and assume that \(\Omega\) is given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable. Then \(T_{\Omega}\) is invertible if and only if the Riccati equation (1.3) has a solution
\(Q\) such that \(A_{\circ}\) and \(\alpha_{\circ}\) in (1.4) are stable, or, equivalently, if \(\Omega\) has a pseudo-canonical factorization \(\Omega(z)=\Psi(z)\Theta(z)\) as in item (ii) of Theorem 1.1. In that case, the inverse of \(T_{\Omega}\) is the bounded operator given by_
\[T_{\Omega}^{-1}=T_{\Theta}^{-1}T_{\Psi}^{-1}=T_{\Theta^{-1}}T_{\Psi^{-1}}. \tag{1.10}\]
_Moreover, \(T_{\Omega}^{-1}\) has a block matrix representation \(\left[T_{\Omega}^{-1}\right]_{i,j}\) with respect to the standard block basis of \(H_{m}^{p}\) that is given by_
\[\left[T_{\Omega}^{-1}\right]_{i,j}=\sum_{k=0}^{\min(i,j)}\Theta_{i-k}^{\times }\Psi_{j-k}^{\times}, \tag{1.11}\]
_where_
\[\begin{array}{ll}\Theta_{0}^{\times}=D^{-1},&\Theta_{j}^{\times}=-D^{-1}(C_ {\circ})(A_{\circ})^{j-1}BD^{-1},\ \ j=1,2,\ldots,\\ \Psi_{0}^{\times}=\delta^{-1},&\Psi_{j}^{\times}=-\delta^{-1}\gamma\left( \alpha_{\circ}\right)^{j-1}\left(\beta_{\circ}\right)\delta^{-1},\ \ j=1,2,\ldots.\end{array}\]
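As an illustration of (1.11), the following sketch assembles a finite top-left corner of the block matrix of \(T_{\Omega}^{-1}\); the realization matrices, together with the factors \(\delta\), \(D\) and the matrices \(A_{\circ}\), \(\alpha_{\circ}\), \(C_{\circ}\), \(\beta_{\circ}\) from Theorem 1.1, are assumed to be available as placeholders.

```python
# Build the (i, j) blocks of T_Omega^{-1} from (1.11), given the
# stabilizing data of Theorem 1.1 (all matrices are placeholders of
# compatible sizes, with delta @ D = R0 - gamma Q B).
import numpy as np

def inverse_blocks(A0, B, C0, alpha0, beta0, gamma, D, delta, N):
    """Return the N x N array of m x m blocks [T_Omega^{-1}]_{i,j}."""
    Dinv = np.linalg.inv(D)
    dinv = np.linalg.inv(delta)
    # Taylor coefficients of Theta^{-1} and Psi^{-1} from Theorem 1.3
    Theta_x = [Dinv] + [-Dinv @ C0 @ np.linalg.matrix_power(A0, j - 1) @ B @ Dinv
                        for j in range(1, N)]
    Psi_x = [dinv] + [-dinv @ gamma @ np.linalg.matrix_power(alpha0, j - 1)
                      @ beta0 @ dinv for j in range(1, N)]
    m = D.shape[0]
    T = np.zeros((N, N, m, m))
    for i in range(N):
        for j in range(N):
            for k in range(min(i, j) + 1):
                T[i, j] += Theta_x[i - k] @ Psi_x[j - k]
    return T
```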
For the final result we present in this introduction, we restrict to the case where \(p=2\), since it relies on a result from [6], which is proved only for \(p=2\). Let \(\Omega\in\mathrm{Rat}^{m\times m}\) be given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable and assume \(A\) and \(\alpha\) are of size \(s\times s\) and \(t\times t\), respectively. We define the observability operator for the pair \((C,A)\) as
\[\mathcal{O}_{C,A}:\mathbb{C}^{s}\to H_{m}^{2},\quad\mathcal{O}_{C,A}:x\mapsto C(I-zA)^{-1}x,\ \ z\in\mathbb{D}, \tag{1.12}\]
and the controllability operator for the pair \((\alpha,\beta)\) as
\[\begin{array}{l}\mathcal{C}_{\alpha,\beta}:\mathrm{Dom}(\mathcal{C}_{\alpha,\beta})\rightarrow\mathbb{C}^{t},\quad\mathcal{C}_{\alpha,\beta}f=\frac{1}{2 \pi}\int_{-\pi}^{\pi}(e^{it}I-\alpha)^{-1}\beta f(e^{it})\,\mathrm{d}t,\\ \mathrm{for}\quad f\in\mathrm{Dom}(\mathcal{C}_{\alpha,\beta}):=\left\{f\in H _{m}^{2}\colon\int_{-\pi}^{\pi}(e^{it}I-\alpha)^{-1}\beta f(e^{it})\,\mathrm{d }t\ \mathrm{exists}\right\}.\end{array} \tag{1.13}\]
Since \(A\) is stable, it is clear that \(\mathcal{O}_{C,A}\) defines a bounded operator from \(\mathbb{C}^{s}\) into \(H_{m}^{2}\). Due to the semi-stability of \(\alpha\), \(\mathcal{C}_{\alpha,\beta}\) need not be bounded, but it is the case that the subspace
\[\mathcal{D}_{r}:=\{f\in H_{m}^{2}\colon f_{r}\in H_{m}^{2}\},\]
for \(r>1\), is contained in \(\mathrm{Dom}(\mathcal{C}_{\alpha,\beta})\). This will be proved in Lemma 4.2 in Section 4 below, where we will also prove the following proposition.
**Proposition 1.4**.: _Consider the case \(p=2\). Let \(\Omega\in\mathrm{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and such that \(T_{\Omega}\) is invertible. Assume that \(\Omega\) is given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable. Then the solution \(Q\) of the algebraic Riccati equation (1.3) that makes \(A_{\circ}\) and \(\alpha_{\circ}\) in (1.4) stable is given by_
\[Q=\mathcal{C}_{\alpha,\beta}T_{\Omega}^{-1}\mathcal{O}_{C,A}. \tag{1.14}\]
The formula for \(Q\) is analogous to that in [6], where poles on \(\mathbb{T}\) are not allowed, but requires more attention since \(\mathcal{C}_{\alpha,\beta}\) is not necessarily bounded. To see that the right hand side in (1.14) is well defined, we point out that \(\mathcal{O}_{C,A}\) maps \(\mathbb{C}^{s}\) into \(\mathcal{D}_{r}\) for \(r>1\) small enough, while \(T_{\Omega}^{-1}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\), again for \(r>1\) small enough, which is contained in the domain of \(\mathcal{C}_{\alpha,\beta}\). That \(T_{\Omega}^{-1}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\) follows from Proposition 3.1 below.
We conclude this introduction with a brief overview of the remainder of the paper. In Section 2 we apply the main result of [6] to the function \(\Omega_{r}\) in (1.8), and translate back to the state space realization of \(\Omega\), leading to a proof of Theorem 1.1. In the next section we investigate the relation between \(T_{\Omega}\) and \(T_{\Omega_{r}}\), and give a proof of Theorem 1.2. The work of Sections 2 and 3, is then combined in Section 4 to prove Theorem 1.3 as well as Proposition 1.4.
## 2 Riccati equation, canonical factorization and inversion for \(\Omega_{r}\)
Suppose that \(\Omega\in\operatorname{Rat}^{m\times m}\) is given by the minimal realization formula (1.2), that is:
\[\Omega(z)=R_{0}+zC(I-zA)^{-1}B+\gamma(zI-\alpha)^{-1}\beta, \tag{2.1}\]
with \(A\) being stable and \(\alpha\) being semistable. It is then clear that \(\Omega_{r}\) defined by (1.8) admits the state space realization
\[\Omega_{r}(z)=\Omega(rz) =R_{0}+z(rC)(I-z(rA))^{-1}B+\gamma\left(zI-\frac{\alpha}{r} \right)^{-1}\frac{\beta}{r}\] \[=R_{0}+zC_{r}(I-zA_{r})^{-1}B+\gamma(zI-\alpha_{r})^{-1}\beta_{r} \tag{2.2}\]
with
\[A_{r}=rA,\quad C_{r}=rC,\quad\alpha_{r}=\frac{\alpha}{r},\quad\beta_{r}=\frac {\beta}{r}. \tag{2.3}\]
As in Theorem 1.1, let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\), with \(\mathfrak{A}_{r_{0}}\) as defined in (1.9). Then, for \(1<r<r_{0}\), \(\Omega_{r}\) has no poles and no zeroes on \(\mathbb{T}\). Therefore, the results of [6] apply to \(\Omega_{r}\) and its realization (2.2)-(2.3). Note that the paper [6] only considers the case of Toeplitz operators on \(H_{m}^{2}\). However, since invertibility of Toeplitz operators on \(H_{m}^{p}\) with rational matrix symbols can be characterized in terms of their Wiener-Hopf factorizations, which are independent of \(p\), invertibility on \(H_{m}^{p}\) is independent of the value of \(p\). We now specify the main result of [6] to \(\Omega_{r}\), together with some supplementary observations, in the next proposition. This result is subsequently used to prove Theorem 1.1.
**Proposition 2.1**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) be given by the realization (2.1) with \(A\) stable and \(\alpha\) semi-stable, so that \(\Omega_{r}\) is given by the realization (2.2)-(2.3). Let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). For \(1<r<r_{0}\) the following are equivalent:_
1. \(T_{\Omega_{r}}\) _is invertible._
_._
2. _There exists a matrix_ \(Q\) _such that_ \(R_{0}-\gamma QB\) _is invertible,_ \(Q\) _satisfies the nonsymmetric discrete algebraic Riccati equation:_ \[Q=\alpha QA+\left(\beta-\alpha QB\right)(R_{0}-\gamma QB)^{-1}(C-\gamma QA),\] (2.4) _and_ \(rA_{\circ}\) _and_ \(r^{-1}\alpha_{\circ}\) _are stable, with_ \(A_{\circ}\) _and_ \(\alpha_{\circ}\) _given by (_1.4_)._
3. \(\Omega_{r}\) _has a canonical factorization_ \(\Omega_{r}(z)=\Psi^{(r)}(z)\Theta^{(r)}(z)\)_, i.e.,_ \(\Psi^{(r)}\) _and_ \(\Theta^{(r)}\) _are_ \(m\times m\) _rational matrix functions with_ \(\det\Psi^{(r)}\not\equiv 0\) _and_ \(\det\Theta^{(r)}\not\equiv 0\) _and such that_ \(\Theta^{(r)}\) _and_ \((\Theta^{(r)})^{-1}\) _have poles only outside_ \(\mathbb{T}\) _and_ \(\Psi^{(r)}\) _and_ \((\Psi^{(r)})^{-1}\) _have poles only inside_ \(\mathbb{T}\)_._
_Moreover, the solution \(Q\) of the Riccati equation (2.4) such that \(rA_{\circ}\) and \(r^{-1}\alpha_{\circ}\) are stable is unique and independent of \(r\), i.e., for each \(1<r<r_{0}\) one obtains the same solution \(Q\) in item (ii). Furthermore, if \(Q\) is as in (ii), then a canonical factorization of \(\Omega_{r}\) is obtained with \(\Theta^{(r)}=\Theta_{r}\) and \(\Psi^{(r)}=\Psi_{r}\), where \(\Theta\) and \(\Psi\) are defined as in Theorem 1.1, and \(\Theta_{r}\) and \(\Psi_{r}\) are defined according to (1.8). In case one of items (i)-(iii) holds, and hence all, we have \(T_{\Omega_{r}}^{-1}=T_{\Theta_{r}}^{-1}T_{\Psi_{r}}^{-1}=T_{\Theta_{r}^{-1}}T _{\Psi_{r}^{-1}}\)._
**Proof.** Since \(\Omega_{r}\) has no poles on \(\mathbb{T}\) for \(1<r<r_{0}\), Theorem 1.1 of [6] applies to \(\Omega_{r}\) and its realization (2.2), leading to the equivalence of variations of (i)-(iii) in terms of the matrices in the realizations. Technically, Theorem 1.1 of [6] does not contain an item about the invertibility of \(T_{\Omega_{r}}\), but that invertibility of \(T_{\Omega_{r}}\) is equivalent to the two items in the theorem follows by the discussion preceding the theorem, and this is also where the formula for \(T_{\Omega_{r}}^{-1}\) in terms of the canonical factors appears. It thus remains to show that the statements of items (ii) and (iii) in terms of the realization matrices of \(\Omega_{r}\) correspond to the statements concerning the Riccati solutions and canonical factorization from Theorem 1.1 of [6], respectively.
We start with item (ii). From Theorem 1.1 of [6], and the preceding paragraphs, we obtain that invertibility of \(T_{\Omega_{r}}\) is equivalent to the existence of a matrix \(Q_{r}\) such that \(R_{0}-\gamma Q_{r}B\) is invertible, that satisfies the Riccati equation
\[Q_{r} =\alpha_{r}Q_{r}A_{r}+\left(\beta_{r}-\alpha_{r}Q_{r}B\right)(R_{ 0}-\gamma Q_{r}B)^{-1}(C_{r}-\gamma Q_{r}A_{r})\] \[=\left(\frac{\alpha}{r}\right)Q_{r}(rA)+\left(\frac{\beta}{r}- \frac{\alpha}{r}Q_{r}B\right)(R_{0}-\gamma Q_{r}B)^{-1}(rC-\gamma Q_{r}(rA))\] \[=\alpha Q_{r}A+\left(\beta-\alpha Q_{r}B\right)(R_{0}-\gamma Q_{ r}B)^{-1}(C-\gamma Q_{r}A)\]
and such that
\[A_{\circ,r} =A_{r}-B(R_{0}-\gamma Q_{r}B)^{-1}(C_{r}-\gamma Q_{r}A_{r})\] \[=rA-B(R_{0}-\gamma Q_{r}B)^{-1}(rC-\gamma Q_{r}(rA))=rA_{\circ},\] \[\alpha_{\circ,r} =\alpha_{r}-\left(\beta_{r}-\alpha_{r}Q_{r}B\right)(R_{0}-\gamma Q _{r}B)^{-1}\gamma\] \[=\frac{\alpha}{r}-\left(\frac{\beta}{r}-\frac{\alpha}{r}Q_{r}B \right)(R_{0}-\gamma Q_{r}B)^{-1}\gamma=\frac{\alpha_{\circ}}{r}\]
are both stable, corresponding to the claim of item (ii). Moreover, the matrix \(Q_{r}\) with these properties is unique. It follows that the Riccati equation that \(Q_{r}\) solves is independent of \(r\), but it is less straightforward that the condition of having \(rA_{\circ}\) and \(\frac{\alpha_{\circ}}{r}\) stable does not introduce a dependency on \(r\); in particular, \(A_{\circ}\) and \(\alpha_{\circ}\) in the above formulas may depend on \(r\). To see that this is not the case, we note that the matrix \(Q_{r}\) can be obtained as the limit of a Riccati difference equation associated with the finite section method for \(T_{\Omega_{r}}\), as discussed in Section 4 of [6]. Indeed, since \(\Omega_{r}\) is continuous on \(\mathbb{T}\) and we assume \(T_{\Omega_{r}}\) to be invertible, for \(N\) large enough the \(N\)-th section of \(T_{\Omega_{r}}\), i.e., the Toeplitz block matrix
\[T_{\Omega_{r},N}=\begin{bmatrix}R_{0,r}&R_{-1,r}&\cdots&R_{1-N,r}\\ R_{1,r}&R_{0,r}&\cdots&R_{2-N,r}\\ \vdots&\vdots&\ddots&\vdots\\ R_{N-1,r}&R_{N-2,r}&\cdots&R_{0,r}\end{bmatrix}, \tag{2.5}\]
with \(R_{j,r}\) the \(j\)-th Fourier coefficient of \(\Omega_{r}\), will be invertible, and the matrices
\[Q_{N,r}:=\mathcal{C}_{\beta_{r},\alpha_{r},N}T_{\Omega_{r},N}^{-1}\mathcal{ O}_{C_{r},A_{r},N}\]
with \(\mathcal{C}_{\beta_{r},\alpha_{r},N}\) and \(\mathcal{O}_{C_{r},A_{r},N}\) given by
\[\mathcal{C}_{\beta_{r},\alpha_{r},N}=\operatorname{Row}_{j=0}^{N-1}(\alpha_{ r}^{j}\beta_{r})\quad\text{and}\quad\mathcal{O}_{C_{r},A_{r},N}=\operatorname{ Col}_{j=0}^{N-1}(C_{r}A_{r}^{j}) \tag{2.6}\]
solve the Riccati difference equation
\[Q_{N+1,r} =\alpha_{r}Q_{N,r}A_{r}+(\beta_{r}-\alpha_{r}Q_{N,r}B)(R_{0}- \gamma Q_{N,r}B)^{-1}(C_{r}-\gamma Q_{N,r}A_{r})\] \[=\alpha Q_{N,r}A+(\beta-\alpha Q_{N,r}B)(R_{0}-\gamma Q_{N,r}B)^{ -1}(C-\gamma Q_{N,r}A)\]
and \(Q_{N,r}\) converges to \(Q_{r}\) as \(N\to\infty\). Hence, in order to see that \(Q_{r}\) is independent of \(r\), it suffices to show that \(Q_{N,r}\) is independent of \(r\). Note that the Fourier coefficients of \(\Omega_{r}\) are given by
\[R_{n,r}=\left\{\begin{array}{ll}C_{r}A_{r}^{n-1}B=r^{n}CA^{n-1}B\text{ for }n>0,\\ R_{0}\text{ for }n=0,\\ \gamma\alpha_{r}^{n-1}\beta_{r}=r^{-n}\gamma\alpha^{n-1}\beta\text{ for }n<0. \end{array}\right. \tag{2.7}\]
It follows that
\[T_{\Omega_{r},N}=\operatorname{Diag}(I_{m},rI_{m},\ldots,r^{N-1}I_{m})T_{ \Omega,N}\operatorname{Diag}(I_{m},rI_{m},\ldots,r^{N-1}I_{m})^{-1},\]
where \(T_{\Omega,N}\) is as in (2.5) with \(R_{n,r}=R_{n,1}\), where \(R_{n,1}\) is defined according to (2.7) with \(r=1\). This shows that, for large \(N\), \(T_{\Omega,N}\) is also invertible and
\[T_{\Omega_{r},N}^{-1}=\operatorname{Diag}(I_{m},rI_{m},\ldots,r^{N-1}I_{m})T _{\Omega,N}^{-1}\operatorname{Diag}(I_{m},rI_{m},\ldots,r^{N-1}I_{m})^{-1}. \tag{2.8}\]
Define \(\mathcal{C}_{\beta,\alpha,N}\) and \(\mathcal{O}_{C,A,N}\) analogous to \(\mathcal{C}_{\beta_{r},\alpha_{r},N}\) and \(\mathcal{O}_{C_{r},A_{r},N}\), with \(\beta_{r},\alpha_{r},C_{r},A_{r}\) replaced by \(\beta,\alpha,C,A\), respectively. It is then easy to see that
\[\mathcal{C}_{\beta_{r},\alpha_{r},N} =\mathcal{C}_{\beta,\alpha,N}\operatorname{Diag}(I_{m},rI_{m}, \ldots,r^{N-1}I_{m})^{-1},\] \[\mathcal{O}_{C_{r},A_{r},N} =\operatorname{Diag}(I_{m},rI_{m},\ldots,r^{N-1}I_{m})\mathcal{ O}_{C,A,N}.\]
Combining these identities with (2.8), it follows that
\[Q_{N,r}=\mathcal{C}_{\beta_{r},\alpha_{r},N}T^{-1}_{\Omega_{r},N}\mathcal{O}_{C_{ r},A_{r},N}=\mathcal{C}_{\beta,\alpha,N}T^{-1}_{\Omega,N}\mathcal{O}_{C,A,N},\]
is indeed independent of \(r\), and consequently, \(Q_{r}\) is also independent of \(r\).
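The finite-section recursion above also suggests a practical way to compute the stabilizing solution numerically: iterate the Riccati difference equation starting from \(Q_{0}=0\). A sketch with placeholder matrices:

```python
# Iterate Q_{N+1} = alpha Q_N A + (beta - alpha Q_N B)
#                   (R0 - gamma Q_N B)^{-1} (C - gamma Q_N A)
# from Q_0 = 0; under the conditions of Theorem 1.1 the iterates
# converge to the stabilizing solution of (1.3). All matrices are
# placeholders: A is s x s, alpha is t x t, B is s x m, C is m x s,
# beta is t x m, gamma is m x t, R0 is m x m, and Q is t x s.
import numpy as np

def riccati_iterate(A, B, C, R0, alpha, beta, gamma, steps=500, tol=1e-12):
    Q = np.zeros((alpha.shape[0], A.shape[0]))
    for _ in range(steps):
        M = np.linalg.inv(R0 - gamma @ Q @ B)
        Q_next = alpha @ Q @ A + (beta - alpha @ Q @ B) @ M @ (C - gamma @ Q @ A)
        if np.linalg.norm(Q_next - Q) < tol:
            return Q_next
        Q = Q_next
    return Q
```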
It remains to prove the equivalence of (ii) (or (i)) and (iii), to show that (iii) can be achieved as described in the proposition, and to verify the formulas for \(T^{-1}_{\Omega_{r}}\). The equivalence of (ii) and (iii), in fact, follows directly from Theorem 1.1 in [6]. We now show that the formulas for the canonical factors from [6] lead to the factorization of \(\Omega_{r}\) using \(\Theta\) and \(\Psi\) from Theorem 1.1. By the formulas in [6], the canonical factorization of \(\Omega_{r}\) is given by \(\Omega_{r}(z)=\Psi^{(r)}(z)\Theta^{(r)}(z)\) where we factor \(R_{0}-\gamma Q_{r}B=\delta D\), as claimed, and set
\[\beta_{\circ,r} =(\beta_{r}-\alpha_{r}Q_{r}B)D^{-1}=r^{-1}(\beta-\alpha Q_{r}B)D^ {-1}=r^{-1}\beta_{\circ},\] \[C_{\circ,r} =\delta^{-1}(C_{r}-\gamma QA_{r})=r\delta^{-1}(C-\gamma QA)=rC_{ \circ},\]
to arrive at
\[\Psi^{(r)}(z) =\delta+\gamma(zI-\alpha_{r})^{-1}\beta_{\circ,r}=\delta+r^{-1} \gamma(zI-r^{-1}\alpha)^{-1}\beta_{\circ}\] \[=\delta+\gamma(rzI-\alpha)^{-1}\beta_{\circ}=\Psi_{r}(z)\]
with inverse
\[\Psi^{(r)}(z)^{-1} =\delta^{-1}-\delta^{-1}\gamma(zI-\alpha_{\circ,r})^{-1}\beta_{ \circ,r}\delta^{-1}\] \[=\delta^{-1}-r^{-1}\delta^{-1}\gamma(zI-r^{-1}\alpha_{\circ})^{- 1}\beta_{\circ}\delta^{-1}=\delta^{-1}-\delta^{-1}\gamma(rzI-\alpha_{\circ})^{ -1}\beta_{\circ}\delta^{-1}\]
and
\[\Theta^{(r)}(z)=D+zC_{\circ,r}(I-zA_{r})^{-1}B=D+rzC_{\circ}(I-rzA)^{-1}B= \Theta_{r}(z)\]
with inverse
\[\Theta^{(r)}(z)^{-1} =D^{-1}-zD^{-1}C_{\circ,r}(I-zA_{\circ,r})^{-1}BD^{-1}\] \[=D^{-1}-rzD^{-1}C_{\circ}(I-rzA_{\circ})^{-1}BD^{-1}.\]
The formula for \(T^{-1}_{\Omega_{r}}\) now follows simply from the text preceding Theorem 1.1 in [6].
Using the equivalence of (ii) and (iii) in Proposition 2.1, it is easy to prove Theorem 1.1.
**Proof of Theorem 1.1.** Since \(\Omega_{r}\), \(\Psi_{r}\) and \(\Theta_{r}\) are rational matrix functions, the factorization \(\Omega_{r}(z)=\Psi_{r}(z)\Theta_{r}(z)\) for some \(1<r<r_{0}\) implies that also \(\Omega(z)=\Psi(z)\Theta(z)\) as well as the formulas for the inverses of \(\Psi\) and \(\Theta\). Hence (iii) in Proposition 2.1 is equivalent to (ii) in Theorem 1.1.
Next we show that the realizations of \(\Theta\) and \(\Psi\) in (1.6) and of \(\Theta^{-1}\) and \(\Psi^{-1}\) in (1.7) are minimal. Note that since the realization of \(\Omega\) is minimal and \(A\) is stable and \(\alpha\) is semi-stable, the McMillan degree, \(\deg(\Omega)\), of \(\Omega\) is equal to the
sum of the sizes of \(A\) and \(\alpha\), say \(s\) and \(t\), respectively. From the formulas of \(\Theta\) and \(\Psi\) it is clear that \(\deg(\Theta)\leq s\) and \(\deg(\Psi)\leq t\). On the other hand, since the McMillan degree is sublogarithmic, we have
\[s+t=\deg(\Omega)=\deg(\Psi\Theta)\leq\deg(\Psi)+\deg(\Theta)\leq s+t.\]
Hence we have equality in each step, which implies \(\deg(\Theta)=s\) and \(\deg(\Psi)=t\), in other words, the realizations of \(\Theta\) and \(\Psi\) are minimal. By the observation right after Proposition 7.2 in [1], it follows that the realization for \(\Theta\) (respectively \(\Psi\)) is minimal if and only if the realization of \(\Theta^{-1}\) (respectively \(\Psi^{-1}\)) is minimal. Hence, also the realizations of \(\Theta^{-1}\) and \(\Psi^{-1}\) are minimal.
Since the solution \(Q=Q_{r}\) of (2.4) used to construct \(rA_{\circ}\) and \(r^{-1}\alpha_{\circ}\) is independent of \(r\), it follows that the solution \(Q\) in item (ii) in Proposition 2.1 is such that \(rA_{\circ}\) and \(r^{-1}\alpha_{\circ}\) are stable for all \(1<r<r_{0}\), so that \(A_{\circ}\) is stable and \(\alpha_{\circ}\) is semi-stable.
Thus, from the equivalence of (ii) and (iii) in Proposition 2.1 it follows that we get the equivalence of (i) and (ii) in Theorem 1.1, except that at this stage we only get \(\alpha_{\circ}\) to be semi-stable. To see that \(\alpha_{\circ}\) is in fact stable, note that \(\Omega(z)=\Psi(z)\Theta(z)\) implies that \(\Psi(z)^{-1}=\Theta(z)\Omega(z)^{-1}\). Since \(A\) is stable, \(\Theta\) has no poles on \(\mathbb{T}\), and \(\Omega^{-1}\) has no poles on \(\mathbb{T}\) because \(\Omega\) is assumed to have no zeroes on \(\mathbb{T}\). Therefore, \(\Psi^{-1}\) has no poles on \(\mathbb{T}\) either, which implies, by minimality of the realization of \(\Psi^{-1}\), that \(\alpha_{\circ}\) has no eigenvalues on \(\mathbb{T}\). Hence \(\alpha_{\circ}\) is stable.
## 3 Fredholmness of \(T_{\Omega}\) versus Fredholmness of \(T_{\Omega_{r}}\)
In this section we prove Theorem 1.2. The proof relies heavily on the connection between \(T_{\Omega}\) and \(T_{\Omega_{r}}\), with \(\Omega_{r}\) as in (1.8), as explained in the next result.
**Proposition 3.1**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and assume that \(\Omega\) has no zeroes on \(\mathbb{T}\). Let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). Define_
\[\mathcal{D}_{r}:=\{f\in H_{m}^{p}\colon f_{r}\in H_{m}^{p}\}. \tag{3.1}\]
_Then for each \(1<r<r_{0}\) we have_
\[\operatorname{Ker}T_{\Omega}\subset\mathcal{D}_{r}\subset H_{m}(\overline{ \mathbb{D}})\subset\operatorname{Dom}(T_{\Omega}). \tag{3.2}\]
_Moreover, \(T_{\Omega}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\), the inverse image \(T_{\Omega}^{-1}(\mathcal{D}_{r})\) of \(\mathcal{D}_{r}\) under \(T_{\Omega}\) lies in \(\mathcal{D}_{r}\), and we have_
\[(T_{\Omega}f)_{r}=T_{\Omega_{r}}f_{r},\quad f\in\mathcal{D}_{r}. \tag{3.3}\]
Proof.: Let \(1<r<r_{0}\). We start by proving (3.2), except for the first inclusion.
The second inclusion is trivial. If \(f\in\mathcal{D}_{r}\), then \(f\) has an analytic extension to \(r\mathbb{D}\), so that, in particular, \(f\) is analytic on \(\overline{\mathbb{D}}\).
The argument to show that \(H_{m}(\overline{\mathbb{D}})\subset\operatorname{Dom}(T_{\Omega})\) is similar to that in the scalar case [9, Theorem 6.2]. Note that each \(f\in H_{m}(\overline{\mathbb{D}})\) is also in \(\mathcal{D}_{r^{\prime}}\) for some \(1<r^{\prime}\) sufficiently close to \(1\). Hence it suffices to show that \(\mathcal{D}_{r}\subset\operatorname{Dom}(T_{\Omega})\). Let \(f\in\mathcal{D}_{r}\). Since \(\Omega\) is rational, \(\Omega f\) is meromorphic on \(r\mathbb{D}\) with finitely many poles, each of finite multiplicity. Computing the residues of the poles of each of the entries in the vector function \(\Omega f\), it is easy to write \(\Omega f\) in the form \(g+\rho\) with \(g\in L_{m}^{p}\) and \(\rho\in\operatorname{Rat}_{0}^{m}(\mathbb{T})\), showing that \(f\in\operatorname{Dom}(T_{\Omega})\). Hence, we have proved the last inclusion.
Next we show that \(T_{\Omega}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\) and that (3.3) holds. Let \(f\in\mathcal{D}_{r}\). Hence \(f\) has an analytic extension to \(r\mathbb{D}\) and \(|f(z)|^{p}\) is integrable on \(r\mathbb{T}\), that is, \(f\in H_{m}^{p}(r\mathbb{T})\). By (3.2), \(f\in\operatorname{Dom}(T_{\Omega})\) and hence \(\Omega f=g+\rho\) for some \(g\in L_{m}^{p}\) and \(\rho\in\operatorname{Rat}_{0}^{m}(\mathbb{T})\). Write \(g=g_{+}+g_{-}\) with \(g_{+}\in H_{m}^{p}\) and \(g_{-}\in K_{m}^{p}\), so that \(T_{\Omega}f=g_{+}\). Since \(f\) is analytic on \(r\mathbb{D}\) and \(\Omega\) and \(\rho\) are rational with no poles in \(\mathfrak{A}_{r}\setminus\mathbb{T}\), it follows that \(g=\Omega f-\rho\) must be analytic in \(\mathfrak{A}_{r}\setminus\mathbb{T}\) with the poles on \(\mathbb{T}\) all having finite multiplicity. However, \(g\in L_{m}^{p}\), and hence cannot have poles of finite multiplicity on \(\mathbb{T}\). Thus \(g\) is analytic in \(\mathfrak{A}_{r}\). Hence, \(g_{+}\) is analytic on \(r\mathbb{D}\) and \(g_{-}\) on \(\mathbb{C}\setminus\overline{r^{-1}\mathbb{D}}\). Since \(r<r_{0}\), by an argument similar to that in the first part of the proof, it follows that \(g_{+}\in H_{m}^{p}(r\mathbb{T})\) and \(g_{-}\in K_{m}^{p}(r^{-1}\mathbb{T})\). This implies that \(g_{+,r}(z)=g_{+}(rz)\) and \(g_{-,r}(z)=g_{-}(rz)\) define functions in \(H_{m}^{p}\) and \(K_{m}^{p}\), respectively. In particular, \(g_{+}\in\mathcal{D}_{r}\) and it follows that \(T_{\Omega}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\). Moreover, \(\rho\) is a rational matrix function with poles only in \(\mathbb{T}\), so that \(\rho_{r}\) only has poles inside \(\mathbb{D}\) and we obtain that \(\rho_{r}\in K_{m}^{p}\). Note further that on \(\mathbb{T}\)
\[\Omega_{r}(z)f_{r}(z) =\Omega(rz)f(rz)=g(rz)+\rho(rz)=g_{r}(z)+\rho_{r}(z)\] \[=g_{+,r}(z)+g_{-,r}(z)+\rho_{r}(z).\]
Therefore, we have
\[T_{\Omega_{r}}f_{r}=\mathbb{P}(\Omega_{r}f_{r})=\mathbb{P}(g_{+,r}+g_{-,r}+ \rho_{r})=g_{+,r}=(T_{\Omega}f)_{r}.\]
Finally, we show that \(T_{\Omega}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\). Since \(0\in\mathcal{D}_{r}\), this proves in particular that \(\operatorname{Ker}T_{\Omega}\subset\mathcal{D}_{r}\) and hence the first inclusion of (3.2). To prove the inclusion, we require the Wiener-Hopf type factorization from [10, Theorems 1.1 and 1.2], namely \(\Omega\) can be factored as
\[\Omega(z)=\Omega_{-}(z)\Xi(z)\Omega_{+}(z),\quad\text{with}\ \ \Xi(z)=z^{-k}\Omega_{ \circ}(z)P_{0}(z) \tag{3.4}\]
for some integer \(k\geq 0\), \(\Omega_{+},\Omega_{\circ},\Omega_{-}\in\operatorname{Rat}^{m\times m}\) and \(P_{0}\in\mathcal{P}^{m\times m}\) such that \(\Omega_{-}\) and \(\Omega_{-}^{-1}\) are both minus functions (i.e., no poles outside \(\mathbb{D}\)), \(\Omega_{+}\) and \(\Omega_{+}^{-1}\) are both plus functions (i.e., no poles in \(\overline{\mathbb{D}}\)), \(\Omega_{\circ}=\operatorname{Diag}_{j=1}^{m}(\phi_{j})\) with \(\phi_{j}\in\operatorname{Rat}(\mathbb{T})\) having no zeroes and having poles only on \(\mathbb{T}\) (in [10, Theorem 1.1], \(\phi_{j}\in\operatorname{Rat}\) can have zeroes on \(\mathbb{T}\), but this cannot occur since \(\Omega\) has no zeroes on \(\mathbb{T}\)), and \(P_{0}\) a lower triangular polynomial with \(\det(P_{0}(z))=z^{N}\) for some integer \(N\geq 0\). It then follows from Theorem 1.3 in [10] that
\[T_{\Omega}=T_{\Omega_{-}}T_{\Xi}T_{\Omega_{+}}\quad\text{and}\quad T_{\Omega_{ -}}^{-1}=T_{\Omega_{-}^{-1}},\ \ T_{\Omega_{+}}^{-1}=T_{\Omega_{+}^{-1}}.\]
To show that \(T_{\Omega}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\), it suffices to show that \(T_{\Xi}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\), \(T_{\Omega_{+}}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\) and \(T_{\Omega_{-}}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\). The latter two inclusions follow from the fact that \(T_{\Omega_{-}}\) and \(T_{\Omega_{+}}\) are invertible with inverses \(T_{\Omega_{-}}^{-1}=T_{\Omega_{-}^{-1}}\) and \(T_{\Omega_{+}}^{-1}=T_{\Omega_{+}^{-1}}\), along with the argument from the previous paragraph applied to \(T_{\Omega_{-}^{-1}}\) and \(T_{\Omega_{+}^{-1}}\) showing that \(\mathcal{D}_{r}\) is an invariant subspace for these two operators; for the latter, note that \(\Omega_{-}\) and \(\Omega_{+}\) do not have zeroes and poles on the annulus \(\mathfrak{A}_{r}\). Hence it remains to show that \(T_{\Xi}^{-1}(\mathcal{D}_{r})\subset\mathcal{D}_{r}\).
Let \(f\in\operatorname{Dom}(T_{\Xi})\) such that \(T_{\Xi}f\in\mathcal{D}_{r}\). Since \(f\in\operatorname{Dom}(T_{\Xi})\), we can apply Lemma 2.1 from [11] to conclude that \(\Xi(z)f(z)=z^{-k}h(z)+\eta(z)\) with \(h\in H_{m}^{p}\) and \(\eta=(\eta_{1},\ldots,\eta_{m})\in\operatorname{Rat}_{0}^{m}(\mathbb{T})\) of the form \(\eta_{j}=r_{j}/q_{j}\in\operatorname{Rat}_{0}(\mathbb{T})\) with \(q_{j}\) the denominator of the \(j\)-th diagonal element of \(\Omega_{0}\). Write \(z^{-k}h(z)=h_{-}(z)+h_{+}(z)\) with \(h_{+}\in H_{m}^{p}\) and \(h_{-}\in K_{m}^{p}\). It is clear that \(h_{-}\) is analytic on \(\mathbb{C}\setminus\{0\}\). Moreover, \(h_{+}=T_{\Xi}f\), so that \(h_{+}\in\mathcal{D}_{r}\), by assumption. Since \(\det\Xi\not\equiv 0\), we have \(f=\Xi^{-1}h_{+}+\Xi^{-1}h_{-}+\Xi^{-1}\eta\). Note that \(\Xi^{-1}\) has no poles on the annulus \(\mathfrak{A}_{r}\) and no zeroes on \(\mathfrak{A}_{r}\setminus\mathbb{T}\). Since \(\Xi^{-1}\), \(f\), \(h_{+}\) and \(h_{-}\) do not have poles on \(\mathbb{T}\), neither can \(\Xi^{-1}\eta\). It follows that \(\Xi^{-1}\eta\), \(\Xi^{-1}h_{+}\) and \(\Xi^{-1}h_{-}\) are all analytic on \(\mathfrak{A}_{r}\). Therefore, \(f\) is analytic on \(\mathfrak{A}_{r}\). However, \(f\in H_{m}^{p}\), so that \(f\) is in fact analytic on \(r\mathbb{D}\). Using that \(\Xi^{-1}\) is rational, \(h_{+}\in\mathcal{D}_{r}\), and \(h_{-}\) and \(\eta\) are continuous on \(r\mathbb{T}\), it follows that \(f\) is \(p\)-integrable on \(r\mathbb{T}\), and hence \(f\in\mathcal{D}_{r}\).
**Lemma 3.2**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\). Then \(T_{\Omega}\) is Fredholm if and only if \(\Omega\) has no zeroes on \(\mathbb{T}\). Assume \(T_{\Omega}\) is Fredholm and let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). Then \(T_{\Omega_{r}}\) is bounded and Fredholm for each \(1<r<r_{0}\)._
Proof.: For \(1<r<r_{0}\), by definition of \(r_{0}\) it is clear that \(\Omega_{r}\) has no poles and no zeroes on \(\mathbb{T}\), so that \(T_{\Omega_{r}}\) is bounded and Fredholm. For \(T_{\Omega}\) the result is not included in [10] but follows from the results proved there. Indeed, consider a Wiener-Hopf type decomposition of \(\Omega\) as in the proof of Proposition 3.1, e.g., as in (3.4). Since \(\Omega_{+}^{-1}\) is a plus function and \(\Omega_{-}^{-1}\) is a minus function, they do not have poles on \(\mathbb{T}\). Also, since \(\det P_{0}(z)=z^{N}\) for some integer \(N\geq 0\), \(P_{0}^{-1}\) as a function in \(\operatorname{Rat}^{m\times m}\) can only have a pole at \(0\), so that \(P_{0}\) also does not have zeroes on \(\mathbb{T}\). This shows that the zeroes of \(\Omega\) on \(\mathbb{T}\) correspond to the zeroes of \(\Omega_{\circ}\). Since \(\Omega_{\circ}\) is a diagonal matrix function, its zeroes correspond to the zeroes of its diagonal elements, which are all on \(\mathbb{T}\), by construction. Since \(\Omega\) has no zeroes on \(\mathbb{T}\), this implies that the numerators of the diagonal elements of \(\Omega_{\circ}\) are constant, assuming the numerators and denominators are co-prime. The statement for \(T_{\Omega}\) now follows from the fact that \(T_{\Omega}\) is Fredholm if and only if the numerators (assuming co-primeness) in \(\Omega_{\circ}\) are constant, according to Theorem 1.4 in [10].
As a consequence of Proposition 3.1, it is easy to show that the dimensions of the kernels of \(T_{\Omega}\) and \(T_{\Omega_{r}}\) are the same.
**Corollary 3.3**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and assume that \(\Omega\) has no zeroes on \(\mathbb{T}\). Let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no
poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). For each \(1<r<r_{0}\) we have_
\[\operatorname{Ker}T_{\Omega}=\{f_{1/r}\colon f\in\operatorname{Ker}T_{\Omega_{r }}\}\quad\text{and}\quad\operatorname{Ker}T_{\Omega_{r}}=\{f_{r}\colon f\in \operatorname{Ker}T_{\Omega}\}.\]
_In particular, we have \(\dim\operatorname{Ker}T_{\Omega}=\dim\operatorname{Ker}T_{\Omega_{r}}\)._
**Proof.** Note that the formula for \(\operatorname{Ker}T_{\Omega_{r}}\) makes sense, since \(\operatorname{Ker}T_{\Omega}\subset\mathcal{D}_{r}\). Since the map \(f\mapsto f_{r}\) defines a bijection from \(\mathcal{D}_{r}\) onto \(H^{p}_{m}\), with inverse map \(h\mapsto h_{1/r}\), it suffices to prove one of the two formulas. From Proposition 3.1 it follows that
\[f\in\operatorname{Ker}T_{\Omega}\subset\mathcal{D}_{r}\iff 0=T_{\Omega}f\iff 0=(T_{\Omega}f)_{r}=T_{\Omega_{r}}f_{r}.\]
This proves the formula for \(\operatorname{Ker}T_{\Omega_{r}}\).
With a bit more work we can prove a similar result for the codimensions of the ranges of \(T_{\Omega}\) and \(T_{\Omega_{r}}\).
**Corollary 3.4**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and assume that \(\Omega\) has no zeroes on \(\mathbb{T}\). Let \(r_{0}>1\) be such that \(\Omega\) has no zeroes in the annulus \(\mathfrak{A}_{r_{0}}\) and no poles in \(\mathfrak{A}_{r_{0}}\setminus\mathbb{T}\). Let \(1<r<r_{0}\) and let \(\mathcal{X}\) be a complement of \(\operatorname{Ran}T_{\Omega_{r}}\) in \(H^{p}_{m}\). Then_
\[\mathcal{X}_{1/r}:=\{h_{1/r}\colon h\in\mathcal{X}\}\]
_is a complement of \(\operatorname{Ran}T_{\Omega}\). In particular, we have_
\[\operatorname{codim}\operatorname{Ran}T_{\Omega}=\operatorname{codim} \operatorname{Ran}T_{\Omega_{r}}.\]
**Proof.** Note that by assumption \(T_{\Omega}\) and \(T_{\Omega_{r}}\) are both Fredholm. Hence they have closed ranges and \(\mathcal{X}\) is finite dimensional. By Proposition 3.1 we have that
\[\{g_{r}\colon g\in T_{\Omega}(\mathcal{D}_{r})\}=\operatorname{Ran}T_{\Omega_{ r}},\]
and thus
\[T_{\Omega}(\mathcal{D}_{r})=(\operatorname{Ran}T_{\Omega_{r}})_{1/r}:=\{g_{1/ r}\colon g\in\operatorname{Ran}T_{\Omega_{r}}\}.\]
Since \(\operatorname{Ran}T_{\Omega_{r}}+\mathcal{X}\) is a direct sum, the same is true for \((\operatorname{Ran}T_{\Omega_{r}})_{1/r}+\mathcal{X}_{1/r}=T_{\Omega}( \mathcal{D}_{r})+\mathcal{X}_{1/r}\), since \(h\in(\operatorname{Ran}T_{\Omega_{r}})_{1/r}\cap\mathcal{X}_{1/r}\) implies that \(h\in\mathcal{D}_{r}\) and \(h_{r}\in\operatorname{Ran}T_{\Omega_{r}}\cap\mathcal{X}=\{0\}\), so that \(h=0\). Also, since \(\operatorname{Ran}T_{\Omega_{r}}+\mathcal{X}=H^{p}_{m}\), we have that \(T_{\Omega}(\mathcal{D}_{r})+\mathcal{X}_{1/r}=\mathcal{D}_{r}\). We claim that \(\operatorname{Ran}T_{\Omega}+\mathcal{X}_{1/r}\) is also a direct sum. Indeed, let \(h\in\operatorname{Ran}T_{\Omega}\cap\mathcal{X}_{1/r}\). Since \(h\in\operatorname{Ran}T_{\Omega}\), we have \(h=T_{\Omega}f\) for some \(f\in\operatorname{Dom}(T_{\Omega})\). Moreover, we have \(h\in\mathcal{D}_{r}\), because \(\mathcal{X}_{1/r}\subset\mathcal{D}_{r}\). But then Proposition 3.1 implies that also \(f\in\mathcal{D}_{r}\). Hence \(h\in T_{\Omega}(\mathcal{D}_{r})\cap\mathcal{X}_{1/r}=\{0\}\). Finally, note that
\[\mathcal{D}_{r}=T_{\Omega}(\mathcal{D}_{r})+\mathcal{X}_{1/r}\subset \operatorname{Ran}T_{\Omega}+\mathcal{X}_{1/r}\subset H^{p}_{m},\]
and that \(\operatorname{Ran}T_{\Omega}+\mathcal{X}_{1/r}\) is closed, since \(\operatorname{Ran}T_{\Omega}\) and \(\mathcal{X}_{1/r}\) are both closed and \(\mathcal{X}_{1/r}\) finite dimensional, using [5, Proposition III.4.3]. The fact that \(\mathcal{D}_{r}\) is dense now implies that \(\operatorname{Ran}T_{\Omega}+\mathcal{X}_{1/r}=H^{p}_{m}\) is a direct sum decomposition of \(H^{p}_{m}\).
**Proof of Theorem 1.2.** The claims follow directly by combining the results of Lemma 3.2 and Corollaries 3.3 and 3.4.
## 4 Invertibility of \(T_{\Omega}\) and the formula for \(T_{\Omega}^{-1}\)
In this section we prove Theorem 1.3 and Proposition 1.4. That invertibility of \(T_{\Omega}\) corresponds to the existence of a solution to the Riccati equation (1.3) so that \(A_{\circ}\) and \(\alpha_{\circ}\) in (1.4) are stable, and hence to the pseudo-canonical factorization of \(\Omega\), follows easily from the results of the previous sections. To obtain the formula for \(T_{\Omega}^{-1}\) and its block matrix representation in terms of the realization (1.2) requires more work. We shall first present the operator factorization of \(T_{\Omega}\) corresponding to the pseudo-canonical factorization.
**Lemma 4.1**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) and no zeroes on \(\mathbb{T}\) be given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable. Assume that \(\Omega\) admits a pseudo-canonical factorization \(\Omega=\Psi\Theta\) as in item (ii) of Theorem 1.1, with \(\Theta\) and \(\Psi\) as in (1.6). Then \(T_{\Omega}\) is given by_
\[T_{\Omega}=T_{\Psi}T_{\Theta}.\]
_In particular, \(T_{\Theta}\) is bounded and has a bounded inverse \(T_{\Theta}^{-1}=T_{\Theta^{-1}}\). Moreover, \(T_{\Psi}\) admits an upper triangular block Toeplitz matrix representation with respect to the standard basis of \(H_{m}^{p}\) given by_
\[\left[T_{\Psi}\right]_{i,j}=0\text{ if }j<i,\quad\left[T_{\Psi}\right]_{i,j}= \delta\text{ if }j=i,\quad\left[T_{\Psi}\right]_{i,j}=\gamma\alpha^{j-i-1}\beta_{\circ} \text{ if }j>i. \tag{4.1}\]
Proof.: Since \(A\) and \(A_{\circ}\) are both stable, it follows that \(\Theta\) and \(\Theta^{-1}\) have no poles inside the closed unit disc \(\overline{\mathbb{D}}\), that is, both are plus functions. It then follows from Lemma 6.1 in [10] (with \(\Omega=\Psi\) and \(V=\Theta\)) that \(T_{\Omega}=T_{\Psi}T_{\Theta}\). It is then also clear that \(T_{\Theta}\) is bounded with bounded inverse \(T_{\Theta}^{-1}=T_{\Theta^{-1}}\). Hence, it remains to determine the block matrix representation of \(T_{\Psi}\). For this purpose, we compute \(T_{\Psi}z^{n}\). That should produce a polynomial, and the coefficients of that polynomial, augmented with zeroes, give the \(n\)-th block column of the block matrix representation of \(T_{\Psi}\). By successive applications of the formula
\[(zI-\alpha)^{-1}=z^{-1}I+z^{-1}\alpha(zI-\alpha)^{-1}\]
we see that
\[\Psi(z)z^{n} =\left(\delta+\sum_{j=0}^{n-1}z^{-j-1}\gamma\alpha^{j}\beta_{ \circ}+z^{-n}\gamma\alpha^{n}(zI-\alpha)^{-1}\beta_{\circ}\right)z^{n}\] \[=\delta z^{n}+\sum_{j=0}^{n-1}z^{n-j-1}\gamma\alpha^{j}\beta_{ \circ}+\gamma\alpha^{n}(zI-\alpha)^{-1}\beta_{\circ}.\]
Now, because \(\alpha\) has all its eigenvalues in the closed unit disc \(\overline{\mathbb{D}}\), the function \(\gamma\alpha^{n}(zI-\alpha)^{-1}\beta_{\circ}\) can be written as the sum of a function in \(K_{m\times m}^{p}\) and a function in \(\operatorname{Rat}_{0}^{m\times m}(\mathbb{T})\). Thus \(T_{\Psi}z^{n}=\delta z^{n}+\sum_{j=0}^{n-1}z^{n-j-1}\gamma\alpha^{j}\beta_{ \circ}\). This shows that the block matrix representation of \(T_{\Psi}\) is indeed given by (4.1).
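Spelled out, (4.1) says that \(T_{\Psi}\) is represented by the upper triangular block Toeplitz matrix

\[T_{\Psi}=\begin{bmatrix}\delta&\gamma\beta_{\circ}&\gamma\alpha\beta_{\circ}&\gamma\alpha^{2}\beta_{\circ}&\cdots\\ 0&\delta&\gamma\beta_{\circ}&\gamma\alpha\beta_{\circ}&\cdots\\ 0&0&\delta&\gamma\beta_{\circ}&\cdots\\ \vdots&&\ddots&\ddots&\ddots\end{bmatrix}.\]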
**Proof of Theorem 1.3.** First assume \(T_{\Omega}\) is invertible. By Theorem 1.2, it follows that \(T_{\Omega_{r}}\) is invertible for all \(1<r<r_{0}\), with \(r_{0}\) as in Theorem 1.2. It then follows from Proposition 2.1 that a solution \(Q\) to the Riccati equation (1.3) exists, and reasoning as in the proof of Theorem 1.1 it follows that for this solution \(Q\) the matrices \(A_{\circ}\) and \(\alpha_{\circ}\) in (1.4) are stable.
Conversely, if the Riccati equation (1.3) has a solution \(Q\) such that \(A_{\circ}\) and \(\alpha_{\circ}\) are stable, then Theorem 1.1 provides a pseudo-canonical factorization of \(\Omega\), and, due to the stability of \(A_{\circ}\) and \(\alpha_{\circ}\), this factorization extends to a canonical factorization of \(\Omega_{r}\) for \(r>1\) small enough. It then follows from Proposition 2.1 that \(T_{\Omega_{r}}\) is invertible, and, consequently, that \(T_{\Omega}\) is invertible, by Theorem 1.2.
From the factorization \(T_{\Omega}=T_{\Psi}T_{\Theta}\) and the boundedness and bounded invertibility of \(T_{\Theta}\), obtained in Lemma 4.1, it follows that invertibility of \(T_{\Omega}\) corresponds to invertibility of \(T_{\Psi}\) and, moreover, \(T_{\Omega}^{-1}=T_{\Theta}^{-1}T_{\Psi}^{-1}=T_{\Theta^{-1}}T_{\Psi}^{-1}\). Since \(\Theta\), \(\Theta^{-1}\) and \(\Psi^{-1}\) have no poles on \(\mathbb{T}\), the Toeplitz operators \(T_{\Theta}\), \(T_{\Theta^{-1}}\) and \(T_{\Psi^{-1}}\) are bounded and their block matrix representations are well understood. It remains to show that the block matrix representations of \(T_{\Psi}^{-1}\) and \(T_{\Psi^{-1}}\) are the same, as this would prove that these bounded operators coincide on the subspace of polynomials \(\mathcal{P}^{m}\) and equality would follow from their boundedness and the denseness of \(\mathcal{P}^{m}\) in \(H_{m}^{p}\). Indeed, once it is proved that \(T_{\Psi}^{-1}=T_{\Psi^{-1}}\), then the block matrix representation of \(T_{\Omega}^{-1}\) in (1.11) follows directly from \(T_{\Omega}^{-1}=T_{\Theta^{-1}}T_{\Psi^{-1}}\) and the block matrix representations of \(T_{\Theta^{-1}}\) and \(T_{\Psi^{-1}}\). Hence, we need to show that the matrix entries with respect to the standard (block) basis of \(H_{m}^{p}\) of \(T_{\Psi^{-1}}T_{\Psi}\) and \(T_{\Psi}T_{\Psi^{-1}}\) are \(I_{m}\) on the diagonal and \(0\) elsewhere.
The block matrix representation of \(T_{\Psi}\), in terms of the realization (1.2), is given by (4.1). The realization formula of \(\Psi^{-1}\) in item (ii) of Theorem 1.1, together with the stability of \(\alpha_{\circ}\) shows that the block matrix representation of \(T_{\Psi^{-1}}\) is an upper block triangular Toeplitz matrix that is determined by its first block row, which is given by
\[\left[\delta^{-1}\quad-\delta^{-1}\gamma\beta_{\circ}\delta^{-1}\quad-\delta^ {-1}\gamma\alpha_{\circ}\beta_{\circ}\delta^{-1}\quad-\delta^{-1}\gamma \alpha_{\circ}^{2}\beta_{\circ}\delta^{-1}\quad\cdots\right].\]
We first consider the block matrix representation of \(T_{\Psi^{-1}}T_{\Psi}\). It is required to show that the \((i,j)\)-th block entry \([T_{\Psi^{-1}}T_{\Psi}]_{ij}\) works out as
\[[T_{\Psi^{-1}}T_{\Psi}]_{ij}=0\text{ if }j<i,\quad[T_{\Psi^{-1}}T_{\Psi}]_{ij} =I_{m}\text{ if }j=i,\quad[T_{\Psi^{-1}}T_{\Psi}]_{ij}=0\text{ if }j>i.\]
The case where \(j<i\) follows directly because the matrix representations of \(T_{\Psi^{-1}}\) and \(T_{\Psi}\) are both block upper triangular, and the case \(j=i\) follows because the block diagonal elements are each other's inverses. Hence, it remains to consider the case where \(j>i\). For this purpose, notice that
\[\alpha-\alpha_{\circ}=\beta_{\circ}\delta^{-1}\gamma\quad\text{and}\quad A-A_{ \circ}=BD^{-1}C_{\circ}. \tag{4.2}\]
For \(j>i\) we have
\[[T_{\Psi^{-1}}T_{\Psi}]_{ij}=-\delta(\delta^{-1}\gamma\alpha_{ \circ}^{j-i-1}\beta_{\circ}\delta^{-1})-\sum_{k=0}^{j-i-2}\gamma\alpha^{k} \beta_{\circ}\delta^{-1}\gamma\alpha_{\circ}^{j-i-2-k}\beta_{\circ}\delta^{-1}+\] \[\qquad\qquad\qquad\qquad+\gamma\alpha^{j-i-1}\beta_{\circ}\delta ^{-1}.\]
If \(j=i+1\), then the summation in the middle term of the right-hand side is empty, and it is easy to see that the right-hand side collapses to \(0\) by a direct application of the first identity in (4.2). For \(j>i+1\), using the first identity in (4.2), we see that
\[\sum_{k=0}^{j-i-2}\gamma\alpha^{k}\beta_{\circ}\delta^{-1}\gamma \alpha_{\circ}^{j-i-2-k}\beta_{\circ}\delta^{-1}=\sum_{k=0}^{j-i-2}\gamma\alpha ^{k}(\alpha-\alpha_{\circ})\alpha_{\circ}^{j-i-2-k}\beta_{\circ}\delta^{-1}=\] \[\qquad\qquad=\sum_{k=0}^{j-i-2}\gamma\alpha^{k+1}\alpha_{\circ}^ {j-i-2-k}\beta_{\circ}\delta^{-1}-\sum_{k=0}^{j-i-2}\gamma\alpha^{k}\alpha_{ \circ}^{j-i-1-k}\beta_{\circ}\delta^{-1}\] \[\qquad\qquad=\sum_{k=1}^{j-i-1}\gamma\alpha^{k}\alpha_{\circ}^{j -i-1-k}\beta_{\circ}\delta^{-1}-\sum_{k=0}^{j-i-2}\gamma\alpha^{k}\alpha_{ \circ}^{j-i-1-k}\beta_{\circ}\delta^{-1}. \tag{4.3}\]
Inserting this formula back into the formula for \([T_{\Psi^{-1}}T_{\Psi}]_{ij}\), it follows that in the first summation in (4.3) the term \(k=0\) is added, while in the second summation the term \(k=j-i-1\) is added, so that \([T_{\Psi^{-1}}T_{\Psi}]_{ij}=0\), as claimed. A similar computation, using the second identity of (4.2), shows that the block matrix representation of \(T_{\Psi}T_{\Psi^{-1}}\) also corresponds to the block matrix representation of \(I_{H_{m}^{p}}\).
Finally, we turn to the proof of the last result in the introduction. Proposition 1.4 is stated for \(p=2\), but the lemma which we require for the proof also works for \(p\neq 2\). For \(r>1\), define the invertible linear map
\[\Upsilon:\mathcal{D}_{r}\to H_{m}^{p},\ \Upsilon:f\mapsto f_{r},\quad\text{ with inverse}\quad\Upsilon^{-1}:H_{m}^{p}\to\mathcal{D}_{r},\ \Upsilon^{-1}:f\mapsto f_{1/r}.\]
**Lemma 4.2**.: _Let \(\Omega\in\operatorname{Rat}^{m\times m}\) with \(\det\Omega\not\equiv 0\) be given by the minimal realization (1.2) with \(A\) stable and \(\alpha\) semi-stable. Define \(r_{0}\) as in Theorem 1.2, \(\mathcal{O}_{C,A}\) as in (1.12) and \(\mathcal{C}_{\alpha,\beta}\) as in (1.13), and define \(\mathcal{O}_{C_{r},A_{r}}\) and \(\mathcal{C}_{\alpha_{r},\beta_{r}}\) analogously, where \(C_{r},A_{r},\alpha_{r},\beta_{r}\) are defined as in (2.3) and \(1<r<r_{0}\). Then \(\mathcal{O}_{C,A}\), \(\mathcal{O}_{C_{r},A_{r}}\) and \(\mathcal{C}_{\alpha_{r},\beta_{r}}\) are bounded, the range of \(\mathcal{O}_{C,A}\) is contained in \(\mathcal{D}_{r}\) and \(\mathcal{D}_{r}\) is contained in \(\operatorname{Dom}(\mathcal{C}_{\alpha,\beta})\). Moreover, we have_
\[r\mathcal{C}_{\alpha_{r},\beta_{r}}=\mathcal{C}_{\alpha,\beta}\Upsilon^{-1}, \quad\mathcal{O}_{C_{r},A_{r}}=r\Upsilon\mathcal{O}_{C,A}\quad\text{and}\quad T _{\Omega_{r}}=\Upsilon T_{\Omega}\Upsilon^{-1}.\]
_Furthermore, in case \(T_{\Omega}\) is invertible, then \(T_{\Omega_{r}}\) is invertible as well and \(T_{\Omega_{r}}^{-1}=\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\)._
Proof.: Since the realization (1.2) of \(\Omega\) is minimal, it follows from the definition of \(r_{0}\) that \(A_{r}=rA\) is still stable. Since \(r>1\) and \(\alpha\) is semi-stable, \(\alpha_{r}=r^{-1}\alpha\) is stable. This implies the boundedness of \(\mathcal{O}_{C,A}\), \(\mathcal{O}_{C_{r},A_{r}}\) and \(\mathcal{C}_{\alpha_{r},\beta_{r}}\), as well as the fact that \(\mathcal{O}_{C,A}\) maps into \(\mathcal{D}_{r}\). The identity \(\mathcal{O}_{C_{r},A_{r}}=r\Upsilon\mathcal{O}_{C,A}\) is straightforward from the definitions.
To show that \(\mathcal{D}_{r}\) is in the domain of \(\mathcal{C}_{\alpha,\beta}\) and that \(r\mathcal{C}_{\alpha_{r},\beta_{r}}=\mathcal{C}_{\alpha,\beta}\Upsilon^{-1}\) holds on \(H_{m}^{p}\), let \(f(z)=\sum_{k=0}^{\infty}f_{k}z^{k}\in H_{m}^{p}\); then \(f\) is integrable on \(\mathbb{T}\) and so is the rational matrix function \((zI-\alpha_{r})^{-1}\beta_{r}=\sum_{k=0}^{\infty}\alpha_{r}^{k}\beta_{r}z^{-k-1}\). It follows from Lemma 1.5 on page 81 of [17] that
\[\mathcal{C}_{\alpha_{r},\beta_{r}}f =\sum_{k=0}^{\infty}\alpha_{r}^{k}\beta_{r}f_{k}=\sum_{k=0}^{ \infty}r^{-k-1}\alpha^{k}\beta f_{k}=r^{-1}\sum_{k=0}^{\infty}\alpha^{k}\beta r ^{-k}f_{k}\] \[=r^{-1}\mathcal{C}_{\alpha,\beta}f_{1/r}=r^{-1}\mathcal{C}_{ \alpha,\beta}\Upsilon^{-1}f.\]
The above computation shows in particular that \(\Upsilon^{-1}f\) is in \(\mathrm{Dom}(\mathcal{C}_{\alpha,\beta})\) for each \(f\in H_{m}^{p}\) so that \(\mathcal{D}_{r}\subset\mathrm{Dom}(\mathcal{C}_{\alpha,\beta})\).
The relation between \(T_{\Omega}\) and \(T_{\Omega_{r}}\) in (3.3) implies that \(\Upsilon T_{\Omega}|_{\mathcal{D}_{r}}=T_{\Omega_{r}}\Upsilon\). Since the range of \(\Upsilon^{-1}\) is equal to \(\mathcal{D}_{r}\) and \(\Upsilon\Upsilon^{-1}=I\), we have \(T_{\Omega_{r}}=\Upsilon T_{\Omega}\Upsilon^{-1}\), as claimed.
Assume \(T_{\Omega}\) is invertible. By Theorem 1.2, also \(T_{\Omega_{r}}\) is invertible. By Proposition 3.1, \(T_{\Omega}\) maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\) and \(T_{\Omega}^{-1}\) also maps \(\mathcal{D}_{r}\) into \(\mathcal{D}_{r}\). In particular, the operator \(\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\) is a well-defined linear map on \(H_{m}^{p}\), which is closed by [18, Problem III.5.7], and hence bounded by the closed graph theorem. To see that \(T_{\Omega_{r}}^{-1}=\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\), note that if \(f\in H_{m}^{p}\), then \(\Upsilon^{-1}f\) and \(T_{\Omega}\Upsilon^{-1}f\) are in \(\mathcal{D}_{r}\) so that
\[(\Upsilon T_{\Omega}^{-1}\Upsilon^{-1})(\Upsilon T_{\Omega}\Upsilon ^{-1})f =\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\Upsilon(T_{\Omega}\Upsilon ^{-1}f)\] \[=\Upsilon T_{\Omega}^{-1}T_{\Omega}\Upsilon^{-1}f=\Upsilon\Upsilon ^{-1}f=f.\]
Hence \((\Upsilon T_{\Omega}^{-1}\Upsilon^{-1})(\Upsilon T_{\Omega}\Upsilon^{-1})=I\). Similarly one obtains \((\Upsilon T_{\Omega}\Upsilon^{-1})(\Upsilon T_{\Omega}^{-1}\Upsilon^{-1})\)\(=I\). Hence, \(T_{\Omega_{r}}^{-1}=\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\).
With the identities of the previous lemma, we can now prove Proposition 1.4.
Proof of Proposition 1.4.: Set \(p=2\) and identify \(H_{m}^{2}\) and \(\ell_{m}^{2}\) in the usual way. According to [6], the matrix \(Q\) in Proposition 2.1 is given by \(Q=\mathcal{C}_{\alpha_{r},\beta_{r}}T_{\Omega_{r}}^{-1}\mathcal{O}_{C_{r},A_{r}}\). It was already noted in Proposition 2.1 that \(Q\) is independent of the value of \(1<r<r_{0}\). That also \(Q=\mathcal{C}_{\alpha,\beta}T_{\Omega}^{-1}\mathcal{O}_{C,A}\) now follows directly from the identities derived in Lemma 4.2.
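Explicitly, combining the three identities of Lemma 4.2,

\[\mathcal{C}_{\alpha_{r},\beta_{r}}T_{\Omega_{r}}^{-1}\mathcal{O}_{C_{r},A_{r}}=\left(r^{-1}\mathcal{C}_{\alpha,\beta}\Upsilon^{-1}\right)\left(\Upsilon T_{\Omega}^{-1}\Upsilon^{-1}\right)\left(r\Upsilon\mathcal{O}_{C,A}\right)=\mathcal{C}_{\alpha,\beta}T_{\Omega}^{-1}\mathcal{O}_{C,A}.\]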
### Acknowledgements
This work is based on research supported in part by the National Research Foundation of South Africa (NRF, Grant Numbers 118513, 127364 and 145688) and the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS). Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF and CoE-MaSS do not accept any liability in this regard.
_Declaration of interest:_ none. |
2309.13849 | PySimFrac: A Python Library for Synthetic Fracture Generation, Analysis,
and Simulation | In this paper, we introduce pySimFrac, an open-source Python library for generating 3-D synthetic fracture realizations, integrating with fluid simulators, and performing analysis. pySimFrac allows the user to specify one of three fracture generation techniques (Box, Gaussian, or Spectral) and perform statistical analysis including the autocorrelation, moments, and probability density functions of the fracture surfaces and aperture. This analysis and the accessibility of a Python library allow the user to create realistic fracture realizations and vary properties of interest. In addition, pySimFrac includes integration examples for two different pore-scale simulators and the discrete fracture network simulator, dfnWorks. The capabilities developed in this work provide an opportunity for quick and smooth adoption and implementation by the wider scientific community for accurate characterization of fluid transport in geologic media. We present pySimFrac along with integration examples and discuss the ability to extend pySimFrac from a single complex fracture to complex fracture networks. | Eric Guiltinan, Javier E. Santos, Prakash Purswani, Jeffrey D. Hyman | 2023-09-25T03:24:30Z | http://arxiv.org/abs/2309.13849v1 | # pySimFrac: A Python Library for Synthetic Fracture Generation, Analysis, and Simulation
###### Abstract
In this paper, we introduce pySimFrac, an open-source Python library for generating 3-D synthetic fracture realizations, integrating with fluid simulators, and performing analysis. pySimFrac allows the user to specify one of three fracture generation techniques (Box, Gaussian, or Spectral) and perform statistical analysis including the autocorrelation, moments, and probability density functions of the fracture surfaces and aperture. This analysis and the accessibility of a Python library allow the user to create realistic fracture realizations and vary properties of interest. In addition, pySimFrac includes integration examples for two different pore-scale simulators and the discrete fracture network simulator, dfnWorks. The capabilities developed in this work provide an opportunity for quick and smooth adoption and implementation by the wider scientific community for accurate characterization of fluid transport in geologic media. We present pySimFrac along with integration examples and discuss the ability to extend pySimFrac from a single complex fracture to complex fracture networks.
Footnote †: offprints: ORCID(s): 0000-0002-0763-0625 (E. Guiltinan)
## 1 Introduction
The study of complex fracture geometries has important applications in Earth and material sciences. Fractures are ubiquitous in geologic formations where they represent preferential flow pathways in otherwise low permeable materials (Viswanathan et al., 2022). These conduits often control the response of fluid migration in the subsurface, which has important implications for unconventional oil and gas exploration, geothermal energy, environmental remediation, carbon dioxide sequestration, hydrogen and natural gas storage, and nuclear waste isolation (Renshaw, 1995; Wang and Cardenas, 2014; Wang et al., 2015; Vogler et al., 2018). Experimental and numerical work in natural fractures is often challenging due to the difficulty of obtaining samples representative of subsurface conditions and the inability to model the wide range of relevant length scales, which span multiple orders of magnitude (Bonnet et al., 2001). To bridge the gap between natural fractures available for experimentation and the large variety of expected fractures, researchers often turn to synthetic fracture generation techniques (Brown, 1995).
Several synthetic fracture generation techniques have been developed (Ogilvie et al., 2006; Brown, 1995; Glover et al., 1998a,b, 1999). Brown (1995) presented a Fourier space-based mathematical model for the generation of synthetic fracture surfaces which relied upon only three parameters: the fractal dimension, the roughness, and a mismatch length scale. This model assumes that at lengths less than the mismatch length the two fracture surfaces are completely uncorrelated and at lengths greater than the mismatch length they are perfectly matched. Glover et al. (1998) presented a more realistic model which included a transition length over which the fracture surfaces transition smoothly from completely uncorrelated to a specified maximum correlation. Ogilvie et al. (2006) presented an update to the Glover et al. (1998) method, which corrected an error in the mixing of correlated random variables and also included the ability to specify a minimum correlation at short length scales. The techniques discussed here have been implemented in a graphical-user-interface-based program called "SynFrac" (Ogilvie et al., 2006). However, SynFrac has some limitations. In particular, it can only generate fractures with square dimensions (e.g. 128x128, 256x256), outputs only csv or txt files, has limited analysis tools, and cannot be called within automated scripts. This makes the development of large datasets of fracture properties (e.g., Guiltinan et al. (2021); Ting et al. (2022)) time consuming and prone to errors. Moreover, many research teams have developed one-off scripts to generate synthetic fracture surfaces, but there is not a comprehensive open source scripted toolkit available at this time.
To overcome the limitations in currently available fracture generation methods, we have developed pySimFrac. pySimFrac is a Python module for constructing 3D single fracture geometries. The software is designed to help researchers investigate fractures via direct numerical simulations of single/multi-phase flow. One advantage of the Python implementation is that it allows for greater flexibility and customization compared to a GUI-based approach. With a Python-based interface, researchers can readily develop and test new fracture generation algorithms or modify existing methods to better match experimental data. pySimFrac offers spectral-based and convolution-based fracture generation methods. Both methods can be customized to produce synthetic fractures akin to different rock types. pySimFrac also includes utilities for characterizing surface and aperture properties such as the correlation length, moments, and probability density function of the fracture surfaces and aperture field.
pySimFrac also provides seamless integration with open-source flow simulation libraries (MF-LBM(Chen et al., 2018), MP-LBM (Santos et al., 2022), and dfnWorks (Hyman et al., 2015)) elevating its utility for researchers and practitioners alike. This ease of integration streamlines the process of conducting direct numerical simulations of single/multi-phase flow through fractures, fostering a comprehensive understanding of fluid dynamics within these complex structures. By providing built-in compatibility with popular open-source simulators, pySimFrac eliminates the need for time-consuming and error-prone manual configuration, allowing users to focus on their research objectives. The library's robust and extensible design caters to a wide array of applications, accommodating users with varying requirements and expertise. Ultimately, pySimFrac's integration with flow simulation libraries further enhances its value as a tool for investigating fracture flow behavior, contributing significantly to advancements in subsurface hydrology, reservoir engineering, and environmental studies.
## 2 Software Design
pySimFrac has three primary components: (1) fracture surface generation, (2) an analysis toolkit, and (3) an interface with various flow and transport solvers via file formatting. pySimFrac is meant to be an interactive, object-oriented module with a primary class for a single fracture. The generation method, along with functions to obtain statistical information, visualization/plotting, and input/output, are all attached to the object.
### Fracture Surface Generation Methods
pySimFrac has multiple generation methods to produce rough fracture surfaces. The methods can be broken into two primary categories. The first set of techniques are spectral methods that produce self-affine / fractal surfaces. The second set are convolution-based. In addition to generating synthetic fractures, pySimFrac can be used to read in profilometry of fracture surfaces obtained in the laboratory.
Each pySimFrac object instance is defined by its length in x (\(l_{x}\)) and y (\(l_{y}\)) and a discretization length \(h\). Therefore, a uniform sized grid is produced with \(nx=\lceil{lx/h}\rceil\) and \(ny=\lceil{ly/h}\rceil\) for the top and bottom fracture surfaces and the projected aperture field. The latter is the difference in the heights between the two surfaces at each discrete location in the grid. pySimFrac allows for specifying a mean aperture field which is controlled during the voxelization process.
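For instance, with illustrative values (the variable names here are ours, not part of the pySimFrac API), the grid dimensions follow directly from the domain size and discretization length:

```
import math

lx, ly, h = 3.0, 1.0, 0.01   # a 3 x 1 fracture discretized at h = 0.01
nx = math.ceil(lx / h)       # 300 grid cells along x
ny = math.ceil(ly / h)       # 100 grid cells along y
```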
Figure 1: pySimFrac contains three fracture generation methods (Box, Gaussian, and Spectral) as well as analysis and integration with flow simulators.
Along with these domain parameters, the user must specify a generation method; the available methods are detailed below.
#### Spectral Method
Rough fracture surfaces have been represented by fractal / self-affine models in numerous studies (da Silva et al., 2019; Kang et al., 2016; Stigsson and Mas Ivars, 2019; Wang et al., 2016). At a first-order approximation, the Fourier decomposition of a rough surface indicates that many non-uniform surfaces exhibit a power-law decay in the power spectral density function with a functional form of
\[G(k)=Ck^{-\alpha} \tag{1}\]
where \(k=2\pi/\lambda\) is the wave number / Fourier mode, \(\lambda\) is the wavelength, \(C\) is a proportionality constant, and \(\alpha\) is the decay exponent. Based on these observations, a number of spectral / Fourier based rough surface generation methods have been proposed, the most common being Brown (1995), Glover et al. (1998b), and Ogilvie et al. (2006). A spectral method coded in MATLAB and based upon these techniques is available ([https://github.com/rvillamor/digital_generation_of_fractures/blob/main/RSG_brown1995.m](https://github.com/rvillamor/digital_generation_of_fractures/blob/main/RSG_brown1995.m)) and these techniques can also be found in the SynFrac program ([https://homepages.see.leeds.ac.uk/earpwjg/PG_EN/Images/Software/Manual%20for%20web/Create.htm](https://homepages.see.leeds.ac.uk/earpwjg/PG_EN/Images/Software/Manual%20for%20web/Create.htm)).
While there are differences and chronological improvements between these methods, the core portion of the algorithms is fairly consistent. The methods all modify the amplitudes and phases of the Fourier components of the surfaces. The amplitudes are scaled according to (1) and the phases are controlled using streams of random numbers. Special care is taken to define the random numbers which define the phase, cf. Ogilvie et al. (2006) for a detailed discussion. The desired fractal dimension and autocorrelation of the surface are often defined in terms of the Hurst exponent, which is in a particular sense related to \(\alpha\) in (1). These features, along with anisotropy, are included in the method via the amplitudes of the decomposition. The spectral method implemented in pySimFrac has the following parameters: (1) Hurst exponent; range (0,1), (2) roughness / standard deviation of heights; range \(\geq\) 0, (3) anisotropy ratio; range (0,1), (4) \(\lambda_{0}\) roll-off length scale as a fraction of fracture size; range [0,1], (5) mismatch length scale (wavelength) as a fraction of fracture size; range [0,1], and (6) power spectral density model roll-off function (linear / bilinear / smooth).
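To make the recipe concrete, the following is a minimal sketch of an isotropic spectral surface generator, not pySimFrac's implementation: the Fourier amplitudes follow the power-law decay of (1) and the phases are randomized. The relation \(\alpha=1+H\) between the decay exponent and the Hurst exponent, and the omission of the roll-off, mismatch, and anisotropy handling, are simplifying assumptions here:

```
import numpy as np

def spectral_surface(n, hurst=0.7, seed=0):
    """Minimal isotropic self-affine surface (a sketch, not pySimFrac's code)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                    # suppress the zero (mean-height) mode
    amplitude = k ** (-(1.0 + hurst))   # power-law amplitude decay, cf. Eq. (1)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    surface = np.fft.ifft2(amplitude * phase).real
    return surface / surface.std()      # rescale to unit roughness

top = spectral_surface(256)
```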
An example of a surface generated using the spectral method is provided in Fig. 2 and the code used to generate it is shown in Listing 1. The top and bottom surfaces are shown in the left sub-figure and the projected aperture field is in the right sub-figure. The sample was generated with a mean aperture of 0.5 mm, a roughness of 0.5, an anisotropy of 0.5, a mismatch of 0.1, and the smooth power spectral density model.
Spectral = SimFrac(method="spectral", h=0.01, lx=3, ly=1)
# the remaining surface parameters (roughness, anisotropy, mismatch)
# are set via Spectral.params, following the pattern of Listing 2
#### Convolution Methods
The convolution methods are based on creating a stationary random topography by convolving an uncorrelated random field (\(u(\mathbf{x})\sim U[0,1]\)) with a specified kernel (\(k(\mathbf{x})\))
\[T(\mathbf{x})=\int d\mathbf{y}\ k(\mathbf{x}-\mathbf{y})\,u(\mathbf{y})\enspace. \tag{2}\]
The structure of \(T(\mathbf{x})\) (moments, correlation, and anisotropy) is determined by the central limit theorem and the inherited properties of the kernel. pySimFrac has several built-in kernels, the primary one being a multivariate
Figure 2: Fracture surface generated using the spectral method
2-dimensional Gaussian function of the form
\[k(\mathbf{x})=\frac{1}{2\pi\sqrt{Det(\Lambda)}}\exp\left[-\mathbf{x}^{\prime} \Lambda\mathbf{x}/2\right]\,, \tag{3}\]
where \(\Lambda\) is a symmetric matrix of length scales whose elements \(\lambda_{i}\) determine the spread of \(k(\mathbf{x})\) in various directions. Equation (2) produces a single surface topography with mean 0 and variance determined by the support of \(k(\mathbf{x})\), a direct result of the central limit theorem Hyman and Winter (2014). Thus, to produce a fracture with desired mean aperture and variance, a copy of \(T(\mathbf{x})\) is created as the bottom surface, and then both surfaces are translated and rescaled to obtain the desired values. Isotropic topographies can be created by defining \(\Lambda\) as a diagonal matrix and assigning the same length scale \(\lambda\) to every direction. Anisotropic ones are obtained by making the values unequal; e.g., a larger value of \(\lambda_{x}\) than \(\lambda_{y}\) will create longer correlations in the \(x\)-direction than in the \(y\)-direction. Hyman and Winter (2014) introduced this method for the generation of explicit three-dimensional pore structures, which has found use in various applications and studies (Guedon et al., 2017; Hyman et al., 2012; Hyman and Winter, 2013; Hyman et al., 2013, 2015a; Siena et al., 2015, 2014). The co-variance function as well as other properties of \(T(\mathbf{x})\) generated with the Gaussian kernel are given explicitly in Hyman and Winter (2014). It is worth noting that the surfaces generated using the Gaussian kernel are infinitely smooth in the mathematical sense, because the smoothness (infinite differentiability) is transferred to the surface via the convolution. In addition to the Gaussian kernel, there is a uniform or box function kernel available in pySimFrac, and the inclusion of additional kernels is straightforward and an area of active development.
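A minimal numerical sketch of (2)-(3), again not the library's implementation, using SciPy's periodic Gaussian filter as the kernel and the copy-translate-rescale construction described above:

```
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
u = rng.uniform(size=(300, 100))           # uncorrelated field u(x) ~ U[0, 1]

# anisotropic Gaussian kernel; mode="wrap" keeps the surface periodic
T = gaussian_filter(u, sigma=(15, 5), mode="wrap")

# translate and rescale: the bottom surface is a copy of the top shifted
# down by the mean aperture, so the aperture is uniform until shear is applied
roughness, mean_aperture = 0.05, 0.5
top = (T - T.mean()) / T.std() * roughness
bottom = top - mean_aperture
aperture = top - bottom                    # constant, equal to mean_aperture
```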
An example of a surface generated using the convolution method and the Gaussian kernel is shown in Fig. 3, and the code is shown in Listing 2. The top and bottom surfaces are shown in the left sub-figure and the projected aperture field is in the right sub-figure. The sample was generated with a mean aperture of 0.5 mm, a log variance of 0.01, an anisotropic kernel (\(\lambda_{1,1}=0.15\), \(\lambda_{2,2}=0.25\)), and a shear of 0.5 mm, which translates the top surface 0.5 mm along the x-axis to mimic shear (additional details are provided in the next sections).
```
Gaussian = SimFrac(
    method="gaussian",
    h=0.01,
    lx=3,
    ly=1)
Gaussian.params["mean-aperture"]["value"] = 0.5
Gaussian.params["aperture-log-variance"]["value"] = 0.01
Gaussian.params["lambda_x"]["value"] = 0.15
Gaussian.params["lambda_y"]["value"] = 0.25

Gaussian.shear = 0.5
```
### Additional Generation Functions
In addition to the base generation methods detailed above, there are a number of functions in pySimFrac to further manipulate the surfaces. Foremost, one can rescale the mean and variance of the surfaces, jointly or individually, and the mean projected aperture field to any desired values. Next, one can apply horizontal shear to the fracture by shifting the top fracture surface along the x-axis by the desired distance. A key property of the pySimFrac fractures is that they are periodic in both x and y, and the shear effectively translates the surface around a torus. Thus, the shear translation does not introduce discontinuities in the surfaces nor shorten the domain size, which could be the case if the surface were not periodic. Maintaining periodicity in x and y is often an important requirement of numerical simulation, particularly when simulating steady state fluid distributions for relative permeability calculations. Finally, pySimFrac surfaces can be combined using weighted linear superposition to create new surfaces. An example of this is shown in Fig. 4 and Listing 3. Here, we combined the surfaces shown in Fig. 2 and Fig. 3 with 1/4 and 3/4 weights, respectively. The resulting fracture surface inherits the long correlations from the Gaussian kernel convolution surface as well as the local roughness of the spectral method. Any number of fracture objects can be combined.
```
## Create a new fracture object that is the weighted linear
## superposition of two existing surfaces (see previous listings)
Combined = Spectral.combine_fractures([Gaussian], weights=[0.25, 0.75])
## Plot fracture surfaces
Combined.plot_surfaces()
## Plot fracture aperture
Combined.plot_aperture()  # assumed name of the aperture-plotting helper
```
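The periodic shear described above amounts to a circular translation of the top surface around the torus; for a gridded surface this can be sketched as follows (assuming the \(x\)-axis is the first array axis):

```
import numpy as np

def apply_shear(top, shear, h):
    """Translate the top surface along x periodically, so no discontinuity
    is introduced and the domain is not shortened."""
    return np.roll(top, int(round(shear / h)), axis=0)

top = np.random.default_rng(0).normal(size=(300, 100))  # stand-in surface
sheared = apply_shear(top, shear=0.5, h=0.01)           # the 0.5 mm shear of Listing 2
```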
Finally, pySimFrac also allows users to import surfaces obtained from real fracture scans using profilometry, so long as they are mapped onto a regular grid with equal spacing in both directions. Preprocessing of the raw profilometry is not supported as part of the pySimFrac module.
### Analysis tools
In addition to the generation of fracture surfaces, pySimFrac provides a suite of geostatistical analysis tools. Functions are included to compute the first four moments of the surface height and aperture distributions.
```
## Compute first four moments of the surface height and aperture distributions
Spectral.compute_moments()
## Plot height and aperture PDFs
Spectral.plot_surface_pdf()
```
Listing 4: Computation of geostatistical analysis
In addition to the moments of the height and aperture distributions, functions are included to compute and plot the auto-correlation function of the surface in the principal Cartesian directions (x and y) and the radial semi-variogram. The semi-variogram is computed by calling the Python module SciKit-GStat (Malicke, 2022). Figure 6 and Listing 5 provide an example for the surface shown in Fig. 2.
```
Spectral.compute_variogram(max_lag=100, num_lags=200)
Spectral.plot_variogram()
```
Listing 5: Computation and plotting of the semi-variogram
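Under the hood, the variogram computation wraps SciKit-GStat; a direct, stand-alone use of that module on a (sub-sampled) surface might look like the following sketch:

```
import numpy as np
import skgstat as skg
from scipy.ndimage import gaussian_filter

# a small correlated test surface standing in for a pySimFrac surface
rng = np.random.default_rng(0)
surface = gaussian_filter(rng.normal(size=(100, 100)), sigma=5, mode="wrap")

ii, jj = np.meshgrid(np.arange(0, 100, 2), np.arange(0, 100, 2), indexing="ij")
coords = np.column_stack([ii.ravel(), jj.ravel()])
values = surface[ii, jj].ravel()

V = skg.Variogram(coords, values, model="spherical", n_lags=20)
V.plot()
```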
Figure 4: Combined fracture surface of Fig. 2 and Fig. 3.
### Effective Properties
Estimations of the effective properties from the structure of the surfaces are also provided by pySimFrac. These estimations are categorized into two types. The first are analytical and empirically derived estimations of the effective hydraulic aperture, and the second are numerical approximations. The first type includes standard approximations such as various means, e.g., arithmetic, harmonic, and geometric, as well as several models proposed in the literature, cf. He et al. (2021) for a comprehensive list. Most of the models proposed in the literature use moments of the aperture distribution, which can be directly computed using the analysis toolkit. In principle, any effective hydraulic model with geo-statistical parameters can be added to pySimFrac. The second type of approximations is obtained by numerical inversion of the Darcy equation with a spatially variable permeability field \(k(\mathbf{x})\) inferred from the aperture field \(b(\mathbf{x})\) using a local cubic law, namely \(k(\mathbf{x})=b^{2}(\mathbf{x})/12\). Note that other
Figure 5: Probability density functions of the surface shown in Fig. 2
Figure 6: Semi-variogram with spherical model for the surface shown in Fig. 2
functional relationships between \(k(\mathbf{x})\) and \(b(\mathbf{x})\) can be readily applied as well. We obtain pressure and volumetric flow rates by solving the standard flow equations with variable coefficients discretized using a second-order finite scheme. Flow is driven by Dirichlet pressure boundary conditions in one primary direction to obtain estimates of the effective permeability in that direction, which is then converted to an effective hydraulic aperture.
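For the first type of estimate, the simple mean-based hydraulic apertures and the local-cubic-law permeability field can be sketched directly from the aperture field (the three models shown are illustrative; pySimFrac implements many more):

```
import numpy as np
from scipy.stats import gmean, hmean

def hydraulic_aperture_estimates(b):
    """Closed-form effective-aperture estimates from the aperture field b(x)."""
    b = b.ravel()
    return {"arithmetic": b.mean(), "geometric": gmean(b), "harmonic": hmean(b)}

def local_cubic_law_permeability(b):
    """Local cubic law: k(x) = b(x)**2 / 12, the field fed to the Darcy solver."""
    return b ** 2 / 12.0

b = np.random.default_rng(0).uniform(0.4, 0.6, size=(300, 100))  # stand-in aperture
print(hydraulic_aperture_estimates(b))
```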
## 3 Integration with Flow and Transport Simulators
In addition to the generation methods and analysis toolkits, we have developed seamless handoffs with several open-source flow and transport simulators, ranging from multi-phase lattice Boltzmann methods to three-dimensional discrete fracture network simulators.
### Integration with MultiPhase LBM (MP-LBM)
To demonstrate the integration with flow simulators, a fracture was created using pySimFrac (Figure 7). The fracture is 128 x 512 voxels and was created using the spectral method with the following parameters: _model_ 'smooth', _roughness_ 4.0, \(H\) 0.7, _aniso_ 0.0, _mismatch_ 0.25, and _mean aperture_ 15. MP-LBM (Santos et al. (2022); [https://github.com/je-santos/MPLBM-UT](https://github.com/je-santos/MPLBM-UT)) is a specialized lattice-Boltzmann library that significantly simplifies the process of running pore-scale lattice Boltzmann simulations through complex porous media. MP-LBM uses the high-performance, highly parallel library Palabos ([https://gitlab.com/unigespc/palabos](https://gitlab.com/unigespc/palabos)) as the solver backend, making it easily deployable on a variety of systems, from laptops to supercomputer clusters. The pySimFrac module aims to facilitate seamless integration between complex fracture geometry generation and single-phase flow simulation to enable the study of how realistic fracture heterogeneities affect the permeability of a fracture domain. After creating a pySimFrac object and installing MP-LBM, the simulation can be run by simply calling the write_MPLBM function and supplying the number of buffer layers for voxelization, the number of CPUs to be utilized, and the number of computational hours requested (Listing 6).
```
from wrappers import write_MPLBM, postprocess

# send simulation
lbm = write_MPLBM(simfrac_object=Spectral,
                  buffer_layers=2,
                  cpus=4,
                  num_hrs=1)

# plot and obtain permeability
postprocess(lbm)
```
Listing 6: An example script demonstrating the integration with the MP-LBM code.
An example of the resulting flow field using the integration with MP-LBM is shown in Figure 8. The completion of a full simulation utilizing this feature required roughly 15 seconds of computing time on a standard laptop. We anticipate that the incorporation of this capability will streamline the research process, making various aspects of investigation more straightforward and productive for both researchers and practitioners. Understanding the relationship between geometry and permeability may offer innovative perspectives, potentially leading to the refinement of correlations that can be applied at the field scale. Furthermore, it could shed light on the effects of phenomena such as compaction, cementation, and dissolution on this critical parameter.
### Integration with MF-LBM
MF-LBM (Chen et al., 2018, 2019) is an open source ([https://github.com/lanl/MF-LBM](https://github.com/lanl/MF-LBM)) high-performance lattice Boltzmann code developed at Los Alamos National Laboratory. It combines the continuum-surface-force based color-gradient multiphase model (Gunstensen et al., 1991; Liu et al., 2012) with the geometrical wetting model (Leclaire et al., 2016, 2017; Akai et al., 2018). The code is extensively parallelized and optimized for CPUs and GPUs and is ideal for running large (billions of nodes) multiphase simulations. We have integrated MF-LBM within pySimFrac, allowing users not only to specify fracture properties as part of the core pySimFrac but also to specify multiphase
Figure 7: Characterizing the pore space of a typical fracture generated using pySimFrac. The top view of the first slice of the fracture is shown on the left, while the porosity along the length of the fracture is shown on the right. The scale is adapted from the fracture mean aperture (0.584 mm) in Karpyn et al. (2007)
flow parameters such as viscosity ratios, capillary number, contact angle, and interfacial tension. This allows users to seamlessly conduct simulations on a variety of fracture properties with varied simulation parameters.
The same fracture properties as in the MP-LBM integration (Section 3.1) were used to generate a fracture for simulation with MF-LBM (Figure 9). Initially, the fracture was occupied entirely (100%) by the blue phase, which was also the wetting phase. The contact angle was set to 50\({}^{\circ}\). The fluid viscosity ratio was set to 1.0, while the capillary number was set at \(10^{-4}\). The red phase (non-wetting phase) was introduced from the bottom. In the snapshots shown in Figure 9, we see an increase in the occupancy of the red phase as injection proceeds. We also estimated the corresponding saturation of both phases for each time step. At the last time step, we observe that the majority of the fracture is occupied by the red phase, with a saturation of 87.6%.
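The saturation profiles of Figure 9 can be computed from any voxelized phase map; the sketch below assumes an integer labeling (0 = solid, 1 = blue/wetting, 2 = red/non-wetting) rather than MF-LBM's native output format:

```
import numpy as np

def saturation_profile(phase, flow_axis=0):
    """Red-phase saturation resolved along the flow axis."""
    other = tuple(i for i in range(phase.ndim) if i != flow_axis)
    red = (phase == 2).sum(axis=other)
    pore = (phase > 0).sum(axis=other)
    return red / pore

phase = np.random.default_rng(0).integers(0, 3, size=(512, 128, 30))  # stand-in
profile = saturation_profile(phase)   # one red-phase saturation per slice
```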
Figure 8: 3D velocity field magnitude from a single-phase LBM Simulation performed using the MP-LBM extension. The depicted computational domain of the fracture measures 512\(\times\)128 in the XY-plane, with a mean aperture of 12 voxels, corresponding to dimensions of 1.53e-6, 0.384e-6, and 0.045e-6 meters, respectively.
### Integration with dfnWorks: Three-Dimensional Discrete Fracture Network
dfnWorks is an open source three-dimensional discrete fracture network (DFN) modeling suite (Hyman et al., 2015). In a 3D DFN model, fractures are represented as a network of intersecting planes. The size, shape, orientation, and other hydrological properties are sampled from distributions whose parameters are determined from a site characterization, cf. Viswanathan et al. (2022) for a comprehensive discussion of DFN modeling approaches. Once the network is produced, a computational mesh representation is generated, on which flow and transport can be resolved (Hyman et al., 2014; Krotz et al., 2022). Details of dfnWorks in terms of algorithms and various applications can be found in Hyman et al. (2015).
A key capability of dfnWorks is the ability to include variable aperture values in a 3D DFN simulation, e.g., Karra et al. (2015); Frampton et al. (2019); Makedonska et al. (2016); Hyman et al. (2021). We developed a handshake between pySimFrac and dfnWorks to map pySimFrac-generated aperture fields directly onto dfnWorks fractures. An example of this is shown in Fig. 10. The network is composed of thirty-eight two-meter-square fractures in a 10 meter cube. Each fracture has a unique aperture field generated using the pySimFrac spectral method. Each node in the mesh is assigned an aperture value from a pySimFrac fracture (see inset).
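The handshake itself is internal to pySimFrac and dfnWorks, but its core step, assigning each mesh node an aperture value from a gridded field, can be sketched generically with a nearest-neighbour lookup (all names below are illustrative):

```
import numpy as np
from scipy.spatial import cKDTree

def map_aperture_to_mesh(grid_xy, grid_aperture, mesh_xy):
    """Assign each mesh node the aperture of the nearest grid cell.

    grid_xy: (N, 2) in-plane coordinates of the aperture grid cells,
    ordered consistently with grid_aperture.ravel();
    mesh_xy: (M, 2) in-plane coordinates of the fracture mesh nodes.
    """
    _, idx = cKDTree(grid_xy).query(mesh_xy)
    return grid_aperture.ravel()[idx]
```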
## 4 Conclusions
The study of complex fractures has important applications in many research areas within the geosciences. Here, we present a new synthetic fracture generation library and analysis toolkit which allows for the investigation of a wide range of fracture properties. Implemented in Python and available open source, pySimFrac makes it significantly
Figure 9: Implementation of pySimFrac to generate multiphase flow data. Upper row shows snapshots of increasing saturation of the red phase inside the fracture. Lower row shows corresponding saturation profiles of the red and blue phases along the length of the fracture. The scale is adapted from the fracture mean aperture (0.584 mm) in Karpyn et al. (2007)
easier to create and analyze realistic fractures for a wide range of research applications. In addition, the integration with open-source simulation codes such as dfnWorks, MP-LBM, and MF-LBM makes fracture network generation and direct numerical simulation fast and approachable. The ability to easily create and simulate fractures that span the variability expected in nature should yield important findings in a range of disciplines.
## 5 Acknowledgments
Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number XXXF00 and has been designated with the Los Alamos Unlimited Release number LA-UR-23-26998. J.S. would like to thank the Center for Nonlinear Studies for support. J.D.H. acknowledges support from the Department of Energy (DOE) Basic Energy Sciences program (LANLE3W1). This work has been partially funded by the Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy Office of Nuclear Energy. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Figure 10: A three-dimensional discrete fracture network generated with dfnWorks that includes internal aperture variability generated using the pySimFrac spectral method
The source code is available for download at: [https://github.com/lanl/dfnWorks](https://github.com/lanl/dfnWorks)
|
2310.20338 | de Haas-van Alphen spectroscopy and fractional quantization of
magnetic-breakdown orbits in moiré graphene | Quantum oscillations originating from the quantization of the electron
cyclotron orbits provide ultrasensitive diagnostics of electron bands and
interactions in novel materials. We report on the first direct-space nanoscale
imaging of the thermodynamic magnetization oscillations due to the de Haas-van
Alphen effect in moir\'e graphene. Scanning by SQUID-on-tip in Bernal bilayer
graphene crystal-axis-aligned to hBN reveals abnormally large magnetization
oscillations with amplitudes reaching 500 {\mu}_B/electron in weak magnetic
fields, unexpectedly low frequencies, and high sensitivity to the superlattice
filling fraction. The oscillations allow us to reconstruct the complex band
structure in exquisite detail, revealing narrow moir\'e bands with multiple
overlapping Fermi surfaces separated by unusually small momentum gaps. We
identify distinct sets of oscillations that violate the textbook Onsager Fermi
surface sum rule, signaling formation of exotic broad-band particle-hole
superposition states induced by coherent magnetic breakdown. | Matan Bocarsly, Matan Uzan, Indranil Roy, Sameer Grover, Jiewen Xiao, Zhiyu Dong, Mikhail Labendik, Aviram Uri, Martin E. Huber, Yuri Myasoedov, Kenji Watanabe, Takashi Taniguchi, Binghai Yan, Leonid S. Levitov, Eli Zeldov | 2023-10-31T10:24:30Z | http://arxiv.org/abs/2310.20338v1 | de Haas-van Alphen spectroscopy and fractional quantization of magnetic-breakdown orbits in moire graphene
###### Abstract
Quantum oscillations originating from the quantization of the electron cyclotron orbits provide ultrasensitive diagnostics of electron bands and interactions in novel materials. We report on the first direct-space nanoscale imaging of the thermodynamic magnetization oscillations due to the de Haas-van Alphen effect in moire graphene. Scanning by SQUID-on-tip in Bernal bilayer graphene crystal-axis-aligned to hBN reveals abnormally large magnetization oscillations with amplitudes reaching 500 \(\mu_{\mathrm{B}}\) /electron in weak magnetic fields, unexpectedly low frequencies, and high sensitivity to the superlattice filling fraction. The oscillations allow us to reconstruct the complex band structure in exquisite detail, revealing narrow moire bands with multiple overlapping Fermi surfaces separated by unusually small momentum gaps. We identify distinct sets of oscillations that violate the textbook Onsager Fermi surface sum rule, signaling formation of exotic broad-band particle-hole superposition states induced by coherent magnetic breakdown.
Oscillations in the thermodynamic and transport properties of metals subject to an external magnetic field are a fundamental quantum effect originating from the quantization of the cyclotron orbit areas. In 2D systems, the periodicity of quantum oscillations (QOs), explained by discrete Landau levels (LLs), is related in a universal way to the applied field strength and the Fermi surface (FS) geometry. The oscillations carry a wealth of information about the FS and are indispensable for resolving the band structure of moire materials, where the presence of a superlattice potential and enhanced electron-electron interactions lead to formation of narrow minibands with multiple FSs and symmetry broken states [1, 2, 3, 4, 5, 6, 7]. QOs can also reveal the band topology [8, 9] and strain-induced pseudomagnetic fields in graphene [10, 11, 12].
In bulk materials, QOs can be detected by measuring magnetization oscillations due to the de Haas-van Alphen (dHvA) effect. These thermodynamic oscillations, however, are usually experimentally inaccessible in 2D electron systems since the signal scales with the sample volume and is therefore extremely weak in 2D. This limits the studies of 2D electron systems mostly to the non-thermodynamic Shubnikov-de Haas (SdH) oscillations in transport coefficients. Nevertheless, several studies have succeeded in resolving magnetization oscillations in 2D electron gas (2DEG) in GaAs heterostructures using mm size samples [13, 14, 15, 16], as well as in magnetically doped ZnSe [17]. In contrast, exfoliated clean van der Waals structures are of typical sizes limited to tens of \(\upmu\)m, which makes observation of the dHvA effect in atomically thin systems extremely challenging. Furthermore, all previous dHvA studies in 2DEG and in bulk materials have been global, providing no spatial information on the local band structure and thermodynamic electronic properties.
Here we report on the first spatial mapping of the dHvA effect in a van der Waals structure with resolution as high as 170 nm. We observe very large magnetization oscillations in moire flat bands in Bernal-stacked bilayer graphene (BLG) aligned to hBN [2, 18, 19]. The oscillations appear at low magnetic fields and at carrier densities of a few electrons per superlattice unit cell. In the integer quantum Hall effect (QHE), the periodicity of the QOs is tied in a universal manner to the carrier density through the LL degeneracy. To the contrary, here the observed oscillations display characteristic frequencies that are an order of magnitude lower and form complex spectra, revealing the coexistence of multiple FSs and allowing accurate moire band structure reconstruction.
When a number of FSs coexist, the QOs display several frequencies, reflecting the relative size of the different Fermi pockets encircled by the cyclotron orbits [17, 20, 21, 22, 23, 24]. In addition to these fundamental orbits, exotic electron orbits delocalized in \(k\)-space and supporting coherently entangled states in different bands can arise due to interpocket tunneling. Such tunneling, which is hindered by momentum conservation at zero magnetic field, is made possible at elevated fields through the coherent magnetic breakdown (CMB) mechanism [25, 26, 27, 28, 29, 30, 31, 32]. CMB has been predicted to occur in moire graphene [23, 33], yet so far it has evaded detection. We provide the first observation of CMB in atomic van der Waals structures evidenced by uniquely rich sets of QOs. Conventional CMB is expected to occur at high fields in the vicinity of saddle points or Lifshitz transitions, across which the topology of the FS changes [23, 33, 34]. To the contrary, we find a broad-band breakdown at very low applied fields, extending over several moire bands and spanning a wide range of energies. The observed QOs indicate the occurrence of particle-hole superposition states shared by closely-proximitized bands and exhibiting a high degree of interband phase coherence. The hallmark of such states is unusual CMB oscillation frequencies violating Onsager's FS area sum rule. Instead, the QO frequencies are described by fractional Onsager quantization relations.
### Transport measurements
Transport measurements of \(\rho_{xx}\) and \(\rho_{yx}\) (Figs. 1A,B) of the BLG sample (Fig. 2A) were performed at a temperature \(T=300\) mK as a function of applied out-of-plane magnetic field \(B_{a}\) and carrier density \(n\). Peaks in \(\rho_{xx}\) (Fig. 1C) at \(n=4n_{0}\cong\pm 3.48\times 10^{12}\) cm\({}^{-2}\) indicate that the BLG is aligned to the hBN substrate with a
twist of \(\theta\cong 0.70^{\circ}\), forming a moiré superlattice with unit cell size \(\lambda\cong 11.5\) nm [35], where \(n_{0}\) corresponds to one electron per moiré unit cell. The weak \(\rho_{xx}\) peaks at filling factor \(\nu=n/n_{0}=\pm 4\) reflect minima in the density of states (DOS) and the absence of a full gap between the flat and remote bands.
At low fields, \(\rho_{yx}\) shows several sign reversals in Fig. 1B. The Hall carrier density \(n_{H}\) derived at \(B_{a}=300\) mT (Fig. 1D, green line) reveals a van Hove singularity at \(\nu\cong 3.5\), accompanied by a change in carrier type from electrons to holes. A similar behavior is found at \(\nu\cong-3.5\). Several additional \(n_{H}\) sign reversals are observed at higher \(|\nu|\), consistent with the presence of several remote bands. Notably, at higher fields, the sign reversals
Figure 1: **Transport measurements in BLG aligned to hBN.** (**A**) \(\rho_{xx}\) vs. carrier density \(n\) and magnetic field \(B_{a}\) at \(T=300\) mK. The oscillations at high \(B_{a}\) display four-fold degenerate LLs and Hofstadter’s butterfly (Fig. S1). Resistivity values above 500 \(\Omega\) are saturated for clarity. (**B**) \(\rho_{yx}\) vs. \(\nu\) and \(B_{a}\). Resistivity values above 500 \(\Omega\) are saturated. (**C**) \(\rho_{xx}\) vs. \(\nu\) at \(B_{a}=0\) T. (**D**) Hall carrier density \(n_{H}=B_{a}/(e\rho_{yx})\) vs. \(n\) derived from \(\rho_{yx}\) at \(B_{a}=0.3\) T (green) and \(B_{a}=4.2\) T (blue). The dashed red line shows a slope of 1 corresponding to \(n_{H}=n\).
disappear, as shown by the blue line in Fig. 1D (taken at \(B_{a}=4.2\) T), and \(n_{H}\) shows a continuous evolution with doping following \(n_{H}=n\) (dashed red line), a behavior characteristic of a single band.
At elevated \(B_{a}\) a Landau fan originating from the charge neutrality point (CNP) is visible at all fillings (Fig. 1A), corresponding to four-fold degenerate LLs due to spin and valley degeneracies in graphene (Fig. S1B). Additionally, Hofstadter patterns are visible as horizontal lines periodic with \(\phi_{0}/B_{a}\), arising from the interference of the moire unit cell with the area occupied by a flux quantum \(\phi_{0}=h/e\) (where \(h\) is the Planck constant and \(e\) is the elementary charge), as reported previously [1, 2, 3, 18, 36, 37]. More complicated Landau fans that originate from the vicinity of \(\nu=\pm 4\) are discerned at intermediate fields. In the following, we describe local studies at fields below 350 mT, where QOs in transport measurements are hardly resolved.
### Imaging quantum oscillations
To study the magnetization oscillations we utilize a scanning superconducting quantum interference device fabricated on the apex of a sharp pipette (SQUID-on-tip, SOT) [38]. An indium SOT [39] of about 150 nm diameter is scanned at a height of \(h\approx 200\) nm above the sample surface (Fig. 2A) at \(T=300\) mK [35]. The \(dc\) voltages \(V_{tg}^{dc}\) and \(V_{bg}^{dc}\) applied to the top and bottom Pt gates are used to control \(n\). A small \(ac\) voltage \(V_{bg}^{ac}\) of 5 to 20 mV rms is applied to the backgate, modulating the carrier density by \(n^{ac}\) and the corresponding \(\nu^{ac}\) by 0.004 to 0.016, and the resulting \(B_{z}^{ac}(x,y)=n^{ac}(dB_{z}/dn)\) is imaged across the sample. This signal reflects the induced \(ac\) modulation in the local magnetization, \(m_{z}(x,y)=dM_{z}(x,y)/dn\), which can be reconstructed directly from the measured \(B_{z}^{ac}(x,y)\) by a numerical inversion [40] (\(M_{z}\) is the magnetization per unit area and \(m_{z}\) is the magnetization per excess electron; both are dominated by orbital effects [35]). Figure 2B shows an example of the resulting map of the local differential magnetization at \(\nu=-7.65\) and \(B_{a}=334\) mT, displaying extremely large values of \(m_{z}(x,y)\) reaching \(\pm 500\)\(\mu_{\rm B}\)/electron and forming patches of positive and negative \(m_{z}\) with a characteristic size of about 1 \(\mu\)m. Upon varying \(\nu\), the patches move across the sample and \(m_{z}(x,y)\) reveals remarkable quasi-periodic oscillations as shown in Fig. 2C and in Movie S1. The period of the QOs and the \(m_{z}\) amplitude vary significantly with position as seen in Figs. 2D,E. We observe these oscillations starting from applied fields as low as 116 mT and up to our highest \(B_{a}=334\) mT at which our SOT has sufficient sensitivity (Fig. S2). Similar behavior is found at various values of displacement field \(D\) (Fig. S3) and in two additional samples (Figs. S3, S4). This is the first spatially resolved measurement of the dHvA effect in graphene devices.
To investigate the origin of the thermodynamic QOs, we measure the evolution of \(B_{z}^{ac}(x)\) with carrier density over an extended range of \(\nu\) (Fig. 3A) by repeated scanning along the black dotted line in Fig. 2A, while incrementing \(\nu\) in 0.006 steps at \(B_{a}=300\) mT. For \(|\nu|\lesssim 3.5\) weak periodic oscillations in \(B_{z}^{ac}(x)\) are discerned as shown in Fig. 3C. For \(|\nu|\gtrsim 3.5\) the behavior is markedly different, characterized by the appearance of strong oscillations with significantly larger and variable periods \(\Delta n\), as visualized in Figs. 3B,D. To quantify the oscillations' periodicity, we perform a fast Fourier transform (FFT) of \(B_{z}^{ac}(\nu)\) over a narrow window of \(\delta\nu=1.11\) around a given \(\nu\)[35]. Figure 3E shows the FFT at the \(x\) position marked by the white dashed line in Fig. 3A (see Movie S2 for FFT at other \(x\) positions).
In the integer QHE, the frequency of the QOs as a function of \(n\) is given by \(f=\frac{1}{N}\frac{\phi_{0}}{B_{a}}\), where \(N\) is the spin-valley degeneracy. For a given \(N\), \(f\) is determined solely by \(B_{a}\) and should thus be independent of the position and moire band filling \(\nu\). For \(|\nu|\lesssim 3.5\) the FFT reveals a peak at \(f\cong\frac{\phi_{0}}{4B_{a}}\) which is rather independent of \(\nu\) (see Movie S2 and Fig. S4 for an additional sample). This shows that the QOs at these fillings originate from the standard QHE with \(N=4\) spin and valley degenerate LLs.
At \(|\nu|\gtrsim 3.5\) the QOs show a remarkably rich behavior (Fig. 3E) that departs from the standard QHE in a number of ways: (i) The frequency of the oscillations is up to one order of magnitude lower than \(\frac{\phi_{0}}{4B_{a}}\). (ii) Rather than being restricted to integer fractions, \(f\) varies continuously as a function of \(\nu\). (iii) At higher filling factors more than one characteristic frequency is present simultaneously. (iv) \(f\) varies in space as can be appreciated from Fig. 3A and Movie S2. These features reveal the presence of narrow moire bands with overlapping FSs as discussed next.
Figure 2: **Imaging the dHvA effect.** (**A**) Top: Optical image of the BLG/hBN sample with indicated contacts for \(\rho_{xx}\) and \(\rho_{yx}\) measurements. The dashed yellow rectangle marks the area imaged in (**B**) and the dotted black line marks the line cut presented in Fig. 3. Bottom: Schematic sample structure indicating the applied top-gate and back-gate voltages, \(V_{tg}^{dc}\) and \(V_{bg}^{dc}+V_{bg}^{ac}\), and the corresponding \(ac\) magnetic field \(B_{z}^{ac}\) imaged by the scanning SOT. (**B**) Map of orbital magnetization at \(B_{a}=334\) mT, \(T=300\) mK, and \(\nu=-7.65\) showing domains of positive and negative local magnetization \(m_{z}(x,y)\) with amplitude of up to \(\pm 500\)\(\mu_{\rm B}\)/electron. The color bar applies to all the panels. (**C**) Tomographic rendering of \(m_{z}(x,y,\nu)\) (see Movie S1). (**D**) Slice of the tomographic data \(m_{z}(y,\nu)\) at \(x=1.36\)\(\mu\)m. (**E**) \(m_{z}(x,\nu)\) slice along \(y=1.22\)\(\mu\)m.
Figure 3: **Evolution of the dHvA quantum oscillations with \(\nu\) and position.** (**A**) \(B_{z}^{ac}(x,\nu)\) measured along the dotted black line in Fig. 2A at \(B_{a}=300\) mT (see Fig. S2 for additional field values). At \(|\nu|\lesssim 3.5\) the orbital magnetization and the corresponding \(B_{z}^{ac}\) are weak, while for \(|\nu|\gtrsim 3.5\) large \(B_{z}^{ac}\) accompanied by pronounced low-frequency QOs are present. (**B**) Zoomed-in cross-section of \(B_{z}^{ac}(\nu)\) along the magenta segment in (**A**), showing low-frequency QOs with period that gradually varies with \(\nu\). (**C**) \(B_{z}^{ac}(\nu)\) cross-section along the red segment in (**A**), showing high-frequency periodic oscillations due to conventional four-fold-degenerate LLs. (**D**) \(B_{z}^{ac}(\nu)\) along the green segment in (**A**), revealing large-amplitude low-frequency oscillations comprising multiple frequencies. (**E**) FFT of \(B_{z}^{ac}(\nu)\) at \(x=4.97\) μm marked by the white dotted line in (**A**) performed over a narrow window of \(\delta\nu=1.11\) around \(\nu\) (see Movie S2 for FFT at different locations). The frequency is in units of \(\phi_{0}/4B_{a}\) and both positive and negative FFT frequencies are shown for clarity. At \(|\nu|\lesssim 3.5\) the QOs arise from conventional four-fold degenerate QHE LLs with \(f=\frac{\phi_{0}}{4B_{a}}\). For \(|\nu|\gtrsim 3.5\) the low-frequency oscillations are governed by multiple overlapping Fermi surfaces.
### Band structure of moire bilayer graphene
To gain further insight we perform continuum-model single-particle band structure (BS) calculations of BLG/hBN moire [35, 41]. The mismatch of graphene and hBN's lattice constants creates a real space superlattice, and causes band folding into the moire mini-Brillouin zone (mBz) [1, 2, 41, 42, 43, 44, 45]. Figure 4A shows the calculated conduction C1 and valence V1 flat bands along with highly-overlapping remote valence bands V2 and V3. Following [41], we center the mBz around the original graphene \(K\) point, and label the original hBN Brillouin zone corner \(Y\) and its time reversal counterpart \(X\). Generally, the inequivalent lattice sites in the hBN substrate break inversion symmetry and open a gap at the CNP. Remote bands, such as V2 and V3, however, are highly overlapping, creating complex Fermi pockets. Depending on the specifics of band structure parameters, either a full gap or a minimum in DOS occurs between V1 and V2 at \(\nu=-4\)[35].
For \(|\nu|\lesssim 3.5\) the FS topology is simple with a single Fermi pocket (FP) around the \(K\) point (Fig. 4B). At \(|\nu|\cong 3.5\) a Lifshitz transition occurs (consistent with the observed van Hove singularity at low \(B_{a}\) in Fig. 1D), resulting in the formation of two FPs centered around \(X\) and \(Y\) points (see Movie S3). With increasing \(|\nu|\), the FS topology becomes more complicated, consisting of three or more FPs due to overlapping bands (Fig. 4C). Each FP accommodates independent LLs leading to QOs with multiple fundamental frequencies. Traditionally, in bulk materials, QOs are measured vs. \(1/B_{a}\), in which case each FP contributes an oscillation with a frequency proportional to the FP area \(S_{i}\)[20, 22]. This relation, however, does not hold for QOs measured vs. \(n\), in which case the oscillation frequencies are given by \(f_{i}=\frac{\mathcal{N}_{i}(\epsilon_{F})}{\mathcal{N}(\epsilon_{F})}\frac{\phi_{0}}{4B_{a}}\), where \(\mathcal{N}_{i}(\epsilon_{F})=\frac{1}{4\pi^{2}}\frac{\partial S_{i}}{\partial\epsilon}\) is the DOS of pocket \(i\), \(\mathcal{N}(\epsilon_{F})=\sum_{i}\mathcal{N}_{i}(\epsilon_{F})\) is the total DOS, \(\epsilon_{F}\) is the Fermi energy, and we consider four-fold degenerate bands [35]. This stems from the fact that varying \(B_{a}\) affects the cyclotron motion of the entire Fermi sea, whereas varying \(n\) affects the behavior only near \(\epsilon_{F}\). Hence, upon increasing \(\epsilon_{F}\) by \(\Delta\epsilon\), one LL is added to a pocket when its \(S_{i}\) has increased by \(\Delta S_{i}=4\pi^{2}\mathcal{N}_{i}(\epsilon_{F})\Delta\epsilon=\frac{4\pi^{2}B_{a}}{\phi_{0}}\). This leads to the Onsager sum rule of the fundamental frequencies, \(\sum_{i}f_{i}=f_{0}=\frac{\phi_{0}}{4B_{a}}\).
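As a concrete illustration of this relation, the minimal sketch below converts pocket areas \(S_{i}(\epsilon)\) into per-pocket oscillation frequencies; it assumes (for illustration only) that the areas have been tabulated on a common energy grid, e.g., from the continuum-model bands. By construction, the resulting frequencies obey the Onsager sum rule.

```python
import numpy as np

def pocket_frequencies(S, eps, B_a, phi0=4.135667e-15):
    """Per-pocket QO frequencies vs. density, f_i = (N_i/N) * phi0 / (4 B_a).

    S   : (n_pockets, n_eps) array of pocket areas S_i(eps) [m^-2].
    eps : (n_eps,) energy grid (only ratios of slopes matter).
    B_a : applied field [T]; phi0 = h/e is the flux quantum [T m^2].
    """
    dS = np.gradient(S, eps, axis=1)            # proportional to pocket DOS N_i
    f = dS / dS.sum(axis=0) * phi0 / (4 * B_a)  # relative DOS sets f_i
    return f  # at each energy the f_i sum to phi0/(4 B_a): Onsager sum rule
```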
Figure 4D shows the experimental FFT data from Fig. 3E overlaid with color-coded lines indicating the \(f_{i}(\nu)\) of the different pockets calculated from the BS. For hole doping at \(0>\nu>-3.5\), only one FP around the \(K\) point, V1\({}_{\rm K}\), is present, resulting in a single frequency \(f_{\rm V1K}=f_{0}\) (green in Fig. 4D). At the Lifshitz transition at \(\nu\cong-3.5\) the FS breaks into two pockets around the mBz corners, V1\({}_{\rm X}\) and V1\({}_{\rm Y}\) (Movie S3). As a result, two QO frequencies coexist for a small region of \(\nu\), \(f_{\rm V1X}\) and \(f_{\rm V1Y}\), until V1 (green) overlaps V2 (light brown). At \(\nu<-5\), the V3 band (pink) starts to be occupied, forming two FPs, V3\({}_{\rm X}\) and V3\({}_{\rm Y}\), whose DOS increases with \(|\nu|\) and which coexist with the V2\({}_{\rm K}\) FP of decreasing DOS (Fig. 4C). As a result, the two V3 frequencies, \(f_{\rm V3X}\) and \(f_{\rm V3Y}\), grow with \(|\nu|\) (pink in Fig. 4D), while the \(f_{\rm V2K}\) frequency decreases (light brown). The calculated behavior for \(-5>\nu>-10.5\) closely follows the experimentally derived frequencies and their evolution with \(\nu\). Similar behavior is observed for electron doping at \(\nu>3.5\).
With the above insight we note that our high sensitivity to the oscillation frequencies at fillings where multiple Fermi pockets coexist makes the nanoscale magnetization imaging a uniquely sensitive tool for mapping the local band structure and extracting BS parameters. There has been little theoretical work and limited experimental determination of the coupling strengths between hBN and graphene. In Ref. [41] the tunneling strengths between overlapping boron and carbon atoms (\(t_{BC}\)) and between overlapping nitrogen and carbon (\(t_{NC}\)) are taken to be equal, or \(r_{NB}\equiv t_{NC}/t_{BC}=1\). In Ref. [45], \(r_{NB}\cong 0.67\), based on _ab initio_ DFT calculations. We find that both these parameter sets fail to fit our experimental data (Fig. S5).
Recent STM experiments have revealed significant lattice relaxation in magic-angle twisted bilayer graphene [46] arising from the higher energy of \(AA\) stacking atomic configuration in comparison to \(AB\) configuration. This lattice relaxation has significant impact on the calculated BS, including gap opening between the flat and dispersive bands in magic-angle graphene [47, 48], as observed experimentally [4, 49]. In the continuum model, this lattice relaxation is captured with the phenomenological parameter \(w=t_{AA}/t_{AB}\approx 0.8\)[50]. Recently, molecular dynamics simulations on aligned BLG/hBN heterostructures have also shown significant lattice relaxation, which has been proposed to have an effect on the topology of the system [19].
By fitting the high resolution QOs we find \(r_{NB}\cong 0.5\) and \(w\cong 0.5\), significantly lower than evaluated previously, leading to larger band overlaps with smaller energy gaps (Fig. S5) and to magnetic breakdown as discussed next. This finding of strong lattice relaxation in aligned BLG/hBN calls for further exploration of lattice relaxation mechanisms and its effect on hBN aligned moire heterostructures.
Figure 4: **Calculation of BLG/hBN band structure and of QOs.** (**A**) Single-particle band structure calculation for a single valley (\(K\)) showing the conduction flat band (C1, top), valence flat band (V1, middle) and two partially overlapping remote valence bands (V2 and V3, bottom) in the moiré mini-Brillouin zone of BLG aligned to hBN with \(\theta=0.75^{\circ}\). The bands are four-fold degenerate with \(K^{\prime}\) bands rotated by \(180^{\circ}\). Tight-binding parameters were chosen to best fit (D) with \(r_{NB}=0.5\) and \(w=0.5\) ([35] and Fig. S5). (**B**-**C**) Example of simple (B) and complex (C) Fermi surfaces (solid contours) at \(\nu=2.054\) and \(-10.14\), respectively. The dashed contours indicate the change in the areas of the Fermi pockets with a small increase in \(\nu\), reflecting the DOS and the QO frequencies of each pocket. (**D**) The FFT of QOs from Fig. 3E, overlaid with fundamental frequencies (lines coded by band colors) calculated by the relative DOS of each pocket, \(f_{i}(\nu)=\frac{\mathcal{N}_{i}(\epsilon_{F})}{\mathcal{N}(\epsilon_{F})}\frac{\phi_{0}}{4B_{a}}\).
### Magnetic breakdown
The BS calculations with enhanced lattice relaxation provide a good description of the observed fundamental QO frequencies. Yet, there are a number of prominent lines in Fig. 4D that are not accounted for by the calculated \(f_{i}(\nu)\). Moreover, these lines do not obey the Onsager band area sum rule \(\sum_{i}f_{i}=f_{0}\) and cannot be explained by simple harmonics of \(f_{i}\). These unaccounted-for lines indicate the presence of new electron orbits that encompass areas outside the closed FS contours. Such trajectories can be facilitated by interband electron tunneling due to the CMB mechanism, which has been widely investigated in bulk metals [27], but so far has not been identified in 2D vdW materials. When two Fermi pockets are separated by a small momentum gap \(\Delta k\), the magnetic-field-induced interpocket tunneling occurs with probability
\[P\cong e^{-\frac{B_{MB}}{B_{a}}},\qquad B_{MB}=\frac{\phi_{0}}{2}\left(\frac{\Delta k^{3}}{\frac{1}{R_{1}}+\frac{1}{R_{2}}}\right)^{1/2}\cong\frac{\phi_{0}}{2}\Delta k^{2},\]
where \(B_{MB}\) is the breakdown field, and \(R_{1}\) and \(R_{2}\) are the \(k\)-space radii of curvature of the two FS in the gap region [27]. For \(B_{MB}\lesssim B_{a}=0.3\) T this requires \(\Delta k\lesssim 0.012\) nm\({}^{-1}\). Our BS calculations indeed show very small gaps between Fermi pockets in remote bands (Fig. 5B).
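A quick back-of-the-envelope check (a sketch using the approximate form \(B_{MB}\cong\frac{\phi_{0}}{2}\Delta k^{2}\); the gap values are those quoted in the text) reproduces the stated thresholds and tunneling probabilities:

```python
import numpy as np

phi0 = 4.135667e-15      # flux quantum h/e [T m^2]
B_a = 0.3                # applied field [T]

for dk_nm in (0.002, 0.005, 0.010, 0.012):    # momentum gaps quoted in the text
    dk = dk_nm * 1e9                          # nm^-1 -> m^-1
    B_MB = 0.5 * phi0 * dk**2                 # approximate breakdown field
    P = np.exp(-B_MB / B_a)                   # interpocket tunneling probability
    print(f"dk = {dk_nm:.3f} nm^-1: B_MB = {B_MB:.3f} T, P = {P:.2f}")
# dk = 0.010 nm^-1 gives P ~ 0.5, and B_MB reaches B_a near dk = 0.012 nm^-1.
```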
To derive the MB orbits and their corresponding QO frequencies \(f_{\rm MB}\), we analyze two prominent unaccounted-for lines in the experiment at \(5<\nu<10.5\) and \(-12.5<\nu<-10\) (Figs. 5C,E, bright green). Figure 5A shows the FS structure at \(\nu=8.4\), displaying electron pockets at \(X\) and \(Y\) originating from the C3 band (red), and a hole pocket at \(K\) from the C2 band (dark pink). The sharp touching points between the C3\({}_{\rm Y}\) and C2\({}_{\rm K}\) pockets are characterized by a very small momentum gap \(\Delta k\cong 0.010\) nm\({}^{-1}\), while the gaps between the C3\({}_{\rm X}\) and C2\({}_{\rm K}\) pockets have an even smaller \(\Delta k\cong 0.005\) nm\({}^{-1}\) as shown in the Fig. 5B inset, leading to CMB at low fields. Moreover, in contrast to the common situations where close proximity of FPs is limited to the vicinity of Lifshitz transitions, the unique highly overlapping BS of relaxed BLG/hBN leads to small gaps extending over almost the entire energy range of the remote bands with sharp ridges that closely follow each other (Fig. 5B). The tunneling between the FPs can give rise to a number of extended equi-energy electron orbits. The green dashed line in Fig. 5A shows the shortest orbit that traces \(2/3\) of the circumference of the two electron pockets C3\({}_{\rm X}\) and C3\({}_{\rm Y}\) and \(1/3\) of the circumference of the hole pocket C2\({}_{\rm K}\). As a result, the QO frequency of this orbit is given by a fractional Onsager relation,
\[f_{\rm MB}=\tfrac{1}{3}(2f_{\rm C3X}+2f_{\rm C3Y}+f_{\rm C2K}),\]
where \(f_{i}\) are the corresponding QO frequencies of the individual FPs, in contrast to common CMB behavior with integer Onsager relations [27, 32]. Figure 5C shows a good fit between the calculated \(f_{\rm MB}\) (green) and the experimentally unaccounted-for QO line. Interestingly, a gap of \(\Delta k\cong 0.01\) nm\({}^{-1}\) corresponds to tunneling probability \(P\approx 0.5\), allowing the carriers to orbit along both the closed FSs and along the CMB trajectories. As a result, both the fundamental (red and pink) and the CMB frequency lines (green) are observed concurrently in Fig. 5C.
Figure 5D shows the BS at \(\nu=-12.28\), displaying three degenerate hole pockets in the V4 band and an electron pocket at \(K\) in the V3 band. CMB creates an electron trajectory that flows along the inner and outer edges of the V3 and V4 pockets (green dashed line in Fig. 5D), which explains well the unaccounted-for QO line in Fig. 5E as marked by the calculated green \(f_{\rm MB}\) line. Notably, the fundamental \(f_{\rm V3}\) line (purple) is essentially invisible in the experiment. This can be understood in view of the extremely small gap between the V3 and V4 pockets with \(\Delta k\cong 0.002\) nm\({}^{-1}\) and very small radii of curvature \(R\). As a result, the electrons tunnel between the pockets with probability \(P\cong 1\), leaving essentially no carriers that circulate exclusively in the V3 pocket and hence no
detectable \(f_{\rm V3}\). Moreover, in contrast to the usual CMB behavior [32] and unlike the \(f_{\rm MB}\) line in the conduction bands (green in Fig. 5C), the \(f_{\rm MB}\) line in Fig. 5E cannot be expressed as either integer or fractional Onsager combination of the fundamental frequencies due to nontrivial evolution of the FS with doping.
Figure 5: **Coherent magnetic breakdown.** (**A**) Constant energy band structure cut at \(\nu=8.4\) showing the occupied C3\({}_{\rm X}\) and C3\({}_{\rm Y}\) electron pockets (red, clockwise black arrows) and the C2\({}_{\rm K}\) hole pocket (dark pink, counter-clockwise arrows). The dashed green trajectory indicates the shortest magnetic breakdown orbit. (**B**) Cut of the band structure along the dashed black line in (**A**). The dashed line at \(\epsilon=154.2\) meV corresponds to the energy value in (**A**). The gap between the C2\({}_{\rm K}\) and C3\({}_{\rm X}\) pockets remains very small over a large range of energies. Inset: Zoom-in showing a gap of \(\Delta k\cong 0.005\) nm\({}^{-1}\). (**C**) Expanded view of the FFT of \(B_{z}^{ac}(\nu)\) at \(x=2.04\) μm showing a pronounced frequency line for \(4.5<\nu<10\) that does not obey the sum rule. This frequency is accounted for by the CMB orbit in (**A**), resulting in \(f_{\rm MB}=\frac{1}{3}(2f_{\rm C3X}+2f_{\rm C3Y}+f_{\rm C2K})\) (green line). Positive and negative FFT frequencies are shown for clarity. (**D**) Constant energy band structure cut at \(\nu=-12.28\) showing three degenerate hole pockets from the V4 band (brown) and one electron pocket from the V3 band (light purple). The dashed green trajectory indicates the shortest CMB orbit. (**E**) FFT of \(B_{z}^{ac}(\nu)\) at \(x=4.44\) μm with overlaid calculated \(f_{\rm MB}\) (green line).
## Discussion
Due to the extreme sensitivity of the QOs to the band structure details, our measurement provides a unique probe of multi-band moire FSs and their low-energy electronic properties. Crucially, the ability to detect thermodynamic QOs at low fields allows probing of the BS with high energy resolution and without perturbing it by high magnetic fields. In particular, our results show that the hBN-graphene coupling is substantially weaker than previously estimated values, giving \(r_{NB}=t_{NC}/t_{BC}\cong 0.5\) and a large lattice relaxation with \(w=t_{AA}/t_{AB}\cong 0.5\), a direct demonstration of a weak moire potential with strongly overlapping bands and small energy gaps. This fine potential modulation is readily washed out at elevated magnetic fields, leading to a full breakdown in which the carriers orbit mostly along the original BLG FS unperturbed by the moire potential (Fig. S6 and [35]). Indeed, the \(n_{H}\) measured at \(B_{a}=4.2\) T in Fig. 1D shows essentially no signs of the moire multiband structure resolved at low fields.
Our findings open up a previously uncharted regime of exotic CMB physics at unusually low \(B_{a}\), arising due to cyclotron orbits delocalized in \(k\)-space and supporting states coherently entangled among different sub-bands. This regime is manifested through QOs that do not obey Onsager's FS area sum rule. Instead, a fractional Onsager quantization relation is observed that indicates the occurrence of particle-hole superposition states shared by adjacent bands and exhibiting a high degree of interband phase coherence. Remarkably, the real-space cyclotron radius of the observed CMB orbits in Fig. 5A is as large as \(R_{c}=\frac{\phi_{0}}{2\pi B_{a}}\left(\frac{S}{\pi}\right)^{1/2}\cong 500\) nm, where \(S\) is the \(k\)-space area enclosed by the orbit, a value comparable to the characteristic scales of disorder and sample dimensions [35]. The particle-hole coherence induced by CMB is an appealing and not-yet-explored direction for band engineering in moire materials.
|
2309.16931 | Rationality and connectivity in stochastic learning for networked
coordination games | Coordination is a desirable feature in many multi-agent systems such as
robotic and socioeconomic networks. We consider a task allocation problem as a
binary networked coordination game over an undirected regular graph. Each agent
in the graph has bounded rationality, and uses a distributed stochastic
learning algorithm to update its action choice conditioned on the actions
currently played by its neighbors. After establishing that our framework leads
to a potential game, we analyze the regime of bounded rationality, where the
agents are allowed to make sub-optimal decisions with some probability. Our
analysis shows that there is a relationship between the connectivity of the
network, and the rationality of the agents. In particular, we show that in some
scenarios, an agent can afford to be less rational and still converge to a near
optimal collective strategy, provided that its connectivity degree increases.
Such phenomenon is akin to the wisdom of crowds. | Yifei Zhang, Marcos M. Vasconcelos | 2023-09-29T02:18:52Z | http://arxiv.org/abs/2309.16931v1 | # Rationality and connectivity in stochastic learning for networked coordination games
###### Abstract
Coordination is a desirable feature in many multi-agent systems such as robotic and socioeconomic networks. We consider a task allocation problem as a binary networked coordination game over an undirected regular graph. Each agent in the graph has bounded rationality, and uses a distributed stochastic learning algorithm to update its action choice conditioned on the actions currently played by its neighbors. After establishing that our framework leads to a potential game, we analyze the regime of bounded rationality, where the agents are allowed to make sub-optimal decisions with some probability. Our analysis shows that there is a relationship between the connectivity of the network, and the rationality of the agents. In particular, we show that in some scenarios, an agent can afford to be less rational and still converge to a near optimal collective strategy, provided that its connectivity degree increases. Such phenomenon is akin to the _wisdom of crowds_.
## I Introduction
In strategic decision-making over a social network, every agent must take into account the behavior of its neighbors when choosing an action. Agents in the system are often characterized by a certain level of rationality, which can be modeled by how much weight (if any) an agent assigns to the actions taken by its neighbors. In one extreme, a completely irrational agent makes decisions by randomly picking an action from a finite discrete set with a uniform distribution. On the opposite extreme, a fully rational agent optimizes its local objective taking into account the actions played by its neighbors; in other words, the agent plays a _best-response_. Somewhere in between these extremes, an agent with bounded rationality will occasionally make mistakes, resulting in a loss in performance.
In this paper, we investigate how rationality and connectivity interact within a simple learning-in-games setting [1]. In particular, we are interested in establishing whether highly connected agents need to be more or less rational during the process of learning how to play a Nash equilibrium. We show that for a coordination game defined over a network with fixed connectivity degree (a regular graph), there is a regime in which higher connectivity implies that the agents can afford to be less rational. This is somewhat related to the _wisdom of the crowds_ [2], in which an optimal aggregate decision emerges from individuals acting independently.
Coordination games are characterized by the existence of multiple Nash equilibria where the agents play the same action [3]. These types of games can be used to model many social-economic phenomena such as technology adoption [4], political revolutions [5], task collaboration in robotic networks [6, 7, 8] and microbiology [9]. Network games consider the interaction of strategic agents over a graph, and how the graph influences the structure of equilibria [10, 11, 12]. A model of networked coordination games subject to cyber-security attacks have been considered in [13, 14]. The topic of learning in networked coordination games was investigated in [15, 16] for fully rational agents under the best-response dynamics.
The centerpiece of this paper is the analysis of a non-trivial interplay between the rationality of agents and their connectivity in a specific class of coordination games. We look at the convergence of the learning process to a Nash equilibrium when the agents have bounded rationality. This is the case when the agents may choose actions which are **not** the best-response to the actions of their neighbors at any given time. A learning model that captures the rationality of an agent is called _Log-Linear Learning_ (LLL) [17, 18].
LLL is intimately related to the class of potential games [19]. A seminal result shows that when the agents in a potential game use LLL, they converge to one of the Nash equilibria of the game in probability, as the rationality parameter tends to infinity. While the agents behave predictably in the asymptotic regime, this situation rarely occurs in practice, as humans tend to behave with varying degrees of (bounded) rationality. In this regime, the network's connectivity seems to play a very important role. In the bounded rationality regime, a consensus configuration can only be guaranteed with high probability, but never with probability \(1\). Our main contribution is to show that the minimum level of rationality required to achieve a certain level of coordination may decrease as the connectivity degree of the graph increases. We investigate this issue by constraining our setup to regular graphs [20, and references therein], which greatly reduces the overall complexity of the analysis.
The rest of the paper is organized as follows. In Section II, we introduce our task allocation networked coordination game. In Section III, we show that when the network is described by a regular graph of \(N\) vertices and degree \(K\), the game is potential. In Section IV, we obtain an alternative expression for the potential function. In Section V, we show that the only maximizers of the potential function are the two consensus configurations and we obtain a complete characterization of when each one is optimal. In Section VI, we obtain the relationship between rationality and connectivity for a large subset of the games considered herein. In Section VII, we present a numerical example. The paper concludes
in Section VIII, where we present future research directions.
## II Problem setup
Here we define a class of binary networked coordination games that will be the focus of this paper. Let \([N]\stackrel{{\mathrm{def}}}{{=}}\{1,2,\ldots,N\}\) denote the set of agents in the network described by an undirected and connected graph \(\mathcal{G}\stackrel{{\mathrm{def}}}{{=}}([N],\mathcal{E})\). Two nodes \(i,j\in[N]\) are connected if \((i,j)\in\mathcal{E}\). The set of neighbors of agent \(i\) is denoted by \(\mathcal{N}_{i}\stackrel{{\mathrm{def}}}{{=}}\{j\in[N]\mid(i,j) \in\mathcal{E}\}\). The number of neighbors of agent \(i\) is denoted by \(|\mathcal{N}_{i}|\). We assume there are no self loops, i.e., \((i,i)\notin\mathcal{E}\), \(i\in[N]\).
Let \((i,j)\in\mathcal{E}\), and suppose that \(a_{i},a_{j}\in\{0,1\}\) are the actions played by agents \(i\) and \(j\), respectively. For \(\theta\in\mathbb{R}\), the following bi-matrix game specifies the payoffs for the pairwise interaction between \(i\) and \(j\).
**Remark 1** (Payoff interpretation): _The payoff structure of the bi-matrix game in Fig. 1 corresponds to a coordination game between two agents. Notice the payoff matrix depends on a parameter \(\theta\in\mathbb{R}\), which we refer to as a task difficulty. The difficulty is amortized over the total number of agents in the system, \(N\). We refer to them as subtasks of difficulty \(\theta/N\). However, when agent \(i\) decides to take on a subtask, it contributes \(1/|\mathcal{N}_{i}|\) units of effort towards the subtask of each of its neighbors. Therefore, depending on \(\theta\), a single agent may not be able to carry out its subtask by itself. This payoff structure simultaneously captures coordination and the graph structure of a networked system in a distributed task allocation problem._
A binary coordination game between two players is characterized by the existence of two pure strategy Nash equilibria. The following result establishes the range of values of \(\theta\) for which the game in Fig. 1 corresponds to a coordination game.
**Proposition 1**: _Consider the bimatrix game in Fig. 1, and \(\mathcal{S}_{ij}\) denote its set of pure-strategy Nash equilibria. Let \(M_{ij}\stackrel{{\mathrm{def}}}{{=}}N/\max\{|\mathcal{N}_{i}|,| \mathcal{N}_{j}|\}\). The following holds:_
\[\mathcal{S}_{ij}=\begin{cases}\big{\{}(0,0)\big{\}}&\text{if}\ \ \theta>M_{ij}\\ \big{\{}(0,0),(1,1)\big{\}}&\text{if}\ \ 0\leq\theta\leq M_{ij}\\ \big{\{}(1,1)\big{\}}&\text{if}\ \ \theta\leq 0.\end{cases} \tag{1}\]
The proof can be obtained by inspection using the definition of a Nash equilibrium [1].
### _Coordination games over networks_
We study a network coordination game with \(N\) agents, where agent \(i\) plays the same action with all of its neighbors \(j\in\mathcal{N}_{i}\). Let \(V_{i}:\{0,1\}^{2}\rightarrow\mathbb{R}\) be defined as:
\[V_{i}(a_{i},a_{j})\stackrel{{\mathrm{def}}}{{=}}a_{i}\Big{(} \frac{a_{j}}{|\mathcal{N}_{i}|}-\frac{\theta}{N}\Big{)}. \tag{2}\]
In a network game, the payoff that one player receives is the sum of all the payoffs of the bi-matrix games \(V_{i}(a_{i},a_{j})\) played with each one of its neighbors. Therefore for the \(i\)-th player, the utility is determined as follows:
\[U_{i}(a_{i},a_{-i})\stackrel{{\mathrm{def}}}{{=}}\sum_{j\in \mathcal{N}_{i}}V_{i}(a_{i},a_{j}). \tag{3}\]
Therefore, the payoff of the \(i\)-th agent in our game is
\[U_{i}(a_{i},a_{-i})=a_{i}\Big{(}\frac{1}{|\mathcal{N}_{i}|}\sum_{j\in \mathcal{N}_{i}}a_{j}-\frac{|\mathcal{N}_{i}|}{N}\theta\Big{)}. \tag{4}\]
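As a minimal computational sketch, Eq. (4) can be evaluated directly from an adjacency-list description of the graph; the function and variable names below are illustrative choices, not notation from the references.

```python
def payoff(i, a, adj, theta):
    """Utility U_i(a_i, a_{-i}) of Eq. (4).

    i     : agent index.
    a     : action profile, a sequence of 0s and 1s of length N.
    adj   : adjacency list, adj[i] is the list of neighbors of agent i.
    theta : task difficulty.
    """
    N = len(a)
    deg = len(adj[i])
    return a[i] * (sum(a[j] for j in adj[i]) / deg - deg * theta / N)
```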
## III A potential networked coordination game
We are interested in obtaining a potential networked coordination game. Potential games [19] have many interesting properties. For example, a potential game implies the convergence of learning algorithms, as will be the case of log-linear learning, which will be the focus of the next section. In this section, we will show that network regularity implies the game is potential. We start with the definition of a potential game.
### _Potential Games_
**Definition 1**: _Let \(\mathcal{A}_{i}\) denote the action set of the \(i\)-th agent in a game with payoff functions \(U_{i}(a_{i},a_{-i})\), \(i\in[N]\). Let \(\mathcal{A}=\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{n}\). A game is an exact potential game if there is a so-called potential function \(\Phi\colon\mathcal{A}\rightarrow\mathbb{R}\) such that_
\[U_{i}(a^{\prime}_{i},a_{-i})-U_{i}(a^{\prime\prime}_{i},a_{-i})=\Phi(a^{\prime }_{i},a_{-i})-\Phi(a^{\prime\prime}_{i},a_{-i}), \tag{5}\]
_for all \(a^{\prime}_{i},a^{\prime\prime}_{i}\in A_{i}\), \(a_{-i}\in A_{-i}\), \(i\in[N]\)._
**Theorem 1**: _Consider the networked coordination game indexed by the parameter \(\theta\). Let \(K\in\{2,\ldots,N-1\}\). If the network is regular, i.e., if \(|\mathcal{N}_{i}|=K,\ i\in[N]\), the game is potential for any value of \(\theta\). In particular, a potential function for this game is given by \(\Phi(a)\) defined as_
\[\Phi(a)\stackrel{{\mathrm{def}}}{{=}}\frac{1}{2}\sum_{i\in[N]} \sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j}), \tag{6}\]
_where_
\[\phi(a_{i},a_{j})\stackrel{{\mathrm{def}}}{{=}}\frac{a_{i}a_{j}}{ K}+(1-a_{i}-a_{j})\frac{\theta}{N}. \tag{7}\]
For a regular graph of connectivity \(K\), the coordination game is given by:
\[\begin{array}{c|cc} & a_{j}=0 & a_{j}=1\\ \hline a_{i}=0 & (0,\,0) & \left(0,\,-\frac{\theta}{N}\right)\\ a_{i}=1 & \left(-\frac{\theta}{N},\,0\right) & \left(\frac{1}{K}-\frac{\theta}{N},\,\frac{1}{K}-\frac{\theta}{N}\right)\end{array}\]
Fig. 1: A coordination game with parameter \(\theta\) between two players.
Fig. 2: A coordination game with parameter \(\theta\) between two players on a regular graph of degree \(K\).
The game above is an exact potential game with potential function determined by the matrix in Fig. 3. Therefore, the following holds:
\[\phi(a^{\prime}_{i},a_{j})-\phi(a^{\prime\prime}_{i},a_{j})=V_{i}(a^{\prime}_{i}, a_{j})-V_{i}(a^{\prime\prime}_{i},a_{j}), \tag{8}\]
for all \(a^{\prime}_{i},a^{\prime\prime}_{i}\in\{0,1\}\) such that \(a^{\prime}_{i}\neq a^{\prime\prime}_{i}\).
The matrix above corresponds to \(\phi\) defined in Eq. (7). Define the function \(\Phi:\{0,1\}^{N}\rightarrow\mathbb{R}\) such that
\[\Phi(\mathbf{a})\stackrel{{\mathrm{def}}}{{=}}\frac{1}{2}\sum_{i \in[N]}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j}). \tag{9}\]
We proceed by verifying that the function in Eq. (9) satisfies the condition in Eq. (5).
Let \(m\in[N]\), and \(a^{\prime}_{m},a^{\prime\prime}_{m}\in\{0,1\}\) such that \(a^{\prime}_{m}\neq a^{\prime\prime}_{m}\). Then,
\[\Phi(a^{\prime}_{m},a_{-m})-\Phi(a^{\prime\prime}_{m},a_{-m})=\\ \frac{1}{2}\sum_{i\in[N]}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{ j})\Bigg{|}_{(a^{\prime}_{m},a_{-m})}\\ -\frac{1}{2}\sum_{i\in[N]}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_ {j})\Bigg{|}_{(a^{\prime\prime}_{m},a_{-m})}. \tag{10}\]
Then, notice that
\[\sum_{i\in[N]}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j})=\sum_{j\in\mathcal{ N}_{m}}\phi(a_{m},a_{j})+\sum_{i\neq m}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j}). \tag{11}\]
Recall that
\[\phi(a^{\prime}_{m},a_{j})-\phi(a^{\prime\prime}_{m},a_{j})=V_{m}(a^{\prime}_ {m},a_{j})-V_{m}(a^{\prime\prime}_{m},a_{j}). \tag{12}\]
Therefore,
\[\Phi(a^{\prime}_{m},a_{-m})-\Phi(a^{\prime\prime}_{m},a_{-m})=\frac{1}{2}\sum_{j\in\mathcal{N}_{m}}\big[V_{m}(a^{\prime}_{m},a_{j})-V_{m}(a^{\prime\prime}_{m},a_{j})\big]+\frac{1}{2}\sum_{i\neq m}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j})\Bigg{|}_{(a^{\prime}_{m},a_{-m})}-\frac{1}{2}\sum_{i\neq m}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j})\Bigg{|}_{(a^{\prime\prime}_{m},a_{-m})}. \tag{13}\]
The first term in Eq. (13) is equal to
\[\frac{1}{2}\big{[}U_{m}(a^{\prime}_{m},a_{-m})-U_{m}(a^{\prime\prime}_{m},a_{ -m})\big{]}. \tag{14}\]
We proceed with showing that the other two terms also add up to the same value.
For all \(i\neq m\) such that \(m\notin\mathcal{N}_{i}\),
\[\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j})\Bigg{|}_{(a^{\prime}_{m},a_{-m})} =\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_{j})\Bigg{|}_{(a^{\prime\prime}_{m},a_ {-m})}. \tag{15}\]
Define the following set:
\[\mathcal{S}_{m}\stackrel{{\mathrm{def}}}{{=}}\big{\{}i\mid i\neq m \text{ and }m\in\mathcal{N}_{i}\big{\}} \tag{16}\]
and evaluate the difference
\[\sum_{i\in\mathcal{S}_{m}}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a_ {j})\Bigg{|}_{(a^{\prime}_{m},a_{-m})}\\ -\sum_{i\in\mathcal{S}_{m}}\sum_{j\in\mathcal{N}_{i}}\phi(a_{i},a _{j})\Bigg{|}_{(a^{\prime\prime}_{m},a_{-m})}, \tag{17}\]
which is equal to
\[\sum_{i\in\mathcal{S}_{m}}\phi(a_{i},a^{\prime}_{m})\Bigg{|}_{(a^{\prime}_{m}, a_{-m})}-\sum_{i\in\mathcal{S}_{m}}\phi(a_{i},a^{\prime\prime}_{m})\Bigg{|}_{(a^{ \prime\prime}_{m},a_{-m})}. \tag{18}\]
Since
\[\phi(a_{i},a_{j})=\phi(a_{j},a_{i}), \tag{19}\]
we have that Eq. (18) is equal to
\[\sum_{i\in\mathcal{S}_{m}}\phi(a^{\prime}_{m},a_{i})-\phi(a^{\prime\prime}_{m},a_{i}). \tag{20}\]
Finally, since \(\phi\) is a potential function for the two-player game, we have:
\[\sum_{i\in\mathcal{S}_{m}}V_{m}(a^{\prime}_{m},a_{i})-V_{m}(a^{ \prime\prime}_{m},a_{i})=\\ U_{m}(a^{\prime}_{m},a_{-m})-U_{m}(a^{\prime\prime}_{m},a_{-m}). \tag{21}\]
We have established that this networked coordination game admits an exact potential function when the graph is regular. In the next section, we will obtain a closed form expression for the potential function for this game, and prove that the only two possible Nash equilibria are \(\mathbb{0}_{N}\) and \(\mathbb{1}_{N}\).
## IV Alternative form for the potential function
**Proposition 2**: _The potential function \(\Phi\) for the coordination game over a regular network is given by_
\[\Phi(a)=\frac{\theta K}{2}-\frac{\theta K}{N}\sum_{i\in[N]}a_{i}+\frac{1}{2K}a ^{\mathsf{T}}\mathbf{A}a, \tag{22}\]
_where \(\mathbf{A}\) is the adjacency matrix for the undirected regular connected graph that describes the network, \(\theta\) is the task difficulty, \(K\) is the degree of the regular graph \(\mathcal{G}\), \(N\) is the number of agents in the network, \(a\) is the action profile, and \(a_{i}\) is the \(i\)-th component of \(a\)._
The proof is immediate from equations Eqs. (6) and (7). There is only one step that requires some attention, and that is with the following identity:
\[\sum_{i\in[N]}\sum_{j\in\mathcal{N}_{i}}a_{j}=K\sum_{i\in[N]}a_{i}. \tag{23}\]
Fig. 3: The potential function for the bi-matrix coordination game between two players on a regular graph of degree \(K\).
To prove it, notice that the left hand side of Eq. (23) corresponds to the number of active agents in each of the \(N\) local neighborhoods of \(\mathcal{G}\). Since each agent belongs to exactly \(K\) neighborhoods, every active agent is counted \(K\) times. Therefore, this sum is equal to \(K\cdot\mathbb{1}_{N}^{\mathsf{T}}a\).
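As a numerical sanity check of Theorem 1, the defining property in Eq. (5) can be verified by brute force on a sampled regular graph. The sketch below does so with networkx; the graph size and the value of \(\theta\) are arbitrary illustrative choices.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, K, theta = 8, 4, 1.5
A = nx.to_numpy_array(nx.random_regular_graph(K, N, seed=0))

def Phi(a):   # potential function of Eq. (22)
    return theta * K / 2 - theta * K / N * a.sum() + a @ A @ a / (2 * K)

def U(i, a):  # payoff of agent i, Eq. (4) with |N_i| = K
    return a[i] * (A[i] @ a / K - K * theta / N)

# Check Eq. (5): a unilateral deviation changes U_i and Phi identically.
for _ in range(100):
    a = rng.integers(0, 2, N).astype(float)
    b = a.copy()
    i = rng.integers(N)
    b[i] = 1 - b[i]
    assert np.isclose(U(i, b) - U(i, a), Phi(b) - Phi(a))
print("Eq. (22) is an exact potential for the payoffs in Eq. (4).")
```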
The seminal result obtained by Monderer and Shapley [19] establishes that the set of Nash equilibria of a potential game coincides with the set of maximizers of the potential function. Therefore, we proceed with the characterization of the solution set of the following optimization problem:
\[\begin{split}\underset{a}{\mathrm{maximize}}&\frac{1}{2}K\theta-\frac{K\theta}{N}\sum_{i\in[N]}a_{i}+\frac{1}{2K}a^{\mathsf{T}}\mathbf{A}a\\ \text{subject to}& a\in\{0,1\}^{N}.\end{split} \tag{24}\]
Before finding the maximizers of the problem in Eq. (24), we enumerate the following useful facts:
1. If \(\mathbf{A}\) is the adjacency matrix of a _regular_ graph of degree \(K\), then \(\mathbb{1}_{N}\in\mathbb{R}^{N}\) is an eigenvector of \(\mathbf{A}\), whose corresponding eigenvalue is \(\lambda_{1}=K\).
2. The largest eigenvalue (in magnitude) of \(\mathbf{A}\) is \(\lambda_{1}=K\).
3. Since \(\mathbf{A}\) is real and symmetric, it has \(N\) orthogonal eigenvectors \(\mathtt{e}_{1}\), \(\mathtt{e}_{2}\),..., \(\mathtt{e}_{N}\). These eigenvectors form an orthogonal basis for \(\mathbb{R}^{N}\).
4. The graph \(\mathcal{G}\) is connected if and only if \(\lambda_{1}=K\) has multiplicity one.
## V Maximizers of the potential function
### _An equivalent optimization problem_
Consider the orthonormal basis of \(\mathbb{R}^{N}\): \(\{\mathtt{e}_{1},\mathtt{e}_{2},\ldots,\mathtt{e}_{N}\}\), i.e., the set of eigenvectors of \(\mathbf{A}\). Therefore, any action profile \(a\in\{0,1\}^{N}\) can be written as a linear combination of \(\{\mathtt{e}_{i}\}_{i=1}^{N}\). That is,
\[a=\sum_{t=1}^{N}c_{t}\mathtt{e}_{t}, \tag{25}\]
where \(\mathtt{e}_{t}\) is an eigenvector of \(\mathbf{A}\) such that \(\left\|\mathtt{e}_{t}\right\|_{2}^{2}=1\), for \(t\in[N]\). Thus, the following identities hold:
\[\begin{split} a^{\mathsf{T}}\mathbf{A}a&=\Big{\langle} \sum_{t=1}^{N}c_{t}\mathtt{e}_{t},\sum_{t=1}^{N}c_{t}\mathbf{A}\mathtt{e}_{t} \Big{\rangle}\\ &=\Big{\langle}\sum_{t=1}^{N}c_{t}\mathtt{e}_{t},\sum_{t=1}^{N}c _{t}\lambda_{t}\mathtt{e}_{t}\Big{\rangle}\\ &=\sum_{t=1}^{N}\lambda_{t}c_{t}^{2},\end{split} \tag{26}\]
where \(\lambda_{t}\) is the \(t\)-th largest eigenvalue (not necessarily distinct) of \(\mathbf{A}\) corresponding to \(\mathtt{e}_{t}\). Since \(a\) is a binary vector, we have that \(\sum_{i\in[N]}a_{i}=\left\|a\right\|_{1}=\left\|a\right\|_{2}^{2}\). Then,
\[\sum_{i\in[N]}a_{i}=\langle a,a\rangle=\Big{\langle}\sum_{t=1}^{N}c_{t} \mathtt{e}_{t},\sum_{t=1}^{N}c_{t}\mathtt{e}_{t}\Big{\rangle}=\sum_{t=1}^{N}c _{t}^{2}. \tag{27}\]
Using Eqs. (26) and (27), we can rewrite the potential function as:
\[\begin{split}\Phi(a)&=\frac{1}{2}K\theta-\frac{K \theta}{N}\sum_{t=1}^{N}c_{t}^{2}+\frac{1}{2K}\sum_{t=1}^{N}\lambda_{t}c_{t}^{2 }\\ &=\frac{1}{2}K\theta+\sum_{t=1}^{N}\Big{(}\frac{\lambda_{t}}{2K} -\frac{K\theta}{N}\Big{)}c_{t}^{2}.\end{split} \tag{28}\]
Thus, the optimization problem in Eq. (24) can be reformulated in terms of \(c_{1},c_{2},\ldots,c_{N}\):
\[\begin{split}\underset{c_{1},c_{2},\ldots,c_{N}}{\mathrm{maximize}}& \frac{1}{2}K\theta+\sum_{t=1}^{N}\Big{(}\frac{\lambda_{t}}{2K}- \frac{K\theta}{N}\Big{)}c_{t}^{2}\\ \text{subject to}&\sum_{t=1}^{N}c_{t}\mathtt{e}_{t} \in\{0,1\}^{N},\end{split} \tag{29}\]
where the optimization variables \(c_{t}\in\mathbb{R}\), \(t\in[N]\).
### _A relaxed version of the problem_
The constraint in Eq. (29) is difficult to deal with. To find a solution to our optimization problem, we use the following relaxed version of the problem. Consider a new constraint \(\sum_{t=1}^{N}c_{t}^{2}\leq N\), which defines a closed ball that completely covers the feasible set of the original problem in Eq. (29). Therefore, we will solve
\[\begin{split}\underset{c_{1},c_{2},\ldots,c_{N}}{\mathrm{maximize}}& \frac{1}{2}K\theta+\sum_{t=1}^{N}\Big{(}\frac{\lambda_{t}}{2K}-\frac{K \theta}{N}\Big{)}c_{t}^{2}\\ \text{subject to}&\sum_{t=1}^{N}c_{t}^{2}\leq N. \end{split} \tag{30}\]
**Theorem 2**: _Let \(\mathcal{S}_{\mathcal{G}}\) denote the set of possible Nash equilibria for the network coordination game over a regular connected graph. Then,_
\[\mathcal{S}_{\mathcal{G}}=\{\mathbb{0}_{N},\mathbb{1}_{N}\}. \tag{31}\]
The proof corresponds to showing that the possible solutions to the optimization problem in Eq. (29) are \(\mathbb{1}_{N}\) and \(\mathbb{0}_{N}\). Let us solve the relaxed problem in Eq. (30). Notice that the objective function is continuous over a compact feasible set. Therefore, it always admits a solution. Intuitively, the problem consists of assigning a limited energy budget to each component \(c_{1},c_{2},\ldots,c_{N}\).
Recall that \(\lambda_{1}>\lambda_{2}\) for connected graphs. In particular, for a regular graph of degree \(K\), \(\lambda_{1}=K\). Therefore, we need to consider three cases:
* Case 1: Let \(\theta\) be such that \[\frac{\lambda_{1}}{2K}-\frac{K\theta}{N}<0\Rightarrow\frac{\lambda_{t}}{2K}- \frac{K\theta}{N}<0,\ \ t\in[N].\] (32) Therefore, \[c_{t}^{\star}=0,\ \ t\in[N].\] (33)
* Case 2: Let \(\theta\) be such that \[\frac{\lambda_{1}}{2K}-\frac{K\theta}{N}>0.\] (34)
Since
\[\frac{\lambda_{1}}{2K}-\frac{K\theta}{N}>\frac{\lambda_{t}}{2K}-\frac{K\theta}{N}, \ \ t\in[N]\backslash\{1\}, \tag{35}\]
the optimal allocation places the entire energy budget in \(c_{1}\). That is,
\[c_{t}^{\star}=\begin{cases}\pm\sqrt{N},&t=1\\ 0,&\text{otherwise}.\end{cases} \tag{36}\]
* Case 3: Let \(\theta\) be such that \[\frac{\lambda_{1}}{2K}-\frac{K\theta}{N}=0.\] (37) Then, \[\frac{\lambda_{t}}{2K}-\frac{K\theta}{N}<0,\ \ t\in[N]\backslash\{1\}.\] (38) Therefore, \[c_{t}^{\star}=\begin{cases}\zeta\in\big{[}-\sqrt{N},+\sqrt{N}\big{]}&t=1\\ 0,&\text{otherwise}.\end{cases}\] (39)
When \((c_{1},c_{2},\ldots,c_{N})\) takes the value \((\pm\sqrt{N},0,\ldots,0)^{\mathsf{T}}\) or \(\mathbb{0}_{N}\) in \(\mathbb{R}^{N}\), it corresponds to \(a=\mathbb{1}_{N}\) and \(\mathbb{0}_{N}\), respectively. In Case 3, notice that the optimal solution is a continuum. However, only when \(\zeta\in\{\pm\sqrt{N},0\}\) do we have a corresponding solution in the original feasible set, and these points map to \(a=\mathbb{1}_{N}\) and \(\mathbb{0}_{N}\), respectively. Since \(\mathbb{1}_{N}\) and \(\mathbb{0}_{N}\) are contained in the feasible set of the problem in Eq. (24), the relaxation in Eq. (30) is exact. Therefore, there is a unique \(a^{\star}\in\{\mathbb{1}_{N},\mathbb{0}_{N}\}\) that solves Eq. (24) if \(\theta\neq\frac{N}{2K}\), with corresponding optimal values of \((N-K\theta)/2\) and \(K\theta/2\), respectively.
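For small instances, Theorem 2 and the threshold \(\theta=N/(2K)\) can be confirmed by brute-force enumeration of all \(2^{N}\) action profiles. The sketch below does so on an arbitrary sampled regular graph (here \(N/(2K)=1\)).

```python
import itertools
import numpy as np
import networkx as nx

N, K = 8, 4                        # here N/(2K) = 1
A = nx.to_numpy_array(nx.random_regular_graph(K, N, seed=1))
profiles = [np.array(p, dtype=float)
            for p in itertools.product((0, 1), repeat=N)]

def Phi(a, theta):                 # potential function of Eq. (22)
    return theta * K / 2 - theta * K / N * a.sum() + a @ A @ a / (2 * K)

for theta in (-1.0, 0.5, 1.5, 3.0):
    best = max(profiles, key=lambda a: Phi(a, theta))
    print(f"theta = {theta:+.1f}: argmax Phi = {best.astype(int)}")
# The maximizer is 1_N for theta < N/(2K) and 0_N for theta > N/(2K).
```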
## VI Trade-off between rationality and connectivity
### _Log-Linear Learning_
We assume that all players in the network use a learning algorithm known as _Log-Linear Learning_ (LLL) [17, 18]. LLL is a widely used learning algorithm where each agent updates its action at time \(t\) based on its payoff given the actions played by its neighbors at time \(t-1\). At each time step \(t>0\), one agent \(i\in[N]\) is chosen uniformly at random, and allowed to update its current action. All other agents repeat the action taken at the previous time-step. The probability that agent \(i\) chooses action \(a_{i}\in\{0,1\}\) is determined as follows [17]:
\[\Pr\big{(}A_{i}(t)=a_{i}\big{)}=\frac{e^{\beta U_{i}\big{(}a_{i},a_{-i}(t-1) \big{)}}}{\sum_{a_{i}^{\prime}\in\mathcal{A}_{i}}e^{\beta U_{i}\big{(}a_{i}^{ \prime},a_{-i}(t-1)\big{)}}},\ a_{i}\in\mathcal{A}_{i}. \tag{40}\]
If the game is an _exact potential game_, then the Markov chain induced by LLL has a unique stationary distribution \(\mu:\{0,1\}^{N}\rightarrow[0,1]\) given by
\[\mu(a\mid\beta)=\frac{e^{\beta\Phi(a)}}{\sum_{a^{\prime}\in\{0,1\}^{N}}e^{ \beta\Phi(a^{\prime})}}, \tag{41}\]
where \(\Phi:\{0,1\}^{N}\rightarrow\mathbb{R}\) is the potential function.
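For concreteness, a single LLL update can be sketched as follows; this is our own illustration (the variable names and the small cycle-graph instance are ours), and it uses the fact that in an exact potential game the payoff difference entering Eq. (40) equals the corresponding difference of the potential \(\hat{\Phi}\) given in Eq. (46) below.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_hat(a, A, K, theta, N):
    return -(K * theta / N) * a.sum() + (a @ A @ a) / (2 * K)

def lll_step(a, A, K, theta, N, beta):
    i = rng.integers(N)                 # one agent chosen uniformly at random
    vals = []
    for ai in (0, 1):                   # potential of each candidate action;
        a[i] = ai                       # equals the payoff up to a constant
        vals.append(phi_hat(a, A, K, theta, N))
    p_one = 1.0 / (1.0 + np.exp(-beta * (vals[1] - vals[0])))  # Eq. (40)
    a[i] = int(rng.random() < p_one)
    return a

# Short run on the cycle graph C_6 with theta > N/(2K), so a* = 0_N.
N, K, theta, beta = 6, 2, 2.0, 5.0
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1
a, hits = rng.integers(0, 2, size=N), 0
for _ in range(20000):
    a = lll_step(a, A, K, theta, N, beta)
    hits += int(a.sum() == 0)
print(hits / 20000)   # empirical time share at a*, close to mu(0_N | beta)
```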
We are interested in the minimum value of \(\beta\) such that all agents coordinate at the optimal action profile of the game with high probability. Consider the following definition, given a relatively small \(\delta\in(0,1)\):
\[\beta^{\min}(\delta)\stackrel{{\rm def}}{{=}}\min\Big{\{}\beta \mid\mu(a^{\star}\mid\beta)\geq 1-\delta\Big{\}}, \tag{42}\]
where \(a^{\star}\) is the optimal action profile, i.e., the unique maximizer of \(\Phi\).
Before discussing the interplay between the rationality parameter \(\beta\) in LLL and the connectivity \(K\), we need to address the fact that there may exist multiple regular graphs with the same connectivity degree. These graphs differ from each other up to a similarity transformation on the columns (equivalently the rows, since the adjacency matrix is symmetric) of their adjacency matrices. Applying such a similarity transformation is equivalent to re-assigning indices to agents. Although the potential value of a given profile can differ on two isomorphic graphs \(\mathcal{G}^{1}\) and \(\mathcal{G}^{2}\) with the same \(K\) and \(N\), there exists a unique \(\tilde{a}^{\prime}\in\{0,1\}^{N}\) such that \(\Phi_{\mathcal{G}^{1}}(a^{\prime})=\Phi_{\mathcal{G}^{2}}(\tilde{a}^{\prime})\); such an \(\tilde{a}^{\prime}\) can be derived by applying the same similarity transformation to \(a^{\prime}\). Therefore, when computing the exact value of \(\mu(a^{\prime}\mid\beta)\) for a specific \(a^{\prime}\neq a^{\star}\), we should specify and fix a graph \(\mathcal{G}\). Nevertheless, \(\Phi(a^{\star})\) stays the same for any isomorphic graphs with the same degree, and so does \(\mu(a^{\star}\mid\beta)\). This is further discussed in the proof of our next theorem.
The following lemma from graph theory is useful when characterizing the existence of a regular graph.
**Lemma 1**: _A regular graph \(\mathcal{G}_{K}\) with \(N\) vertices of degree \(K\) exists if and only if \(K\in\{2,\ldots,N-1\}\) and \(NK\) is even._
**Corollary 1**: _Let \(\mathbf{A}_{K}\) be the adjacency matrix of a regular graph \(\mathcal{G}_{K}\) of degree \(K\). The following statements on \(\mathcal{G}_{K+1}\) hold:_
1. _Suppose_ \(N\) _is even. Then_ \(\mathcal{G}_{K+1}\) _always exists. Moreover, the adjacency matrix of the regular graph_ \(\mathcal{G}_{K+1}\) _can be formulated in the following sense: there exist two permutation matrices_ \(\mathbf{P}_{1}\) _and_ \(\mathbf{P}_{2}\) _such that,_ \(\mathbf{A}_{K+1}=\mathbf{P}_{1}(\mathbf{A}_{K}+\mathbf{P}_{2})\)_._
2. _Suppose_ \(N\) _is odd. Then_ \(\mathcal{G}_{K+1}\) _does not exist. However,_ \(\mathcal{G}_{K+2}\) _exists, and its adjacency matrix is given by_ \(\mathbf{A}_{K+2}=\mathbf{P}_{3}(\mathbf{A}_{K}+\mathbf{P}_{4}+\mathbf{P}_{5})\) _for some permutation matrices_ \(\mathbf{P}_{3}\)_,_ \(\mathbf{P}_{4}\) _and_ \(\mathbf{P}_{5}\)_._
The next step is to evaluate \(a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\) for each \(a^{\prime}\in\{0,1\}^{N}\). Consider the following bound for the quadratic form \(a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\).
**Lemma 2**: _Let \(\mathbf{A}\) be the adjacency matrix of a regular graph of degree \(K\). Let \(a^{\prime}\in\mathbb{R}^{N}\) be a binary vector with \(\|a^{\prime}\|_{1}=m\). The following inequality holds:_
\[a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\leq mK,\ \ a^{\prime}\in\{0,1\}^{N}. \tag{43}\]
Let \(\|\mathbf{A}\|_{2}\) denote the \(\ell^{2}\) induced operator norm of \(\mathbf{A}\), that is, \(\|\mathbf{A}\|_{2}=\sup_{x\neq\mathbb{0}_{N}}\frac{\|\mathbf{A}x\|_{2}}{\|x\|_{2}}\). \(\|\mathbf{A}\|_{2}\) is known to be the largest singular value of \(\mathbf{A}\), in our case \(K\). As any operator norm is consistent with the vector norm inducing it, this gives us, for all \(a^{\prime}\in\{0,1\}^{N}\),
\[\|\mathbf{A}a^{\prime}\|_{2}\leq\|\mathbf{A}\|_{2}\|a^{\prime}\|_{2} \tag{44}\]
Using Hölder's inequality on \(a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\), we obtain:
\[a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\leq\|a^{\prime}\|_{2}\|\mathbf{A}a^{ \prime}\|_{2}\leq\|a^{\prime}\|_{2}\|\mathbf{A}\|_{2}\|a^{\prime}\|_{2}=mK. \tag{45}\]
An interesting interpretation of Lemma 2 is that \(a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\) can be seen as an inner product of \(a^{\prime}\) and \(\mathbf{A}a^{\prime}\). This value is obtained by sampling the sequence \(\{(\mathbf{A}a^{\prime})[n]\}_{n=1}^{N}\); the sampling rule is defined by the action profile \(a^{\prime}\), where we keep the \(i\)-th value if \(a^{\prime}_{i}=1\) and discard it if \(a^{\prime}_{i}=0\). As suggested by the largest eigenvalue of \(\mathbf{A}\), the magnitude of the largest component in the sequence \(\{(\mathbf{A}a^{\prime})[n]\}_{n=1}^{N}\) cannot exceed \(K\), and since there are at most \(m\) such samples, \(mK\) becomes a natural upper bound for \(a^{\prime\mathsf{T}}\mathbf{A}a^{\prime}\). In fact, the insight of this interpretation is captured by the permutation invariance of \(\ell^{p}\) norms and the definition of \(\|\mathbf{A}\|_{2}\).
From this point on, for simplicity, we ignore the constant term in our potential function \(\Phi\), as potential functions of the same coordination game differ from each other by a constant. We use the following expression instead:
\[\hat{\Phi}(a)=-\frac{K\theta}{N}\sum_{i\in[N]}a_{i}+\frac{1}{2K}a^{\mathsf{T}} \mathbf{A}a. \tag{46}\]
**Theorem 3**: _Consider a potential game in which the agents use LLL with rationality parameter \(\beta\in\mathbb{R}_{\geq 0}\). Suppose all agents are distributed on a regular graph of connectivity \(K\). The probability that all agents learn to play the optimal action profile is defined as_
\[g(\beta,K)\stackrel{{\mathrm{def}}}{{=}}\mu_{K}(a^{\star}\mid \beta), \tag{47}\]
_where \(a^{\star}\) is a maximizer of \(\Phi\). The function \(g\) is strictly increasing in \(\beta\). Moreover, \(g\) is monotone in \(K\) for \(K>N/(2\theta)\):_
1. \(g(\beta,K)<g(\beta,K+1)\) _for even_ \(N\)_;_
2. \(g(\beta,K)<g(\beta,K+2)\) _for odd_ \(N\)_._
To establish monotonicity with respect to \(\beta\), we compute the derivative of \(g\) with respect to \(\beta\) and obtain the following equivalence: \(\frac{\partial g}{\partial\beta}>0\) if and only if
\[\hat{\Phi}(a^{\star})e^{\beta\hat{\Phi}(a^{\star})}\sum_{a^{\prime}\in\{0,1\}^{N}}e^{\beta\hat{\Phi}(a^{\prime})}>e^{\beta\hat{\Phi}(a^{\star})}\sum_{a^{\prime}\in\{0,1\}^{N}}\hat{\Phi}(a^{\prime})e^{\beta\hat{\Phi}(a^{\prime})}, \tag{48}\]
which reduces to \(\sum_{a^{\prime}\in\{0,1\}^{N}}\big{(}\hat{\Phi}(a^{\star})-\hat{\Phi}(a^{\prime})\big{)}e^{\beta\hat{\Phi}(a^{\prime})}>0\); this always holds, since \(a^{\star}\) is the unique maximizer of \(\hat{\Phi}\). The monotonicity in \(K\) relies on Corollary 1, which relates the adjacency matrices of regular graphs of consecutive admissible degrees. Lemma 1 is also the reason \(K\in\{2,\ldots,N-1\}\) has a restricted range of values, so that \(K\) must stop growing before it hits the monotonicity threshold \(\frac{N}{2\theta}\). For \(K\leq\left\lfloor\frac{N}{2\theta}\right\rfloor-1\), numerical results suggest that the monotonicity is reversed. However, we conjecture that LLL would still show increasing behavior with respect to \(K\) in the sense of expected total reward, that is, \(\mathbb{E}^{\mu_{K}}[\hat{\Phi}]\) is an increasing function of \(K\), with \(\hat{\Phi}(a)\) regarded as a random variable defined on \(\{0,1\}^{N}\) with distribution \(\mu_{K}\). We leave this issue to be addressed in future work.
Recall the definition of \(\beta^{\min}(\delta)\) in Eq. (42). Suppose the failure probability \(\delta\) is fixed; then there exists a trade-off between the minimal rationality \(\beta^{\min}(\delta)\) and the connectivity \(K\), since \(\mu_{K}(a^{\star}\mid\beta)\) is increasing in \(\beta\) and, for sufficiently large \(K\), in \(K\). For \(\theta\notin(0,\frac{N}{2})\), a network of better connectivity allows for a smaller \(\beta^{\min}(\delta)\) while still guaranteeing that the Log-Linear Learning procedure meets the same probabilistic bound.
We proceed to estimate the value of \(\beta^{\min}_{K}(\delta)\). First notice that:
\[\mu_{K}(a^{\star}\mid\beta)=\frac{e^{\beta\left(-\frac{K\theta}{N}\mathbb{1}_{N}^{\intercal}a^{\star}+\frac{1}{2K}a^{\star\intercal}\mathbf{A}a^{\star}\right)}}{\sum_{a^{\prime}\in\{0,1\}^{N}}e^{\beta\left(-\frac{K\theta}{N}\mathbb{1}_{N}^{\intercal}a^{\prime}+\frac{1}{2K}a^{\prime\intercal}\mathbf{A}a^{\prime}\right)}}. \tag{57}\]
Using the inequality from Lemma 2 and \(\beta\in\mathbb{R}_{\geq 0}\), we have
\[e^{\beta\left(-\frac{K\theta}{N}\mathbb{1}_{N}^{\intercal}a^{\prime}+\frac{1}{2K}a^{\prime\intercal}\mathbf{A}a^{\prime}\right)}\leq e^{\beta\left(-\frac{K\theta}{N}m+\frac{m}{2}\right)}. \tag{58}\]
Counting the number of binary vectors containing \(m\) ones, we have
\[\sum_{a^{\prime}\in\{0,1\}^{N}}e^{\beta\left(-\frac{K\theta}{N}\mathbb{1}_{N}^{\intercal}a^{\prime}+\frac{1}{2K}a^{\prime\intercal}\mathbf{A}a^{\prime}\right)}\leq\sum_{m=0}^{N}\binom{N}{m}e^{\beta\left(-\frac{K\theta}{N}m+\frac{m}{2}\right)}. \tag{59}\]
Multiplying both the numerator and the denominator in Eq. (57) by \(e^{\beta K\theta}\) and using the Binomial Theorem, a lower bound on Eq. (57) can be obtained as
\[\mu_{K}(a^{\star}\mid\beta)\geq\frac{e^{\beta\hat{\Phi}(a^{\star})}e^{\beta K \theta}}{(e^{\frac{1}{2}\beta}+e^{\frac{\beta K\theta}{N}})^{N}}. \tag{60}\]
Note that the right-hand side of Eq. (60) is also an increasing function of \(\beta\) for any \(\theta\neq\frac{N}{2K}\); this can be verified by taking its derivative with respect to \(\beta\). Moreover, this lower bound matches the true value of \(\mu_{K}(a^{\star}\mid\beta)\) at \(\beta=0\) and as \(\beta\rightarrow\infty\), which means the lower bound in Eq. (60) is asymptotically tight if \(\theta\neq\frac{N}{2K}\). We are now ready to state our final theorem, which quantifies the interplay between \(\beta\) and \(K\).
**Theorem 4**: _Suppose LLL is performed on a networked coordination game with task difficulty \(\theta\neq\frac{N}{2K}\). Then, if_
\[\beta\geq\bigg{|}\frac{K\theta}{N}-\frac{1}{2}\bigg{|}^{-1}\bigg{(}\frac{\log (1-\delta)}{N}-\log(1-e^{\frac{\log(1-\delta)}{N}})\bigg{)} \tag{61}\]
LLL is guaranteed to force all agents to play the optimal action profile with probability at least \(1-\delta\). In other words, \(\mu(a^{\star}\mid\beta)\geq 1-\delta\).
The proof follows immediately from setting the right-hand-side of Eq. (60) to \((1-\delta)\) and solving for \(\beta\).
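The bound is straightforward to evaluate. The following sketch (the parameter values are illustrative choices of ours) shows how the required rationality shrinks as connectivity grows, for \(N=20\) and \(\theta=3\), where the monotone regime is \(K\geq 4\):

```python
import numpy as np

def beta_min_bound(N, K, theta, delta):
    # Right-hand side of Eq. (61).
    u = np.log(1 - delta) / N
    return (u - np.log(1 - np.exp(u))) / abs(K * theta / N - 0.5)

for K in (4, 6, 10):
    print(K, beta_min_bound(N=20, K=K, theta=3.0, delta=0.05))
# The bound decreases in K: better connectivity tolerates lower rationality.
```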
## VII Numerical results
Figure 4 shows two examples of regular graphs with the same number of nodes but different connectivity degrees. Simulations are run on such networks, and \(\mu_{K}(a^{\star}\mid\beta)\) is plotted as a function of \(\beta\) for different values of \(K\).
We can clearly identify the monotonic behavior of \(\mu_{K}(a^{\star}\mid\beta)\) in both \(\beta\) and \(K\) for \(\theta=5.1\) and \(\theta=3\). However, for \(\theta=5.1\), the monotonicity holds for every possible regular graph of degree \(K\geq 2\), whereas for \(\theta=3\), the monotonicity only holds for \(K\geq 4\).
Fig. 4: Two examples of connected regular graphs with \(N=20\) vertices: \((a)\)\(K=2\) and \((b)\)\(K=10\).
## VIII Conclusions and future work
In this paper we discussed a coordination game played on a regular network of agents. We showed that the game is a potential game and gave a closed-form potential function. We proved that each maximizer of the potential function is one of the Nash equilibria of the original game. We employed LLL on the network. Through analysis of the steady-state distribution induced by LLL and numerical experiments, we showed that for regular networks with sufficiently large connectivity, there exists a trade-off between connectivity and rationality: better connectivity allows for a smaller rationality in LLL to achieve the same level of success probability. We also gave an upper bound on the minimal rationality, as a function of connectivity, that guarantees the effectiveness of LLL in the long run.
We leave the reversed monotonic behavior of the steady-state probability on poorly connected networks for future work. A hypothesis is presented in the paper: the potential value, regarded as a random variable defined on the action space, would show pure monotonicity in connectivity. A rigorous proof of this hypothesis is currently under investigation. We are also interested in finite-time analysis, specifically the expected first hitting time of LLL. Stochastic learning in networked coordination games is a rich topic with many open problems, which we are going to explore using analytic and algorithmic approaches.
|
2304.00177 | Hierarchical Vision Transformers for Cardiac Ejection Fraction
Estimation | The left ventricular ejection fraction is one of the most important metrics
of cardiac function. It is used by cardiologists to identify patients who are
eligible for life-prolonging therapies. However, the assessment of ejection
fraction suffers from inter-observer variability. To overcome this challenge,
we propose a deep learning approach, based on hierarchical vision Transformers,
to estimate the ejection fraction from echocardiogram videos. The proposed
method can estimate the ejection fraction without the need for left ventricle
segmentation first, making it more efficient than other methods. We evaluated our
method on the EchoNet-Dynamic dataset, resulting in 5.59, 7.59 and 0.59 for MAE, RMSE
and R2, respectively. These results are better compared to the state-of-the-art
method, Ultrasound Video Transformer (UVT). The source code is available on
https://github.com/lhfazry/UltraSwin. | Lhuqita Fazry, Asep Haryono, Nuzulul Khairu Nissa, Sunarno, Naufal Muhammad Hirzi, Muhammad Febrian Rachmadi, Wisnu Jatmiko | 2023-03-31T23:42:17Z | http://arxiv.org/abs/2304.00177v1 | # Hierarchical Vision Transformers for Cardiac Ejection Fraction Estimation
###### Abstract
The left ventricular ejection fraction is one of the most important metrics of cardiac function. It is used by cardiologists to identify patients who are eligible for life-prolonging therapies. However, the assessment of ejection fraction suffers from inter-observer variability. To overcome this challenge, we propose a deep learning approach, based on hierarchical vision Transformers, to estimate the ejection fraction from echocardiogram videos. The proposed method can estimate the ejection fraction without the need for left ventricle segmentation first, making it more efficient than other methods. We evaluated our method on the EchoNet-Dynamic dataset, obtaining \(5.59\), \(7.59\) and \(0.59\) for MAE, RMSE and R\({}^{2}\) respectively. These results are better than those of the state-of-the-art method, Ultrasound Video Transformer (UVT). The source code is available on [https://github.com/lhfazry/UltraSwin](https://github.com/lhfazry/UltraSwin).
Echocardiography, Cardiac Ejection Fraction, UltraSwin, Vision Transformers, EchoNet-Dynamic
## I Introduction
The cardiovascular system is the human circulatory system; it consists of various important organs whose main function is to circulate oxygen, nutrients, and hormones to all cells and tissues of the body [1]. One of the vital organs in the circulatory system is the heart, which pumps blood throughout the body and receives the blood flowing back. Based on data from the World Health Organization (WHO), cardiovascular disease is still a deadly disease worldwide. The death rate from this disease increases every year; in 2019 around 17.9 million people died from it, accounting for 32% of deaths worldwide [2]. Therefore, a fast and accurate method for cardiac diagnosis is needed so that disease can be handled quickly and properly.
A common method to diagnose cardiac disease is assessment through echocardiography video, an imaging technique for assessing cardiac function and structure [3]. The information taken from an echocardiography video can be used as the basis for initial screening to diagnose cardiac disease. It also helps in deciding further treatments.
One of the most important metrics that can be used to determine cardiac function is the Left Ventricular Ejection Fraction (LVEF), or Ejection Fraction (EF) for short [4]. EF measures how much blood volume is ejected out of the heart within one heartbeat. To calculate EF from an echocardiography video, a cardiologist needs to trace the left ventricle to estimate the End Systolic Volume (ESV) and End Diastolic Volume (EDV). ESV is the volume of the left ventricle after the ejection process, whereas EDV is the volume of the left ventricle before the ejection process. Having the values of ESV and EDV in hand, EF is then calculated using the following formula:
\[EF=\frac{EDV-ESV}{EDV}\times 100\% \tag{1}\]
EF can be used to classify the cardiac condition using a common threshold. An EF value of less than 50% can be considered an indication of cardiomyopathy [5]. Cardiomyopathies are a heterogeneous group of heart muscle diseases and an important cause of heart failure (HF) [6]. A heart with an EF of less than \(50\%\) is an indication of heart failure. Heart failure with preserved ejection fraction (HFpEF) has been defined as having signs and symptoms of heart failure with preserved EF and diastolic abnormalities [7].
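As a toy illustration of Eq. (1) and this 50% threshold (the volume values below are made up, not taken from any dataset):

```python
# Eq. (1): EF as the percentage of the end-diastolic volume that is ejected.
def ejection_fraction(edv, esv):
    return (edv - esv) / edv * 100.0

ef = ejection_fraction(edv=120.0, esv=70.0)  # hypothetical volumes in mL
print(f"EF = {ef:.1f}%:",
      "below 50%, possible cardiomyopathy" if ef < 50 else "preserved")
```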
However, manually tracing the left ventricle and calculating the EF is a very complicated task. It suffers from inter-observer variability, and the EF can vary from one heartbeat to another. Furthermore, the American Society of Echocardiography (ASE) and the European Association of Cardiovascular Imaging (EACVI) recommend observing up to 5 consecutive heartbeats, making the approach even more complicated [8]. A method that can estimate EF faster is therefore needed.
With the advance of deep learning, several methods have been developed to overcome this problem. Jahren et al. use a combination of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to predict the location of end-diastole from electrocardiogram (ECG) data [9]. Ouyang et al. use a combination of 3D convolution and atrous convolution to estimate EF [5]. As is clear from Eq. 1, ESV and EDV are needed to calculate EF. The above methods first segment the left ventricle in an echocardiography video; from the segmentation, they try to detect the ES and ED frames and then estimate the corresponding volumes.
Recently, Reynaud et al. proposed the Ultrasound Video Transformer (UVT) [3] to estimate EF from echocardiography videos. UVT uses the Transformer, a popular model in Natural Language Processing (NLP), as a feature extractor. Before being processed by the Transformer, the input video is split frame by frame. Each frame is then encoded by a ResNet Auto-Encoder (ResNetAE) to reduce the dimension to a token length of \(1,024\). These low-dimensional features are then learned by the Transformer to produce feature maps, which are then processed by a regressor head to produce the EF estimation.
However, learning from such low-dimensional features is not optimal, because important features may be lost during the encoding process. In this paper, we propose a novel method to predict EF by directly processing the input video using hierarchical vision Transformers. Our method can also directly estimate the EF without the need to segment the ES and ED frames and calculate their respective volumes.
The focus of this research is to estimate the EF value, which can be used to diagnose cardiomyopathy (abnormalities in the heart muscle that can cause heart failure), assess eligibility for certain chemotherapies, and determine indications for medical devices [10]. The output of the regression task, which utilizes a deep learning model, is the EF value. The common threshold of an EF of less than 50% can be used to classify cardiomyopathy; this threshold will be used as a reference to determine the heart condition [5].
## II Related Work
Video data is any sequence of time-varying images, in which the picture information is digitized both spatially and temporally. Research on video data processing is an emerging field of computer vision (CV) [11].
Furthermore, video processing techniques have begun to be used in medical imaging research. One example is Ghorbani et al., who use a CNN based on the Inception-ResNet-v1 architecture [12]. Inception-ResNet-v1 performs well on the ImageNet benchmark dataset and is computationally efficient compared to other architectures [13]. This research proved that deep learning applied to echocardiography is able to identify local cardiac anatomy and structure, estimate metrics of cardiac function, and predict patient characteristics, such as gender, height, and weight, that are not easily observed by humans [12].
Other research [14] compares four CNN architectures for classifying 14 classes of echocardiographic views: single-frame classification (2D CNN), multi-frame classification (TD CNN), spatio-temporal convolution (3D CNN), and two-stream classification. The best-performing model was a "two-stream" network using both spatial and optical flow inputs, with a corresponding error rate of \(3.9\%\).
Ouyang et al. used spatio-temporal convolutions with residual connections to predict the EF for each cardiac cycle, and then generated frame-level semantic segmentations of the left ventricle using weak supervision from expert human tracings. These outputs are combined to create beat-to-beat predictions of the EF and to predict the presence of heart failure. This study uses the echocardiography video dataset EchoNet-Dynamic [10].
Other research [3] also performs the task of predicting EF values by utilizing a Transformer-based architecture capable of processing videos of arbitrary duration. The method uses a Transformer architecture based on a Residual Auto-Encoder (ResAE) network and a BERT model adapted for token classification.
Based on Reynaud et al. [3], it can be concluded that backbone architectures in computer vision (CV) have begun to shift to the Transformer architecture. The trend started with the introduction of ViT (Vision Transformers), which globally models non-overlapping spatial relationships in image patches using the standard Transformer encoder [15]. For this research, we use Video Swin Transformers [16], which follows the hierarchical structure of the original Swin Transformers [17] while extending the local attention computation from the spatial domain to the spatio-temporal domain. The adaptation is carried out in the 3D patch partition and by replacing the local window self-attention module with 3D shifted-window multi-head self-attention (MSA) and shifted-window multi-head self-attention (SW-MSA) in the Transformer blocks. Video Swin Transformers can perform video-recognition tasks and contains an inductive bias towards spatio-temporal locality.
## III Method
In this paper, we propose a novel method to estimate EF from a cardiac ultrasound video. Our method uses a deep learning model based on hierarchical vision Transformers, which we name UltraSwin. UltraSwin adopts the Transformer [15], a popular deep learning model in Natural Language Processing (NLP), and its derivative works in Computer Vision (CV) [18, 17].
### _Model Architecture_
The architecture of UltraSwin is described in Figure 1. The model receives a video as input, specifically an ultrasound video containing a short cardiac recording. The output of the model is the EF estimation for the heart in the ultrasound video.
UltraSwin has two main modules: the Transformers Encoder (TE) and the EF Regressor. The TE module acts as a feature extractor, while the EF Regressor acts as a regressor head. The TE module learns representations from the input video and outputs feature maps, which are then processed by the EF Regressor and transformed into a scalar value. This value is used as the EF estimation for the input ultrasound video.
Instead of treating each frame of the video as an input token as in UVT [3], UltraSwin uses 3D video patches as input tokens, following the work of Liu et al. [16].
### _Pre-processing_
In this research, we use ultrasound videos from the EchoNet-Dynamic dataset [10]. This dataset contains echocardiography videos of varying frame lengths, each containing at least one heartbeat. Although a video can have more than one heartbeat, the ES (End Systolic) and ED (End Diastolic) ground truths are given for only one heartbeat.
Each frame in the video has a spatial dimension of \(112\times 112\) pixels. The frame's width and height must satisfy \(2^{n}\), so it can be processed into patches. Each input for the Transformer must have the same length, so we cut the video to a fixed length of \(128\) frames. We choose \(128\) because it is the closest \(2^{n}\) value to \(112\). We select the ES and ED frames and all frames between them, then cut the video accordingly. The EchoNet-Dynamic dataset used in this research contains videos of varied length (total frames), frame rate and image quality [10]. Therefore, if the total number of frames in the cut video is more than \(128\), we subsample the frames between ES and ED. Otherwise, we repeat or mirror the frames between ES and ED and place them after ED to reach a total length of \(128\) frames. Given the sequence of frames \(F=[m_{ES},m_{b_{1}},\cdots,m_{b_{n}},m_{ED}]\), we repeat the frames between ES and ED to create the new sequence \(\hat{F}=[m_{ES},m_{b_{1}},\cdots,m_{b_{n}},m_{ED},m_{b_{1}},\cdots]\). We choose this technique based on the research by Reynaud et al., where the mirroring technique gives better results than random sampling for fitting the total number of frames to 128 [3]. After that, we pad the frames with blank pixels, so their dimensions become \(128\times 128\) pixels.
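A minimal sketch of this frame-length normalization, with our own function and variable names, might look as follows:

```python
import numpy as np

def normalize_clip(frames, target=128):
    # frames = [m_ES, m_b1, ..., m_bn, m_ED]; return exactly `target` frames.
    n = len(frames)
    if n >= target:
        idx = np.linspace(0, n - 1, target).astype(int)  # equally spaced subsample
        return [frames[i] for i in idx]
    out = list(frames)
    middle = frames[1:-1] if n > 2 else frames           # frames between ES and ED
    while len(out) < target:                             # mirror/repeat after ED
        out.extend(middle[: target - len(out)])
    return out
```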
We also tried augmenting the videos with standard augmentations such as horizontal flip, vertical flip and random rotation. Surprisingly, we found that these augmentations lead to worse performance. This result indicates that ultrasound video datasets are sensitive to augmentation operations.
### _Transformers Encoder_
This module contains four stages. Unlike Vision Transformers (ViT) [18], which has a fixed patch size across stages, UltraSwin uses a hierarchical architecture in the spatial dimensions, following Swin Transformers [17]. At every stage, the spatial resolution is downsampled to half of that in the previous stage. To make the model learn temporal information, UltraSwin follows Video Swin Transformers [16] and processes the video input in the shape of 3D patches.
TE module contains two main components.
#### Iii-C1 3D Patch Partition
Suppose an input video has dimensions \(T\times H\times W\times 3\), where \(T,H,W\) and \(3\) represent the number of frames, the frame height, the frame width and the number of channels, respectively. The video input is then partitioned into 3D patches of dimension \(2\times 4\times 4\times 3\). In the Transformer world, such a 3D patch is called a token. Each token contains an embedding feature of length \(96\). Actually, we could use a number other than \(96\), but a greater number can significantly
Fig. 1: Overall UltraSwin architecture. UltraSwin processes a cardiac ultrasound video and outputs an estimation of the ejection fraction for the video. UltraSwin consists of two main modules: the Transformers Encoder (TE) and the EF Regressor. The TE module acts as a feature extractor and the EF Regressor as a regressor head.
Fig. 2: Illustration of 3D tokens and the shifted-window mechanism. First, each frame is split into patches, and the patches are grouped into windows. In two consecutive attention layers, the window configuration is shifted. In this way, attention can happen across windows while keeping the computation cheap, because attention is only calculated within each window (not globally).
affect the computation cost. We use \(96\), following Video Swin Transformers [19], as it gives good performance. This process yields \(\frac{T}{2}\times\frac{H}{4}\times\frac{W}{4}\) tokens in total. The tokens are then flattened into sequences before being processed by the Transformer. Figure 2 illustrates the 3D tokens.
The features of each token are then transformed by a linear layer into an arbitrary dimension \(C\), so the dimension of the tokens is now \(\frac{T}{2}\times\frac{H}{4}\times\frac{W}{4}\times C\). \(C\) is a hyper-parameter, and an arbitrary number can be used for it.
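A minimal sketch of this 3D patch partition plus linear embedding is shown below; implementing it with a strided Conv3d is a common choice for Video Swin models, but the class name and this particular realization are our assumptions, not taken from the UltraSwin code.

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    # 2x4x4 tubelets projected to a 96-dimensional embedding (assumed layout).
    def __init__(self, patch=(2, 4, 4), in_ch=3, embed_dim=96):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):                       # x: (B, 3, T, H, W)
        x = self.proj(x)                        # (B, 96, T/2, H/4, W/4)
        return x.flatten(2).transpose(1, 2)     # (B, T/2 * H/4 * W/4, 96)

tokens = PatchEmbed3D()(torch.randn(1, 3, 128, 128, 128))
print(tokens.shape)                             # torch.Size([1, 65536, 96])
```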
#### Iii-C2 Block Swin Transformers
Transformers [15] and Vision Transformers (ViT) [18] use global self-attention (SA) and compute softmax scores between all pairs of tokens, making the computation and memory requirements grow quadratically with the token length. This approach is efficient enough for a single input image. Videos, on the other hand, consist of multiple image frames, so the approach is not suitable for video-related tasks such as video classification and video segmentation. UltraSwin uses local window self-attention, following Swin Transformers [17], which has proven more efficient than global self-attention for video-related tasks [19].
While efficient, local window self-attention lacks connections across windows, which can degrade the model's performance. To solve this issue, UltraSwin shifts the window partition in every two consecutive Swin Transformer blocks, as illustrated in Figure 2. This design has proven effective in image recognition tasks [17], mainly because it enables connections between non-overlapping windows and their neighbours.
Suppose a sequence of 3D tokens of size \(T^{\prime}\times H^{\prime}\times W^{\prime}\times 3\). In the first layer, these tokens are arranged into regular non-overlapping windows of size \(P\times M\times M\), resulting in \(\lceil\frac{T^{\prime}}{P}\rceil\times\lceil\frac{H^{\prime}}{M}\rceil\times\lceil\frac{W^{\prime}}{M}\rceil\) non-overlapping 3D windows in total. In the second layer, the configuration of every window is shifted along the temporal, height and width axes by \((\frac{P}{2},\frac{M}{2},\frac{M}{2})\).
The self-attention mechanism is applied multiple times in parallel; these parallel copies are called heads. In the multi-head scenario, the outputs of the individual self-attention heads are concatenated. In the first layer we have multi-head self-attention (MSA), and in the second layer shifted-window multi-head self-attention (SW-MSA). Formally, we write MSA as \([\text{SA}_{1},\text{SA}_{2},\cdots,\text{SA}_{n}]\) and SW-MSA as \([\text{SW-SA}_{1},\text{SW-SA}_{2},\cdots,\text{SW-SA}_{n}]\), where SA\({}_{i}\) and SW-SA\({}_{i}\) refer to the \(i\)-th self-attention head and the \(i\)-th shifted-window self-attention head, respectively. SA itself is formulated as follows:
\[\text{SA}(Q,K,V)=\text{SoftMax}(\frac{QK^{T}}{\sqrt{d}}+B)V \tag{2}\]
where \(K,V,Q\in\mathbb{R}^{PM^{2}\times d}\) are matrices for _key_, _value_ and _query_ respectively, while \(d\) is _query_ and _key_ dimension, \(PM^{2}\) is the number of tokens in 3D window, and \(B\in\mathbb{R}^{P^{2}\times M^{2}\times M^{2}}\) is matrix of relative position bias.
The self-attention blocks are followed by feed-forward networks, namely two-layer MLPs with a GELU [20] non-linearity in between. Layer Normalization (LN) [21] is applied before the self-attention module and before the MLP, and a residual connection [22] is applied after the self-attention and after the MLP. Two consecutive Swin Transformer blocks, at layers \(l\) and \(l+1\), can be formulated as follows:
\[\hat{z}^{l} =\text{MSA}(\text{LN}(z^{l-1}))+z^{l-1}\] \[z^{l} =\text{MLP}(\text{GELU}(\text{LN}(\hat{z}^{l})))+\hat{z}^{l}\] \[\hat{z}^{l+1} =\text{SW-MSA}(\text{LN}(z^{l}))+z^{l}\] \[z^{l+1} =\text{MLP}(\text{GELU}(\text{LN}(\hat{z}^{l+1})))+\hat{z}^{l+1} \tag{3}\]
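A skeleton of one such block is sketched below, with the window-based (SW-)MSA abstracted behind an attn callable, since the full 3D window partitioning and shifting logic is beyond this sketch; as in standard Swin implementations, the GELU sits inside the two-layer MLP.

```python
import torch.nn as nn

class SwinBlock(nn.Module):
    # One block of Eq. (3): pre-LN attention and MLP, each with a residual.
    def __init__(self, dim, attn, mlp_ratio=4):
        super().__init__()
        self.norm1, self.attn = nn.LayerNorm(dim), attn
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim),
                                 nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, z):
        z = self.attn(self.norm1(z)) + z    # (SW-)MSA + residual connection
        return self.mlp(self.norm2(z)) + z  # MLP + residual connection
```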
### _EF Regressor_
The EF Regressor takes the output of the TE module as input. The input is a feature map of dimension \(\frac{T}{2}\times\frac{H}{32}\times\frac{W}{32}\times 8C\). The temporal axis is first reduced from the map, resulting in the dimension \(\frac{H}{32}\times\frac{W}{32}\times 8C\). A linear layer is then applied to reduce the last axis of the feature map from \(8C\) to \(4C\). Layer Normalization (LN) is then applied, followed by a linear layer that reduces the feature axis to \(1\) dimension. Spatial reduction is then applied to the map, resulting in a scalar. This scalar value is used as the EF estimation.
### _Model Variants_
We propose two variants of UltraSwin: UltraSwin-base and UltraSwin-small. Table I summarizes the two variants. The number-of-heads and layer-depth values in Table I refer to the configurations at stages \(1,2,3\) and \(4\), respectively. The total number of parameters of UltraSwin-small is almost half that of UltraSwin-base.
### _Loss Function_
Both variants are trained to minimize the MSE (Mean Squared Error). We use MSE because it is commonly used in regression tasks and gives the best performance. MSE is defined as follows:
\[L(y,\hat{y})=\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2} \tag{4}\]
where \(y_{i}\) and \(\hat{y}_{i}\) refer to the EF ground truth and the EF prediction from the model, respectively.
## IV Experiments
### _Dataset_
The dataset used in this research is EchoNet-Dynamic [10], an open dataset from the Stanford Artificial Intelligence in Medicine and Imaging (AIMI) Center, obtained from [https://stanfordaimi.azurewebsites.net](https://stanfordaimi.azurewebsites.net). The dataset
contains videos of heart movement and chamber volume from echocardiography (cardiac ultrasound). There are \(10,030\) videos in total, in AVI format, consisting of \(7,465\) training, \(1,288\) validation and \(1,277\) test videos.
The videos vary in duration, with the number of frames ranging from \(28\) to \(1002\). The spatial dimension of each frame is \(112\times 112\) pixels, and each video has a frame rate of \(50\) frames per second (FPS). The videos come with EF ground truths whose values range from \(6.9\) to \(96.96\).
### _Implementation Details_
The model architecture was implemented using the Python 3.8 programming language and the PyTorch 1.11 framework. The PyTorch Lightning 1.6.4 library was used to simplify the training process, and the TensorBoard library to record the evaluation metrics. The model was trained on a single NVIDIA Tesla T4 GPU. To save memory, \(16\)-bit precision is used for gradient calculations during training, with batch_accumulation\(=2\) to speed up the training process.
In the UltraSwin-base model, the batch_size parameter used is \(2\), while in UltraSwin-small it is \(4\). To speed up model convergence during training, we initialize the TE module weights using a pre-trained Swin Transformer model trained on the ImageNet-22k dataset [23].
### _Training Details_
The UltraSwin models were trained without freezing the TE module to avoid the problem of different domains in transfer learning. For the UltraSwin-base, the TE module weights are initialized using pretrained swin_base_patch4_window7_224_22k, while for the UltraSwin-small using the pretrained swin_small_patch4_window7_224_22k. Both pretrained models can be downloaded at the [https://github.com/microsoft/Swin-Transformer](https://github.com/microsoft/Swin-Transformer) page.
During the training process, AdamW [24] optimization was used with an initial learning rate of \(10^{-4}\) and a weight decay of \(10^{-4}\). Both models were trained for \(20\) epochs. At each epoch, the learning rate was reduced by \(0.15\) from the learning rate of the previous epoch.
On the UltraSwin-base model, the training process takes approximately \(30\) minutes for one epoch, while on the UltraSwin-small it takes approximately \(15\) minutes for one epoch. When making predictions using the trained model, the same configuration is used as the configuration in the training. However, because the inference process only performs forward propagation without the need to calculate the gradient (back propagation), the batch_size parameter can be increased to \(8\).
## V Result and Discussion
Here we show the results of the experiments for UltraSwin-base and UltraSwin-small. We then compare the results of our two model variants with the state-of-the-art method, Ultrasound Video Transformer (UVT) [3].
Table II summarizes the results of our experiments and compares them with the results of the UVT model from Reynaud et al. [3]. We use three metrics to evaluate the models: MAE (Mean Absolute Error), RMSE (Root Mean Squared Error) and \(R^{2}\) (Coefficient of Determination). Smaller values of MAE and RMSE mean better performance, whereas a higher value of \(R^{2}\) means better performance.
It can be seen that UltraSwin-small, with a smaller number of parameters than UVT, is able to produce smaller values for MAE and RMSE and a higher value for \(R^{2}\). This shows that UltraSwin-small is superior to UVT. Furthermore, UltraSwin-base is superior to UltraSwin-small. Both variants of UltraSwin outperform UVT on all three evaluation metrics.
During training, we log the training and validation losses at every epoch. Figure 3 shows the training and validation losses for the UltraSwin-small model, where the blue and orange lines represent the training and validation losses, respectively. From the graph, it can be seen that both losses decrease as the number of epochs increases. The validation loss fluctuates in the early epochs because the model is still in the early learning phase; after 3 epochs, the reduction of the validation loss is quite stable.
Similar to UltraSwin-small, both the training and validation losses for UltraSwin-base decrease as the number of epochs increases. It can be seen from Figure 4 that the loss reduction is quite stable for both the training and validation losses.
## VI Conclusion
In this paper, we propose UltraSwin, a novel method to estimate EF from echocardiogram videos. The method uses Swin Transformers, hierarchical vision Transformers, to extract spatio-temporal features. Furthermore, it gives better EF
Fig. 3: Training and validation loss for UltraSwin-small. Both the training and validation losses decrease as the number of epochs increases. In the early epochs the validation loss fluctuates, because the model is still in the early learning phase; after 3 epochs the validation loss is quite stable.
estimation than UVT. Future research could improve UltraSwin's performance further, for example by aggregating the extracted features from every stage before they are processed by the EF Regressor, or by combining 3D tokens with another vision Transformer backbone such as the Pyramid Vision Transformer (PVTv2) [25].
## Acknowledgment
This work is supported by the Research Laboratory of the Faculty of Computer Science, Universitas Indonesia. We thank them for providing laboratory facilities and for supporting this research.
|
2309.06511 | DF-TransFusion: Multimodal Deepfake Detection via Lip-Audio
Cross-Attention and Facial Self-Attention | With the rise in manipulated media, deepfake detection has become an
imperative task for preserving the authenticity of digital content. In this
paper, we present a novel multi-modal audio-video framework designed to
concurrently process audio and video inputs for deepfake detection tasks. Our
model capitalizes on lip synchronization with input audio through a
cross-attention mechanism while extracting visual cues via a fine-tuned VGG-16
network. Subsequently, a transformer encoder network is employed to perform
facial self-attention. We conduct multiple ablation studies highlighting
different strengths of our approach. Our multi-modal methodology outperforms
state-of-the-art multi-modal deepfake detection techniques in terms of F-1 and
per-video AUC scores. | Aaditya Kharel, Manas Paranjape, Aniket Bera | 2023-09-12T18:37:05Z | http://arxiv.org/abs/2309.06511v1 | # _DF-TransFusion_: Multimodal Deepfake Detection via Lip-Audio Cross-Attention and Facial Self-Attention
###### Abstract
With the rise in manipulated media, deepfake detection has become an imperative task for preserving the authenticity of digital content. In this paper, we present a novel multi-modal audio-video framework designed to concurrently process audio and video inputs for deepfake detection tasks. Our model capitalizes on lip synchronization with input audio through a cross-attention mechanism while extracting visual cues via a fine-tuned VGG-16 network. Subsequently, a transformer encoder network is employed to perform facial self-attention. We conduct multiple ablation studies highlighting different strengths of our approach. Our multi-modal methodology outperforms state-of-the-art multi-modal deepfake detection techniques in terms of F-1 and per-video AUC scores.
## 1 Introduction
Deepfake refers to the application of deep learning techniques for generating manipulated digital media, such as video or audio, often with malicious intent for purposes like fraud, defamation, and the dissemination of disinformation or propaganda. The proliferation of deepfakes poses a significant threat to the authenticity and credibility of digital media, causing adverse effects on businesses, governments, and political leaders as it becomes increasingly difficult for humans to discern whether a piece of audio or video has been manipulated.
Although multimedia forgery is not a novel phenomenon, advancements in deep learning and the development of generative adversarial networks (GAN) Goodfellow et al. (2020); Creswell et al. (2018); Wang et al. (2017) have revolutionized the field of computer vision and deepfake generation. State-of-the-art techniques, such as FaceSwap, FakeApp, FaceShifter Li et al. (2020), Face2Face Thies et al. (2018), DeepFaceLab Perov et al. (2020), and Neural Textures Thies et al. (2019), have been employed to create deepfakes by swapping faces in original videos with target images. The widespread availability of deepfake generation methods necessitates the development of sophisticated deep learning techniques to combat the issue.
Several convolutional neural network (CNN) architectures have been proposed for deepfake detection Zhou et al. (2017); Afchar et al. (2018); Li and Lyu (2018); Rossler et al. (2019); Nguyen et al. (2019); Yang et al. (2018). Recurrent neural networks have also been utilized to capture time dependencies in deepfake detection tasks Guera and Delp (2018); Masi et al. (2020). More recently, transformer-based architectures with multi-head attention mechanisms Vaswani et al. (2017); Khormali and Yuan (2021); Zhao et al. (2021) have demonstrated promising results compared to CNN-based methods. Some approaches even combine CNN and attention mechanisms for deepfake detection Coccomini et al. (2022). However, most deepfake detection techniques focus on either video or audio modalities, with only a few addressing both Mittal et al. (2020). Given the rise in accessibility of deepfake generation, it is vital to develop deepfake detection methods that consider both audio and video modalities.
The majority of deepfake detection models focus on either audio-only or video-only detection methods, primarily due to the scarcity of datasets featuring both audio and video deepfakes. Datasets like UADFV Yang et al. (2019), FaceForensics++ Rossler et al. (2019), CelebDF Li et al. (2019), and Deeper Forensics 1.0 Jiang et al. (2020) contain video-only deepfakes. In contrast, DFDC Dolhansky
Figure 1: We propose a multi-modal deepfake detection technique that uses self-attention to detect deepfake artifacts and cross-attention to identify discrepancies between lip movements and audio signals.
et al., 2020), DF-TIMIT (Korshunov and Marcel, 2018), and FakeAVCeleb (Khalid et al., 2021) feature deepfakes in both audio and video modalities. Ignoring the audio modality can be problematic, as audio provides crucial information for multimodal deepfake detection tasks (Mittal et al., 2020). Table 1 summarizes deepfake datasets based on the modality in which the deepfake occurs.
In this paper, we address the limitations of existing deepfake detection approaches by proposing a multi-modal method that effectively leverages both audio and video information for deepfake detection. We introduce a novel pipeline that employs a fine-tuned VGG-16 feature extractor and transformer encoders to process and analyze input audio and video data. Our approach utilizes self-attention mechanisms to detect deepfake artifacts in facial regions, and cross-attention mechanisms to identify discrepancies between lip movements and audio. We evaluate the performance of our method through rigorous ablation studies and demonstrate that our multi-modal approach outperforms state-of-the-art methods in deepfake detection, even when compared to unimodal approaches with significantly more trainable parameters. To summarize, we propose the following:
* We introduce a novel multi-modal deepfake detection technique that capitalizes on both audio and video modalities by employing fine-tuned VGG-16 feature extractor and transformer encoders.
* Our method uses self-attention to detect deepfake artifacts in facial regions, and cross-attention to identify discrepancies between lip movements and audio signals.
* Our proposed approach surpasses previous state-of-the-art multi-modal deepfake detection strategies.
* We conduct comprehensive ablation studies to demonstrate the efficacy of our approach.
In Section 2, we perform a thorough literature review on media forensics as well as unimodal and multimodal deepfake detection methods. In Section 3, we discuss our model architecture and data pre-processing in detail. In Section 4, we show our experimental results on multimodal baseline methods as well as the results of our ablation study. In Section 5, we provide concluding remarks. Finally, in Section 6, we provide directions for future work in deepfake detection tasks.
## 2 Related Work
In this section, we provide an overview of previous work in the domain. First, we discuss multimedia forensics literature. Second, we review unimodal deepfake detection methods. Third, we examine multimodal approaches for deepfake detection.
### Media Forensics
Multimedia forensics tackles the verification and authentication of digital media sources to detect forgeries and malicious content (Battiato, Giudice, and Paratore, 2016). Traditional computer-vision methods (Battiato, Giudice, and Paratore, 2016; Wen, Qi, and Lyu, 2018) have been adequate for dealing with multimedia forgery that was not generated using deep learning methods. However, almost all recently created deepfakes use deep learning to tamper with the media; for example, (Chesney and Citron, 2018) discusses how artificial intelligence is leveraged to generate highly realistic and persuasive fake media. Identifying false media that has been manipulated with deep learning proves challenging for classical computer vision techniques (Verdoliva and Bestagini, 2019). As a result, there is a growing interest in creating deep learning-based solutions for multimedia forensics (Bahirat et al., 2019; Mayer, Bayar, and Stamm, 2018; Chen et al., 2018). Prior studies have also investigated the role of affect in deception detection (Randhavane et al., 2019).
### Unimodal DeepFake Detection Methods
Most previous research in deepfake detection focuses on dissecting videos into individual frames and analyzing the visual discrepancies within them. For example, (Li and Lyu, 2018) suggest a Deep Neural Network (DNN) for detecting fake videos by examining artifacts present during the face distortion phase of generation methods. In a similar vein, (Yang, Li, and Lyu, 2019) explore inconsistencies in head positions in synthesized videos, while (Matern, Riess, and Stamminger, 2019) identifies anomalies in the eyes, teeth, and facial outlines of generated faces. Prior work has also experimented with a variety of network architectures, such as (Nguyen et al., 2019) investigating capsule structures, (Rossler et al., 2019) utilizing XceptionNet, and (Zhou et al., 2017) adopting a two-stream Convolutional Neural Network (CNN) to attain state-of-the-art performance in general-purpose image forgery detection. Additionally, researchers have noticed and taken advantage of the fact that temporal coherence is not effectively maintained during deepfake synthesis. For instance, (Sabir et al., 2019) employs spatio-temporal characteristics of video sequences to detect deepfakes, and (Guera and Delp, 2018) highlights the presence of intra-frame consistencies in deepfake videos, leading them
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Dataset** & **\# Videos** & **Visual** & **Audio** \\ \hline UADFV (Yang, Li, and Lyu, 2019) & 98 & ✓ & \(\times\) \\ \hline DF-TIMIT (Korshunov and Marcel, 2018) & 320 & ✓ & ✓ \\ \hline Face Forensics++ (Rossler et al., 2019) & 5,000 & ✓ & \(\times\) \\ \hline CelebDF (Li et al., 2019) & 1,203 & ✓ & \(\times\) \\ \hline DFDC (Dolhansky et al., 2020) & 119,146 & ✓ & ✓ \\ \hline Deeper Forensics 1.0 (Jiang et al., 2020) & 60,000 & ✓ & – \\ \hline FakeAVCeleb (Khalid et al., 2021) & 21,566 & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Deepfake datasets and the modality in which the deepfake occurs. Very few datasets contain deepfakes in both the audio and video modalities, making multimodal experimentation limited.
to implement a CNN combined with a Long Short Term Memory (LSTM) for deepfake video identification.
There are multiple audio deepfake detection techniques as well. Many prior methods evaluated on the ASVspoof2021 dataset have used model ensembling. For example, [15] propose two feature extraction methods, namely Mel-Frequency Cepstral Coefficients (MFCC) and Constant Q Cepstral Coefficients (CQCC), and evaluate the prediction accuracy using an SVM and a Gaussian Mixture Model (GMM). Similarly, [14] propose a two-path spoofing detection method in which the authors use Linear Frequency Cepstral Coefficients (LFCC) and CQCC to extract the features. One of the paths contains a real-GMM and the other a fake-GMM; the output of each GMM, which contains the Gaussian probability feature, is then fed into its respective identical Res [13] block. The outputs of the two convolutional networks are then concatenated and fed into a fully connected layer for the output. As part of an automated end-to-end pipeline, [26] proposed Wav2Vec for feature extraction and representation of unlabelled speech data.
### Multimodal DeepFake Detection Methods
Although the unimodal deepfake detection approaches mentioned in Section 2.2 have predominantly concentrated on an individual's facial characteristics, the inclusion of multiple modalities within the same video has received limited attention. [12] introduce FakeTalkerDetect, a Siamese-based network devised for identifying fake videos produced by neural talking-head models. The use of lip-sync mechanisms for deepfake detection is not a novel idea; for instance, Haliassos [15] have already used lip synchronization with ResNet-18 as a feature extractor and a temporal convolutional network (MS-TCN) on DFDC. However, the reported AUC is inferior to our model's AUC score. Various other works on lip-sync exist, though not necessarily for the downstream task of deepfake detection; for instance, [10] use the temporal alignment of lip-to-speech. [14] leverage face-warping artifacts for deepfake detection, while [12] uses spatio-temporal inconsistencies for deepfake detection tasks. Similarly, [15, 16] use the disharmony between the audio and video modalities to detect lip-sync inconsistencies, and [17] suggests that deepfake detection can benefit greatly from synchronizing the audio modality with the video modality. [10] focuses entirely on lip-reading mechanisms to detect deepfakes; however, if the deepfake does not occur in the mouth region, the model fails to detect it. Our model addresses this issue by performing not only lip-audio cross-attention but also facial self-attention, encapsulating eclectic deepfake scenarios.
## 3 Our Approach
Our proposed method employs a fine-tuned VGG-16 feature extractor in conjunction with transformer encoder modules to address deepfake detection tasks. We begin by extracting the facial region from the video using the MTCNN network. Simultaneously, we feed the raw audio and the extracted lip region into the audio transformer encoder module. To process the data, facial self-attention is applied for the video-only pipeline, while cross-attention is employed for lip-audio synchronization analysis.
The outputs from both the audio and video transformer encoder modules are subsequently passed through a multi-layer perceptron (MLP) head, which generates the final classification label, as illustrated in Figure 2. In the following sections, we provide a comprehensive explanation of our approach, encompassing the pre-processing steps involved. Table 2 lists the variables and their dimensions used in our approach.
### Video Preprocessing
Our method for detecting deepfakes in videos primarily focuses on the facial region, as this is where most deepfake
Figure 2: Our model architecture shows input video frames, audio and lip region along with fine-tuned VGG-16 feature extractor, transformer encoders, and multi-layer perceptron heads.
manipulations occur. We first use the MTCNN network to extract facial bounding boxes throughout the video. From these bounding boxes, we crop the facial regions and isolate them for further analysis.
### Video Frame Rates
**Video Self-Attention.** The DFDC dataset's video frame rate is approximately \(30\) frames per second, resulting in \(300\) frames per video. However, computing attention across all \(300\) frames for each video is computationally expensive. Since most deepfakes in the dataset are present throughout the entire video, we can sample equally spaced frames for the video section of the transformer. Therefore, we use only \(30\) equally spaced frames in the video-only pipeline.
**Audio Cross-Attention.** To perform lip-syncing, we require all \(300\) frames to correlate with the audio. We extract lip regions from the faces and use all \(300\) frames for cross-attention to perform lip-syncing. These images are also converted to low-resolution, black-and-white images, as color is irrelevant for lip-syncing purposes.
### Image Preprocessing
**Video Self-Attention.** After extracting 30 frames from the video, we preprocess these images by resizing them to a final size of \(256\times 256\times 3\), where 3 is the number of channels \((R,G,B)\). We feed each of these images individually to our fine-tuned VGG-16 feature extractor.
**Audio Cross-Attention.** During the extraction of the 300 frames from the video, we crop the frames to get only the lip regions. We crop these images as well, to a final image size of \(35\times 140\). We feed these images as one of the inputs to the audio cross-attention layer of the audio transformer encoder block.
### Audio Preprocessing
The input audio for all videos in the dataset is recorded at a standard rate of \(44.1\) kHz. We first convert this audio from stereo to mono to reduce the input size by half without affecting the lip-sync functionality. We crop the audio to the first \(441,000\) values, as all videos in the dataset are almost exactly 10 seconds long. This does not result in significant data loss, as the average number of values in the input tensor is \(441,300\). This audio is then fed into the audio cross-attention layer of the audio transformer encoder block.
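A minimal sketch of this audio preprocessing (the file path is a placeholder):

```python
import torchaudio

wav, sr = torchaudio.load("video_audio.wav")  # wav: (channels, samples), sr ~ 44100
mono = wav.mean(dim=0)                        # stereo -> mono halves the input size
mono = mono[:441_000]                         # first 10 s at 44.1 kHz
```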
### Cuboid Embedding
We use tubelet embedding, as suggested in [1], to capture the spatio-temporal dimensions of the input video frames. A tubelet is a 3-D volume that spans the height (\(h\)), width (\(w\)), and depth (\(t\)) of the frames. The total number of tokens extracted in each dimension is given as:
\[\text{Number of tokens in time}(n_{t})=\left\lfloor\frac{T}{t}\right\rfloor\] \[\text{Number of tokens in frame height}(n_{h})=\left\lfloor\frac{H}{h}\right\rfloor\] \[\text{Number of tokens in frame width}(n_{w})=\left\lfloor\frac{W}{w}\right\rfloor\]
where \(H\), \(W\), and \(T\) correspond to the frame height, the frame width, and the depth (number of frames in the temporal dimension), respectively. The tubelet embedding is shown in Figure 3. The tubelet embedding method captures both spatial and temporal relationships between frames simultaneously, unlike the uniform frame sampling approach, where each 2D frame is tokenized independently and temporal embedding information must be provided separately to the transformer encoder. Using tubelet embedding also simplifies computing cross-attention between the lip region and the audio: no explicit temporal synchronization is required, since the tubelet embedding inherently preserves both positional and temporal information.
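A minimal NumPy sketch of this tokenization (ours; variable names are illustrative):

```python
import numpy as np

def extract_tubelets(video, t, h, w):
    """Cut a (T, H, W, C) clip into flattened tubelets of size t*h*w*C.

    Floor-divides each dimension, matching n_t = T//t, n_h = H//h, n_w = W//w.
    """
    T, H, W, C = video.shape
    n_t, n_h, n_w = T // t, H // h, W // w
    v = video[: n_t * t, : n_h * h, : n_w * w]        # drop remainders
    v = v.reshape(n_t, t, n_h, h, n_w, w, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)              # (n_t, n_h, n_w, t, h, w, C)
    return v.reshape(n_t * n_h * n_w, t * h * w * C)  # one row per tubelet
```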
### Self-Attention Mechanism
For the video-only pipeline, we use multi-head self-attention to compute different attention filters. This is achieved by using a transformer encoder with multiple sets of query (\(Q\)), key (\(K\)), and value (\(V\)) inputs to obtain \(n\) attention layers.
| **Var** | **Description** |
| --- | --- |
| \(N_{f}\) | Number of temporal divisions per video |
| \(N_{p}\) | Number of spatial divisions per frame |
| \(P\) | Size of each spatial division |
| \(F\) | Size of each temporal division |
| \(\mathbf{x}\) | Feature-extracted video patches |
| \(\mathbf{x}_{p}\) | Flattened video patches |
| \(\mathbf{E}\) | Linear projection layer |
| \(\mathbf{E}_{pos}\) | Positional embedding |
| \(\mathbf{z}_{0}\) | Initial patch embedding |
| \(\mathbf{z}^{\prime}_{1}\) | Output of multi-head self-attention layer |
| \(\mathbf{z}_{1}\) | Output of MLP |
| \(\mathbf{y}\) | Final video representation |
| \(d\) | Dimension of the transformer embedding output |
| \(\mathbf{y}_{a}\) | Output embedding for audio transformer |
| \(\mathbf{y}_{v}\) | Output embedding for video transformer |

Table 2: Variables used in our approach
Figure 3: Cuboid Embedding for spatio-temporal 3-D Attention
Self-attention means that the initial query, key, and value are all equal (i.e., \(Q=K=V\)) and come from the image patches of the detected faces. Our processing pipeline is similar to that used in Vision Transformer (ViT) [14] and Convolutional Cross EfficientNet [15].
In the video pipeline, attention is computed by first taking the softmax of the scaled dot-product similarity between \(Q\) and \(K\), and then multiplying it with \(V\) to compute the attention filter, as shown in Equation 1 below:
\[Attention(Q,K,V)=softmax\Big{(}\frac{QK^{T}}{\sqrt{d_{k}}}\Big{)}V \tag{1}\]
In the self-attention mechanism used in the video pipeline, \(Q=K=V\in\mathbb{R}^{3073\times 1280}\). This approach allows us to capture the spatio-temporal relationships in the video and compute cross-attention between the lip region and the audio without needing to synchronize them in time.
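As a concrete reference for Equation 1, here is a minimal NumPy sketch (ours, not the released implementation) of the attention computation used in the video pipeline:

```python
import numpy as np

def attention(Q, K, V):
    """Equation 1: softmax(Q K^T / sqrt(d_k)) V, computed row-wise."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Self-attention in the video pipeline: Q = K = V (3073 x 1280 here).
X = np.random.randn(3073, 1280).astype(np.float32)
out = attention(X, X, X)   # out.shape == (3073, 1280)
```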
### Video Transformer
Figure 2 illustrates how we process input for the video branch in the transformer encoder. In this section, we define the important parameters and their dimensions. The output of the VGG-16 feature extractor, which is the transformer encoder input, consists of \(N_{p}\) patches of size \(P\times P\) for each of the \(N_{f}\) frames. We denote the output of the feature extractor as \(\textbf{x}\in\mathbb{R}^{N_{f}\times N_{p}\times P^{2}}\). We partition the \(N_{f}\) frames into \(\frac{N_{f}}{F}\) parts, each of size \(F\), and then combine the patches within each part to obtain 3-dimensional patches of size \(P^{2}\cdot F\), which are then flattened. This can be written as:

\[\textbf{x}_{p}\in\mathbb{R}^{\frac{(N_{p}\cdot N_{f})}{F}\times(P^{2}\cdot F)} \tag{2}\]
Once the frames are rearranged in this format, we pass them through a linear layer **E** of dimension \((P^{2}\cdot F)\times(3\cdot P^{2}\cdot F)\) similar to [14]. The output of this projection is referred to as the patch embedding. The process is shown as:
\[\textbf{z}_{0}=[\textbf{x}_{CLS};\textbf{x}^{i}_{p}\textbf{E}]+\textbf{E}_{pos} \tag{3}\]
\[\textbf{z}^{\prime}_{1}=MSA(LN(\textbf{z}_{0}))+\textbf{z}_{0} \tag{4}\]
\[\textbf{z}_{1}=MLP(LN(\textbf{z}^{\prime}_{1}))+\textbf{z}^{\prime}_{1} \tag{5}\]
\[\textbf{y}=LN(\textbf{z}_{1}) \tag{6}\]
Here, \(\textbf{x}^{i}_{p}\) refers to the flattened \(i^{th}\) video patch, and \(\textbf{x}_{CLS}\) is the prepended classification token. \(\textbf{E}_{pos}\) is the positional embedding, and \(LN\) is the layer normalization. \(MSA\) is the multi-headed self-attention, and \(MLP\) is the multi-layer perceptron, as in [12, 14]. The prepended classification token (\(\textbf{z}^{0}_{0}=\textbf{x}_{CLS}\)) is a learnable embedding whose state at the output of the Transformer encoder (\(\textbf{z}^{0}_{1}\)) serves as the video representation \(\textbf{y}\). The positional embedding \(\textbf{E}_{pos}\) helps maintain the positional information.
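The following PyTorch sketch (ours; layer sizes and hyperparameters are illustrative assumptions) implements one pre-norm encoder block corresponding to Eqs. (4)-(5), with the final layer norm of Eq. (6) applied outside; the patch embedding of Eq. (3) is assumed to happen upstream:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One pre-norm transformer encoder block implementing Eqs. (4)-(5)."""
    def __init__(self, dim, heads=8, mlp_ratio=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, z0):                                    # z0: (batch, tokens, dim)
        x = self.ln1(z0)
        z1p = self.msa(x, x, x, need_weights=False)[0] + z0   # Eq. (4)
        z1 = self.mlp(self.ln2(z1p)) + z1p                    # Eq. (5)
        return z1                                             # y = LN(z1) applied outside
```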
### Cross-Attention Mechanism
To enable cross-attention between the video and audio modalities for lip-synchronization and lip-audio consistency, we need to account for the different dimensions of \(Q\), \(K\), and \(V\). Specifically, \(Q\in\mathbb{R}^{300\times 4900}\) comes from the video-only pipeline, while \(K\) and \(V\in\mathbb{R}^{300\times 1470}\) come from the lip-audio pipeline. Since \(Q\) and \(K\) need to have the same dimensions for computing the similarity in Equation 1, we pass \(Q\) through a linear layer \(L_{1}\in\mathbb{R}^{4900\times 4900}\), which maps \(Q\) to \(Q^{\prime}\in\mathbb{R}^{300\times 4900}\). Similarly, we project \(K\) using a linear layer \(L_{2}\in\mathbb{R}^{1470\times 4900}\) into \(K^{\prime}\in\mathbb{R}^{300\times 4900}\) to ensure compatibility of dimensions. This is shown below:
\[Q\cdot L_{1} =Q^{\prime}, Q^{\prime}\in\mathbb{R}^{300\times 4900}, \tag{7}\] \[K\cdot L_{2} =K^{\prime}, K^{\prime}\in\mathbb{R}^{300\times 4900} \tag{8}\]
After these projections, we compute the attention filter, i.e., the similarity between \(Q^{\prime}\) and \(K^{\prime}\), pass it through a softmax, and finally multiply it with \(V\). That is, we compute attention exactly as in Equation 1, with \(Q=Q^{\prime}\) and \(K=K^{\prime}\).
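A minimal sketch of these projections (ours), reusing the `attention` helper from the self-attention sketch above; the weights \(L_{1},L_{2}\) are random stand-ins for learned layers:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((300, 4900)).astype(np.float32)    # video branch
K = rng.standard_normal((300, 1470)).astype(np.float32)    # lip-audio branch
V = K.copy()

L1 = rng.standard_normal((4900, 4900)).astype(np.float32)  # learned in practice
L2 = rng.standard_normal((1470, 4900)).astype(np.float32)

Qp, Kp = Q @ L1, K @ L2        # Eqs. (7)-(8): both now 300 x 4900
out = attention(Qp, Kp, V)     # Equation 1 with Q=Q', K=K'; out.shape == (300, 1470)
```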
### Multi-Layer Perceptron (MLP)
In the final stage of our approach, we pass the output embeddings from the transformer encoder modules of both the video and audio modalities through a multi-layer perceptron (MLP) head to obtain the final classification label. Let \(\textbf{y}_{a}\) and \(\textbf{y}_{v}\) denote the output embeddings from the audio and video transformer encoder blocks, respectively. We concatenate these two embeddings to obtain a joint embedding \(\textbf{y}\in\mathbb{R}^{2d}\), where \(d\) is the dimension of the transformer embedding output; in our case, \(d=2\). We then pass this joint embedding through a linear layer of size \(2d\times 2\) to output the final probabilities for the real and fake classes.
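A minimal sketch of this classification head (ours; with \(d=2\) as stated, the joint embedding is 4-dimensional):

```python
import torch
import torch.nn as nn

d = 2                              # transformer embedding dimension (as above)
head = nn.Linear(2 * d, 2)         # joint embedding -> [real, fake] logits

y_a = torch.randn(1, d)            # audio transformer output embedding
y_v = torch.randn(1, d)            # video transformer output embedding
logits = head(torch.cat([y_a, y_v], dim=-1))
probs = logits.softmax(dim=-1)     # final real/fake probabilities
```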
## 4 Experiments and Results
In this section, we describe our experimental setup as well as demonstrate the quantitative and qualitative results of our model.
### Experimental Setup
We tested our model on the DFDC, DF-TIMIT, and FakeAVCeleb datasets, which are the only known datasets containing deepfakes in both the audio and video modalities. We leveraged four A100 (80 GB) GPUs with NVLink for training our model; each GPU used approximately 40 GB of memory during training. The batch size is 8 videos, with a total of 1310 batches per epoch. The loss function used is cross-entropy loss with a learning rate of \(10^{-6}\). To report accuracy per epoch, we compute the accuracy for each batch and average it across all batches of that epoch. We show that our method strongly outperforms all existing methods on multi-modal deepfake detection in Table 3.
### Qualitative Results
In Figure 4, we display some selected frames of videos from the DFDC, DF-TIMIT, and FakeAVCeleb datasets, along
with the corresponding labels (real or fake). Note that the real videos in the DF-TIMIT section of Figure 4 are not from the testing set but are the original videos from which the deepfakes were generated. Overall, our qualitative results demonstrate the effectiveness of our proposed approach in detecting deepfaked videos, particularly in capturing abnormal features in such videos.
### Ablation Study
To evaluate the contribution of different components of our proposed network architecture, we conduct two ablation studies.
**Lip-Audio Only.** The ablated pipeline utilized in our model involved both audio and video modalities but only focused on lip-sync between the audio and the lip region of the video. While this model learned to differentiate between real and fake videos to an extent, solely relying on lip-audio sync is not enough, as evidenced by Table 5. As a result, we maintain that the video pipeline is integral to achieving better results, since the audio-lip pipeline omits the visual features of the full face.
**Effect of VGG-16 (Removing Transformer Encoder).** Likewise, we also tried removing the transformer encoder from our unimodal pipeline and performed the classification based on the features extracted by our fine-tuned VGG-16 architecture. Our results show that the F1-score of such a fine-tuned VGG-16 architecture is 87.77%, with an AUC score of 0.944, as shown in Table 5. While the F1-score is high for an independent VGG-16 network, the addition of transformers after feature extraction still provides valuable gains.
### Failure Cases
While our deepfake detection approach achieves high accuracy on all multimodal datasets, it still has some failure cases. One of the most common causes of failure is when the face is not directly pointing toward the camera. In such cases, our approach may not be able to capture enough features of the face and lip movements, which can result in inaccurate predictions.
Another issue that can lead to failure is when the video is blurry or has low resolution. Similarly, when two speakers are facing each other, our approach may detect the non-speaker's face, leading to incorrect predictions. This can happen when the non-speaker's face is more visible or when the speaker's lips and facial region are not clearly visible.
## 5 Conclusion
In this work, we proposed a novel deepfake detection method that leverages both audio and visual modalities. Our proposed approach achieved state-of-the-art performance on the DFDC and DF-TIMIT multimodal deepfake detection datasets, outperforming eleven prior deepfake detection methods, and achieves close to state-of-the-art performance on the FakeAVCeleb dataset, independent of the deepfake generation methodology. We also provide ablations on sections of the model that perform less than optimally compared to the full multimodal approach. Additionally, we evaluated our approach on in-the-wild videos and demonstrated promising results.
## 6 Future Work
One primary issue with deepfake datasets is the major class imbalance between real and fake samples, which becomes a bottleneck when creating an unbiased model; either larger, more balanced datasets or methods that can handle class imbalance are necessary. Current deepfake detection methods also cannot adapt to multi-speaker situations in videos, which is another promising direction for future work. Model performance is further impacted by noisy data due to factors such as poor lighting conditions and camera angle, i.e., if participants are not directly facing the camera. Future deepfake detection requires techniques that go beyond facial and audio analysis.
Figure 4: Qualitative Results: We show some sample frames from the DFDC, FakeAVCeleb, and DF-TIMIT that the model uses during training and testing. Our model uses both audio-video modality and the cross-attention between the two modalities to classify as real and fake videos.
Figure 5: Evaluation on In-The-Wild Videos: To evaluate the robustness of our model, we tested it on out-of-dataset videos. Our model correctly classified sample videos downloaded from YouTube, the MIT Deepfake Lab, and other public sources. In this image, we show that our method was able to identify a recently popular deepfake video of the celebrities Tom Cruise and Will Smith. |
2309.14115 | On Galois realizations of special linear groups | We study the determinant of certain etale sheaves constructed via middle
convolution in order to realize special linear groups regularly as Galois
groups over the rationals. | Michael Dettweiler, Stefan Reiter | 2023-09-25T13:13:28Z | http://arxiv.org/abs/2309.14115v1 | # On Galois realizations of special linear groups
###### Abstract
We study the determinant of certain etale sheaves constructed via middle convolution in order to realize special linear groups regularly as Galois groups over \(\mathbb{Q}(t)\).
###### Contents
* 1 Basic results and notation
* 1.1 Galois covers
* 1.2 Monodromy of etale sheaves.
* 1.3 Local monodromy
* 2 Construction of some smooth sheaves of rank \(2\) with finite monodromy
* 2.1 The monodromy tuples
* 2.2 Construction of the underlying sheaves
* 3 Galois realizations of special linear groups
* 3.1 Construction of the underlying sheaves via middle convolution
* 3.2 Galois realizations of finite and profinite special linear groups
* 4 Appendix: Arithmetic middle convolution
## Introduction
Recall that the regular inverse Galois problem is the following question:
_Given a finite group \(G\), does there exist a Galois extension \(L/\mathbb{Q}(t)\) with \(G\simeq\operatorname{Gal}(L/\mathbb{Q}(t))\) such that additionally \(G\simeq\operatorname{Gal}(L/\overline{\mathbb{Q}}(t))\) holds?_
If this condition holds for \(G\), then one says that \(G\)_occurs regularly as Galois group over \(\mathbb{Q}(t)\)_.1 The second isomorphism, the regularity condition, ensures that the field extension \(L/\mathbb{Q}(t)\) is geometric in the sense that it arises from a ramified cover \(f:X\to\mathbb{P}^{1}_{\mathbb{Q}}\) with \(\operatorname{Aut}(f)\simeq G\).
It follows from Hilbert's irreducibility theorem that a positive answer to the regular inverse Galois problem implies a positive answer to the inverse Galois problem: _can every finite group \(G\) be realized as Galois group of a Galois extension \(L/\mathbb{Q}\)?_ Both problems, however, are far from being solved, cf. [12], [15]. A weaker question, first posed by John Thompson, is the following:
_Given a finite field \(\mathbb{F}_{q},\) is it true that almost all finite groups of Lie type \(G(\mathbb{F}_{q})\) occur regularly as Galois group over \(\mathbb{Q}(t)\)?_
It follows from the work of Thompson and Völklein ([14], [13]) and from our previous work [4], [5] that Thompson's question has a positive answer under the further restriction to specific families of Lie type (such as \(\mathrm{GL}_{n}(\mathbb{F}_{q})\)) if \(q\) is odd. It is the aim of this work to prove a similar result for the family of special linear groups (cf. Thm. 3.2.2 and its corollary):
**Theorem:**_Let \(\mathbb{F}_{q}\) be a finite field of odd order \(q>3.\) Then the special linear group \(\mathrm{SL}_{n}(\mathbb{F}_{q})\) occurs regularly as Galois group over \(\mathbb{Q}(t)\) if \(n>8\varphi(q-1)+11,\) where \(\varphi\) denotes Euler's \(\varphi\)-function._
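For orientation (a worked illustration of ours, not part of the original statement), the threshold \(8\varphi(q-1)+11\) evaluates as follows for small admissible \(q\):

\[q=5:\ \varphi(4)=2\ \Rightarrow\ n>27;\qquad q=9:\ \varphi(8)=4\ \Rightarrow\ n>43;\qquad q=13:\ \varphi(12)=4\ \Rightarrow\ n>43.\]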
The proof relies on the Galois representations associated to certain non-rigid etale sheaves of rank two with finite monodromy. Applying to these sheaves two middle convolution steps with quadratic Kummer sheaves, combined with tensor operations with rank-one sheaves, and using the permanence of having at most quadratic determinant under \(\mathrm{MC}_{-\mathbf{1}}\) ([3], Thm. 4.3.5), we obtain etale sheaves whose monodromy is contained in \(\mathrm{SL}_{n}(\overline{\mathbb{Q}}_{\ell}).\) The residual representations associated to these sheaves then give rise to the above result.
We thank N. Katz for helpful remarks on an earlier version of this article.
## 1 Basic results and notation
In the following, let \(R\) be a field or a normal integral domain which is finite over \(\mathbb{Z},\) let \(X\) be a connected, regular and separated scheme of finite type over \(R,\) and let \(\overline{x}\) be a geometric point of \(X.\)
**1.1 Galois covers** ([7], [12]) Any finite etale Galois cover \(f:Y\to X\) with \(G=\mathrm{Aut}(f)\) corresponds up to isomorphism to a surjective homomorphism of the etale fundamental group of \(X\) onto \(G\):
\[\Pi_{f}:\pi_{1}(X,\overline{x})\to G\leq\mathrm{Sym}(Y(\bar{x}))\quad\text{ with}\quad Y(\bar{x})=\mathrm{Hom}_{X}(\bar{x},Y).\]
**1.1.1** Assume that \(R\) is a subring of \(\mathbb{C}\) and that
\[X=\mathbb{A}_{R}^{1}\setminus\mathbf{x}=\mathbb{A}_{R}^{1}\setminus\{x_{1}, \ldots,x_{r}\}=\mathrm{Spec}\left(R[x][\frac{1}{(x-x_{1})\cdots(x-x_{r})}]\right)\]
with \((x-x_{1})\cdots(x-x_{r})\in R[x]\) separable and \(x_{i}\in\overline{\mathrm{Quot}(R)}\) etale over \(\mathrm{Spec}\left(R\right).\) Let
\[\pi_{1}^{\mathrm{top}}(\mathbb{P}^{1}(\mathbb{C})\setminus\{x_{1},\ldots,x_{r },x_{r+1}=\infty\})=\langle\gamma_{1},\ldots,\gamma_{r+1}\mid\gamma_{1}\cdots \gamma_{r+1}=1\rangle\ \stackrel{{\iota}}{{\longrightarrow}}\ \pi_{1}(\mathbb{A}_{R}^{1}\setminus\mathbf{x})\]
be the natural inclusion, where \(\gamma_{i}\left(i=1,\ldots,r+1\right)\) is a counterclockwise simple loop around \(x_{i}\) as usual (cf. [12], Chap. I.1, [7], Appendix A). The _monodromy tuple of \(f:Y\to X\)_ is by definition
the tuple of elements
\[(\sigma_{1},\ldots,\sigma_{r},\sigma_{\infty}=\sigma_{r+1})\in G^{r+1}\quad \text{where}\quad\sigma_{i}:=\Pi_{f}(\iota(\gamma_{i})),\quad i=1,\ldots,r+1.\]
Note that by construction, the _product relation_\(\sigma_{1}\cdots\sigma_{r+1}=1\) holds. Moreover, the operation of \(G\) on \(Y(\bar{x})\) is isomorphic to the regular representation of \(G\) on itself. In this sense, the homomorphism \(\Pi_{f}\) will be viewed as homomorphism of \(\pi_{1}(X,\overline{x})\) onto \(G=\text{Aut}(f).\)
**1.1.2** For \(R\) finite over \(\mathbb{Z},\) let \(x\in X(\mathbb{F}_{q^{k}}).\) Then the functoriality of \(\pi_{1}\) yields a homomorphism
\[\pi_{1}(x,\overline{x})\simeq\text{Gal}(\overline{\mathbb{F}}_{q}/\mathbb{F} _{q^{k}})\to\pi_{1}(X,\overline{x}).\]
This leads to the notion of a (geometric) Frobenius element \(\text{Frob}_{x}\) in \(\pi_{1}(X,\overline{x})\) by sending the profinite generator \(\text{Frob}_{q^{k}}\) of \(\text{Gal}(\overline{\mathbb{F}}_{q}/\mathbb{F}_{q^{k}})\) (which is inverse to the arithmetic Frobenius \(\overline{\mathbb{F}}_{q^{k}}\to\overline{\mathbb{F}}_{q^{k}},\)\(a\mapsto a^{q^{k}}\)) to \(\pi_{1}(X,\overline{x}).\) For \(\tilde{x}\) another geometric point of \(X\) there is an isomorphism \(\pi_{1}(X,\overline{x})\stackrel{{\beta_{x,\tilde{x}}}}{{\to}} \pi_{1}(X,\tilde{x}),\) well defined up to inner automorphisms of \(\pi_{1}(X,\overline{x})\). By sending \(\text{Frob}_{x}\in\pi_{1}(X,\overline{x})\) via this isomorphism to \(\pi_{1}(X,\tilde{x})\) one obtains a Frobenius element, also denoted \(\text{Frob}_{x},\) in the group \(\pi_{1}(X,\tilde{x}),\) well defined up to inner automorphisms.
### Monodromy of etale sheaves.
([2], [7])
In the following, let \(\mathscr{R}\) be a ring used as coefficient ring in etale cohomology (like \(\overline{\mathbb{Q}}_{\ell},\) a finite extension of \(\mathbb{Q}_{\ell},\) the valuation ring of such a field, or the residue field of these). Let \(X\) be as in the previous section and let \(\text{LocSys}(X,\mathscr{R}),\) resp. \(\text{Constr}(X,\mathscr{R}),\) denote the category of smooth (=lisse), resp. constructible, \(\mathscr{R}\)-sheaves on \(X.\)
For each geometric point \(\bar{x}\) of \(X,\) the association \(L\in\text{LocSys}(X,\mathscr{R})\)\(\longmapsto\)\(L_{\bar{x}}\) establishes an equivalence of categories between \(\text{LocSys}(X,\mathscr{R})\) and the category of finite continuous \(\pi_{1}(X,\bar{x})\)-modules. The monodromy representation of \(L\) is by definition this representation \(\rho_{L}:\pi_{1}(X,\bar{x})\to\text{Aut}(L_{\bar{x}}).\)
Let \(f:Y\to X\) be a Galois cover with associated homomorphism \(\pi_{f}:\pi_{1}(X,\bar{x})\to G=\text{Aut}(f)\) as above and let \(V=\mathscr{R}^{n}\,(n\in\mathbb{N}).\) If \(\rho:G\to\text{GL}(V)\) is a representation, then one has a sheaf \(L=\mathscr{L}_{(f,\rho)}\in\text{LocSys}(X,\mathscr{R})\) associated to the composition
\[\rho_{L}=\rho\circ\Pi_{f}:\pi_{1}(X,\bar{x})\to\text{GL}(V).\]
If \(y:\text{Spec}\,(\mathbb{F}_{q})\to X\) is a closed point of \(X,\) and if \(L\in\text{Constr}(X,\mathscr{R}),\) then the stalk \(L_{\bar{y}}\) is a \(\pi_{1}(y,\bar{y})\simeq\text{Gal}(\overline{\mathbb{F}}_{q}/\mathbb{F}_{q})\)-module in a natural way. Hence one has associated the characteristic polynomial \(\det(1-\text{Frob}_{y}t,L_{\bar{y}})\) to the Frobenius element \(\text{Frob}_{y}\in\pi_{1}(y,\bar{y}).\) Let \(L\in\text{LocSys}(X,\mathscr{R})\) and let \(\rho_{L}:\pi_{1}(X,\bar{x})\to\text{GL}(V)\) be the monodromy representation of \(L.\) By [2], 1.1.8, one has an equality of characteristic polynomials
\[\det(1-\text{Frob}_{y}t,L_{\bar{y}})=\det(1-\rho_{L}(\text{Frob}_{y})t,V)\,, \tag{1.1}\]
where on the right hand side, the Frobenius element \(\text{Frob}_{y}\) is viewed as an element (or rather a conjugacy class) in \(\pi_{1}(X,\bar{x})\) via the isomorphism \(\beta_{\bar{y},\bar{x}}\) from Section 1.1.2.
**1.2.4** Let \(L\in\mathrm{LocSys}(\mathbb{A}^{1}_{R}\setminus\mathbf{x},\mathscr{R}).\) Then the _monodromy tuple_ of \(L\) is defined as
\[\mathbf{T}=\mathbf{T}_{L}=(T_{1},\ldots,T_{r+1})\in\mathrm{GL}_{n}(\mathscr{R})^ {r+1},\quad T_{i}=\rho_{L}(\iota(\gamma_{i}))\,(i=1,\ldots,r+1),\]
with \(\iota:\pi_{1}(\mathbb{A}^{1}(\mathbb{C})\setminus\mathbf{x})\to\pi_{1}( \mathbb{A}^{1}_{R}\setminus\mathbf{x})\) as in Section 1.1.1.
**1.3 Local monodromy** ([2], [12]) Recall the notion of local monodromy: if \(L\) is a smooth sheaf on an open subscheme \(U\) of \(X\) (\(X\) a smooth and geometrically connected variety over a field \(\kappa\)) and if \(x\) is a point of \(S=X\setminus U\) then the stalk \(L_{\overline{\eta}_{x}}\) (with \(\overline{\eta}_{x}\) denoting an algebraic closure of the completion of the function field \(\eta_{x}\) of \(X\) w.r. to \(x\)) is a \(\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})\)-module in a natural way: the _local monodromy of \(L\) at \(x\)_. The associated local monodromy representation is denoted
\[\rho_{(x)}:\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})\ \longrightarrow\ \mathrm{Aut}(L_{ \overline{\eta}_{x}})\simeq\mathrm{GL}_{n}(\mathscr{R}).\]
If \(x\) is a closed point of \(U\) then the stalk \(L_{\overline{x}}\) is a \(\mathrm{Gal}(\overline{\kappa}/\kappa(x))\)-module. The associated representation of \(\mathrm{Gal}(\overline{\kappa}/\kappa(x))\) is denoted \(\rho_{x}.\) Note that for \(X=\mathbb{P}^{1}_{\kappa},\) for \(x\in\mathbb{P}^{1}(\kappa),\) and for \(L\) tame at \(x,\) one has an isomorphism
\[\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})^{\mathrm{tame}}=I^{\mathrm{tame}}_{ x}\rtimes\mathrm{Gal}(\overline{\kappa}/\kappa)=\widehat{\mathbb{Z}}(1)(\overline{ \kappa})\rtimes\mathrm{Gal}(\overline{\kappa}/\kappa)\,, \tag{1.2}\]
where \(\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})^{\mathrm{tame}}\) denotes the tame quotient of \(\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})\) and where \(I^{\mathrm{tame}}_{x}\) denotes the tame inertia group at \(x.\) If \(x_{i}\in\mathbb{P}^{1}(\mathbb{C})\) is as above then an image of a profinite generator \(\gamma_{i}\) of \(I^{\mathrm{tame}}_{x_{i}}\) in \(\mathrm{Aut}(L_{\overline{\eta}_{x_{i}}})\simeq\mathrm{GL}_{n}(\mathscr{R})\) is conjugate to the \(i\)-th entry of the monodromy tuple \(T_{i}\) (cf. [12]). Similarly as in Section 1.2.3 one obtains a conjugacy class of morphisms \(\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})\to\pi_{1}(X,\tilde{x})\) (for \(\tilde{x}\) another base point of \(X\)), describing the operation of \(\mathrm{Gal}(\overline{\eta}_{x}/\eta_{x})\) on \(L_{\overline{\eta}_{x}}.\)
## 2 Construction of some smooth sheaves of rank \(2\) with finite monodromy
**2.1 The monodromy tuples** We use the following notation: for \(n\in\mathbb{N},\) let \((\zeta_{n})_{n\in\mathbb{N}}\in\overline{\mathbb{Q}}\) be a system of primitive \(n\)-th roots of unity such that for \(d\mid n\) one has \(\zeta_{d}=\zeta_{n}^{n/d}.\) Let also \(m\in\mathbb{N}\) be a fixed integer \(>2\) and fix an embedding of \(\overline{\mathbb{Q}}\) into \(\overline{\mathbb{Q}}_{\ell}.\)
Let \((T_{1},\ldots,T_{r+1})\in\mathrm{GL}_{2}(\overline{\mathbb{Q}}_{\ell})^{r+1},r\geq 4,\) with
\[T_{i} = \mathrm{diag}(\lambda_{i},\lambda_{i}^{-1}),\quad i=1,\ldots,r-3, \,\text{with }1\neq\lambda_{i}\in\overline{\mathbb{Q}}_{\ell},\] \[T_{r-2} = \mathrm{diag}(1,-1),\] \[T_{r-1} = \left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\] \[T_{r} = -(T_{1}\cdots T_{r-1})^{-1},\] \[T_{r+1} = -\mathbf{1}_{2},\]
with \(\mathbf{1}_{n}\) denoting the \(n\times n\)-identity matrix and with \(\mathrm{diag}(\mu_{1},\ldots,\mu_{n})\in\mathrm{GL}_{n}(\overline{\mathbb{Q}}_{ \ell})\) denoting the diagonal matrix with diagonal entries \(\mu_{1},\ldots,\mu_{n}\) (in this order).
We assume in the following that the following conditions hold:
1. \(2\varphi(m)<r-4\),
2. the first \(2\varphi(m)\) elements \(\lambda_{i}\) run twice through the primitive powers of \(\zeta_{m}\), and the remaining \(\lambda_{i}\) are all equal to \(-1\).
Under these conditions, we define
\[\mathbf{T}_{m,r}:=(T_{1},\ldots,T_{r},T_{r+1})\in\mathrm{GL}_{2}(\overline{ \mathbb{Q}}_{\ell})^{r+1}\quad\text{and}\quad Q_{m}:=\langle T_{1},\ldots,T_{r+ 1}\rangle\leq\mathrm{GL}_{2}(\overline{\mathbb{Q}}_{\ell}).\]
**2.1.1 Remark**.: _Note that then the \(r\)-th component in \(\mathbf{T}_{m,r}\) is an element of order \(4,\) having a trivial \(1\)-eigenspace:_
\[T_{r}=\pm\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right).\]
_Note further that the only components of \(\mathbf{T}_{m,r}\) with nontrivial invariants are the matrices \(T_{r-2}\) and \(T_{r-1}.\)_
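For concreteness, here is the smallest admissible case (a worked example of ours, not part of the original text): take \(m=4\) and \(r=9\), so that condition a) reads \(2\varphi(4)=4<r-4=5\). Condition b) then gives \(\lambda_{1}=i,\ \lambda_{2}=-i,\ \lambda_{3}=i,\ \lambda_{4}=-i,\ \lambda_{5}=\lambda_{6}=-1,\) and hence

\[\mathbf{T}_{4,9}=\Big{(}\mathrm{diag}(i,-i),\ \mathrm{diag}(-i,i),\ \mathrm{diag}(i,-i),\ \mathrm{diag}(-i,i),\ -\mathbf{1}_{2},\ -\mathbf{1}_{2},\ \mathrm{diag}(1,-1),\ \left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\ \left(\begin{array}{cc}0&1\\ -1&0\end{array}\right),\ -\mathbf{1}_{2}\Big{)}.\]

One checks directly that the product relation \(T_{1}\cdots T_{10}=\mathbf{1}_{2}\) holds and that \(T_{9}=-\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\) has order \(4\) with trivial \(1\)-eigenspace, as stated in Remark 2.1.1.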
### Construction of the underlying sheaves
**2.2.1** It follows from the strong rigidity theorem ([12], Thm. I.4.11) that there exists an etale Galois cover \(f:X\to\mathbb{A}_{\mathbb{Q}}^{1}\setminus\zeta\) with \(\zeta=\{\zeta_{m}^{d}\mid d\in(\mathbb{Z}/m\mathbb{Z})^{*}\}\) such that the monodromy tuple of \(f\) is
\[(\zeta_{m},\zeta_{m}^{d_{2}},\ldots,\zeta_{m}^{d_{\varphi(m)}},1)\,\in( \overline{\mathbb{Q}}_{\ell}^{\,\,\times})^{\varphi(m)+1},\]
with \(d_{1}=1,d_{2},\ldots,d_{\varphi(m)}\) running through the elements of \((\mathbb{Z}/m\mathbb{Z})^{*},\) cf. [12], Thm. 5.1. The Galois cover \(f,\) together with the embedding of \(\mu_{m}\) into \(\overline{\mathbb{Q}}_{\ell}^{\,\,\times}=\mathrm{GL}_{1}(\overline{\mathbb{ Q}}_{\ell}),\) defines a smooth etale \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \(\mathscr{L}_{1}\) on \(\mathbb{A}_{\mathbb{Q}}^{1}\setminus\zeta\) of rank one. Let
\[\zeta^{\prime}=\left\{\pm\zeta_{2m},\pm\zeta_{2m}^{d_{2}},\ldots,\pm\zeta_{ 2m}^{d_{\varphi(m)}}\right\}.\]
By pulling back \(\mathscr{L}_{1}\) along the map \(\mathbb{A}_{\mathbb{Q}}^{1}\setminus\zeta^{\prime}\to\mathbb{A}_{\mathbb{Q} }^{1}\setminus\zeta,\)\(x\mapsto x^{2},\) one obtains a smooth sheaf \(\mathscr{L}_{2}\) on \(\mathbb{A}_{\mathbb{Q}}^{1}\setminus\zeta^{\prime}\) with monodromy tuple
\[(\zeta_{m},\zeta_{m}^{d_{2}},\ldots,\zeta_{m}^{d_{\varphi(m)}},\zeta_{m}, \zeta_{m}^{d_{2}},\ldots,\zeta_{m}^{d_{\varphi(m)}},1)\,\in(\overline{\mathbb{ Q}}_{\ell}^{\,\,\times})^{2\varphi(m)+1},\]
up to a suitable renumbering of the elements in \(\zeta^{\prime}.\)
**2.2.2 Remark**.: _By construction, the sheaf \(\mathscr{L}_{2}\) has the property that for any \(x\in\mathbb{A}^{1}(\mathbb{Q})\setminus\zeta^{\prime},\) under the chain of isomorphisms \(\pi_{1}(x,\overline{x})\simeq\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\simeq\pi_{1}(-x,-\overline{x}),\) the \(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\)-modules \((\mathscr{L}_{2})_{\overline{x}}\) and \((\mathscr{L}_{2})_{-\overline{x}}\) are isomorphic._
**2.2.3** In the following, let \(\mathbf{x}=\{x_{1},\ldots,x_{r-1}\}\subset\overline{\mathbb{Q}}^{\,\times}\) with \(r-1>2\varphi(m)+2\) be a set of pairwise distinct points such that

\[x_{i}=-x_{\varphi(m)+i}=\zeta_{2m}^{d_{i}}\quad(i=1,\ldots,\varphi(m),\,d_{i}\in(\mathbb{Z}/m\mathbb{Z})^{*}\text{ as above}),\]

such that for \(i>2\varphi(m)\) the element \(x_{i}\) is \(\mathbb{Q}\)-rational, and such that

\[x_{r-2}\neq-x_{r-1}. \tag{2.1}\]
Since \(\zeta^{\prime}\subset\mathbf{x}\), we can view \(\mathscr{L}_{2}\) as a smooth sheaf on \(\mathbb{A}^{1}_{\mathbb{Q}}\setminus\mathbf{x}\) by restriction. Using suitable quadratic covers, by the construction in Section 1.1, there exist smooth sheaves \(\mathscr{L}_{3},\mathscr{L}_{4}\) on \(\mathbb{A}^{1}\setminus\mathbf{x}\) whose monodromy tuples are \(r\)-tuples (the last component belonging to the point at \(\infty\)) of the form
\[\mathbf{T}_{\mathscr{L}_{3}}=(1,\ldots,1,-1,\ldots,-1,1,1,\pm 1)\quad\text{and} \quad\mathbf{T}_{\mathscr{L}_{4}}=(1,\ldots,1,-1,-1,1),\]
resp., where in \(\mathbf{T}_{\mathscr{L}_{3}}\) the entries \(-1\) are at the positions \(2\varphi(m)+1,\ldots,r-3.\) Let \(\mathscr{L}^{\prime}=\mathscr{L}_{2}\otimes\mathscr{L}_{3}\otimes\mathscr{L}_ {4}\) and let \(\mathscr{L}^{\prime\prime}=(\mathscr{L}_{2}\otimes\mathscr{L}_{3})^{\vee}\) be the dual of \(\mathscr{L}_{2}\otimes\mathscr{L}_{3}\).
Form the external tensor product \(\mathscr{N}=\mathscr{L}^{\prime}\boxtimes\mathscr{L}^{\prime\prime}\) on \(V=\mathbb{A}^{1}_{x}\setminus\{x_{1},\ldots,x_{r-1}\}\times\mathbb{A}^{1}_{y }\setminus\{x_{1},\ldots,x_{r-1}\}\) with respect to the canonical projections. Note that
\[\pi_{1}(V,(\bar{x}_{0},\bar{y}_{0}))=\pi_{1}(\mathbb{A}^{1}_{x}\setminus\{x_ {1},\ldots,x_{r-1}\},\bar{x}_{0})\times\pi_{1}(\mathbb{A}^{1}_{y}\setminus\{x _{1},\ldots,x_{r-1}\},\bar{y}_{0}) \tag{2.2}\]
(where we view \(\bar{x}_{0},\bar{y}_{0}\) as complex points) comes equipped with the projections \(\pi_{x},\pi_{y}\) onto \(\pi_{1}(\mathbb{A}^{1}_{x}\setminus\{x_{1},\ldots,x_{r-1}\},\bar{x}_{0})\), resp. \(\pi_{1}(\mathbb{A}^{1}_{y}\setminus\{x_{1},\ldots,x_{r-1}\},\bar{y}_{0}).\) In the following, we choose a base point \((\bar{x}_{0},\bar{y}_{0})\) of \(V\) with \(\mathbb{Q}\)-rational points \(x_{0},y_{0}\) satisfying \(x_{0}\neq y_{0}\).
Let \(\gamma_{1,x},\ldots,\gamma_{r-1,x},\gamma_{\infty,x}\), resp. \(\gamma_{1,y},\ldots,\gamma_{r-1,y},\gamma_{\infty,y}\), be standard counterclockwise generators of \(\pi_{1}(\mathbb{A}^{1}_{x}\setminus\{x_{1},\ldots,x_{r-1}\}(\mathbb{C}),\bar{ x}_{0})\), resp. \(\pi_{1}(\mathbb{A}^{1}_{y}\setminus\{x_{1},\ldots,x_{r-1}\}(\mathbb{C}),\bar{ y}_{0})\), viewed as elements in \(\pi_{1}(\mathbb{A}^{1}_{x}\setminus\{x_{1},\ldots,x_{r-1}\},\bar{x}_{0})\), resp. \(\pi_{1}(\mathbb{A}^{1}_{y}\setminus\{x_{1},\ldots,x_{r-1}\},\bar{y}_{0})\), as in Section 1.1. Hence the monodromy of \(\mathscr{N}\) is given by
\[\rho_{\mathscr{N}}:\pi_{1}(V)\to\overline{\mathbb{Q}}_{\ell}^{\ \times},\quad \alpha\mapsto(\rho_{\mathscr{L}_{2}\otimes\mathscr{L}_{3}\otimes\mathscr{L}_ {4}})(\pi_{x}(\alpha))\cdot\rho_{\mathscr{L}_{2}\otimes\mathscr{L}_{3}}^{-1}( \pi_{y}(\alpha)). \tag{2.3}\]
By Eq 2.2 we can view the elements \(\gamma_{i,x},\gamma_{j,y}\) also as elements in \(\pi_{1}(V,(\bar{x}_{0},\bar{y}_{0})).\) Hence
\[\rho_{\mathscr{N}}(\gamma_{i,x})=\rho_{\mathscr{N}}(\gamma_{i+\varphi(m),x})=\zeta_{m}^{d_{i}}\ (i=1,\ldots,\varphi(m)),\quad\rho_{\mathscr{N}}(\gamma_{2\varphi(m)+1,x})=\cdots=\rho_{\mathscr{N}}(\gamma_{r-1,x})=-1, \tag{2.4}\]
\[\rho_{\mathscr{N}}(\gamma_{i,y})=\rho_{\mathscr{N}}(\gamma_{i+\varphi(m),y})= \zeta_{m}^{-d_{i}}\ (i=1,\ldots,\varphi(m)),\quad\rho_{\mathscr{N}}(\gamma_{2\varphi(m)+1,y})= \cdots=\rho_{\mathscr{N}}(\gamma_{r-3,y})=-1 \tag{2.5}\]
and
\[\rho_{\mathscr{N}}(\gamma_{r-2,y})=\rho_{\mathscr{N}}(\gamma_{r-1,y})=1.\]
Consider the canonical quotient map
\[h:\mathbb{A}^{2}_{x,y}\to\mathbb{A}^{2}_{s,t},\,(x,y)\mapsto(x+y,x\cdot y)\]
with respect to the automorphism which switches the coordinates. Under the map \(h\), the diagonal \(\Delta\subset\mathbb{A}^{2}_{x,y}\) is mapped to the conic \(C:t-s^{2}/4=0\), and a line \(x-x_{i}=0\) (resp. \(y-x_{i}=0\)) is mapped under \(h\) to the tangent line \(L_{x_{i}}:t-x_{i}s+x_{i}^{2}=0\) to \(C.\)
Let \(V^{\prime}=V\setminus\Delta(V)\), with \(\Delta(V)\) denoting the diagonal, let \(\mathscr{N}^{\prime}=\mathscr{N}|_{V^{\prime}}\), and let
\[W:=\mathbb{A}^{2}_{s,t}\setminus(C\cup(\bigcup_{i=1,\ldots,r-1}L_{x_{i}})).\]
The map \(h\) restricts to a quadratic etale cover \(\tau:V^{\prime}\to W.\)
The direct image \(\mathscr{E}=\tau_{*}(\mathscr{N}^{\prime})\) is a smooth rank-2 sheaf on \(W\) whose monodromy representation is by construction the induced rank-2 representation \(\rho_{\mathscr{E}}=\operatorname{Ind}_{\pi_{1}(V^{\prime})}^{\pi_{1}(W)}(\rho_ {\mathscr{N}}),\) where we view \(\pi_{1}(V^{\prime},(\bar{x}_{0},\bar{y}_{0}))\) as a subgroup of \(\pi_{1}(W,(\bar{s}_{0},\bar{t}_{0}))\) (with \((\bar{s}_{0},\bar{t}_{0})=h(\bar{x}_{0},\bar{y}_{0})\)).
Using a base point \((\bar{x}_{0},\bar{y}_{0})\) which is sufficiently close to the diagonal and by considering the punctured line \(L\) through \((x_{0},y_{0})\) and \((y_{0},x_{0})\) one verifies the following:
\[\pi_{1}(V^{\prime}(\mathbb{C}),(\bar{x}_{0},\bar{y}_{0}))=\langle\gamma_{i,x},\gamma_{i,y},\gamma\mid i=1,\dots,r-3\rangle,\]
where \(\gamma\) is a path on \(L(\mathbb{C})\) moving counterclockwise around \(L\cap\Delta(\mathbb{A}^{2})(\mathbb{C})=(\frac{x_{0}+y_{0}}{2},\frac{x_{0}+y_{ 0}}{2})\) from \((\bar{x}_{0},\bar{y}_{0})\) to \((\bar{y}_{0},\bar{x}_{0})\) and back.
The image \(\tau(L)\) is the line parallel to the \(t\)-axis going through \((x_{0}+y_{0},0).\) Let \(\widetilde{\gamma}\) denote a simple loop in \(\tau(L)\) around \(\overline{\tau(L)}\cap C,\) represented by the non-closed half-twist in \(L\) moving counterclockwise from \((x_{0},y_{0})\) to \((y_{0},x_{0})\) around \((\frac{x_{0}+y_{0}}{2},\frac{x_{0}+y_{0}}{2}),\) so that \(\widetilde{\gamma}^{2}=\gamma.\) We can view \(\pi_{1}(V^{\prime})\) as a subgroup of \(\pi_{1}(W)\) with \(\gamma_{i,x}\) identified with a simple loop around the line \(L_{x_{i}}\) \((i=1,\dots,r-1).\) Then the fundamental group of \(W\) is generated by \(\pi_{1}(V^{\prime}),\) viewed as a subgroup of \(\pi_{1}(W),\) together with \(\widetilde{\gamma}.\) By construction,
\[\gamma_{i,x}^{\widetilde{\gamma}}=\gamma_{i,y}\quad i=1,\dots,r-1,\]
and vice versa.
By the last remark we have
\[\rho_{\mathscr{E}}(\gamma_{i,x})=\operatorname{Ind}_{\pi_{1}(V^{\prime})}^{\pi_{1}(W)}(\rho_{\mathscr{N}})(\gamma_{i,x})=\rho_{\mathscr{N}}(\gamma_{i,x})\oplus\rho_{\mathscr{N}}(\gamma_{i,x}^{\widetilde{\gamma}})=\rho_{\mathscr{N}}(\gamma_{i,x})\oplus\rho_{\mathscr{N}}(\gamma_{i,y}). \tag{2.6}\]
With Eq. (2.4) we obtain explicitly
\[\rho_{\mathscr{E}}(\gamma_{i,x})=\operatorname{diag}(\lambda_{i},\lambda_{i}^ {-1})\quad i=1,\dots,r-3, \tag{2.7}\]
with \(\lambda_{i}=\lambda_{i+\varphi(m)}=\zeta_{m}^{d_{i}}\,(i=1,\dots,\varphi(m))\) and \(\lambda_{i}=-1\) for \(i=2\varphi(m)+1,\dots,r-3,\) and also we obtain
\[\rho_{\mathscr{E}}(\gamma_{i,x})=\operatorname{diag}(-1,1)\quad i=r-2,r-1. \tag{2.8}\]
Since \(\gamma_{i,x}^{\widetilde{\gamma}}=\gamma_{i,y}\) we conclude from Eq. (2.6) that
\[\rho_{\mathscr{E}}(\widetilde{\gamma})=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right). \tag{2.9}\]
Let \(\overline{Z}\) be the connecting line in \(\mathbb{A}^{2}_{s,t}\) through
\[z_{r}:=C\cap L_{x_{r-1}}\]
and through
\[z_{r-2}:=L_{x_{r-2}}\cap(\text{$t$-axis})=(0,-x_{r-2}^{2}).\]
Let \(z_{r-1}\) denote the second intersection point of \(\overline{Z}\) with \(C.\) (Here we have used the condition in (2.1) to ensure that \(\overline{Z}\) is not tangent to \(C.\)) Let further
\[Z=\overline{Z}\cap W\simeq\mathbb{A}^{1}\setminus\{z_{1},\ldots,z_{r}\}\]
with
\[z_{i}=\overline{Z}\cap L_{x_{i}},\,i=1,\ldots,r-3.\]
Let
\[\mathscr{F}_{m,r}:=\mathscr{E}|_{Z},\]
viewed as an object in \(\operatorname{LocSys}(\mathbb{A}^{1}_{\mathbb{Q}}\setminus\{z_{1},\ldots,z_{r}\},\overline{\mathbb{Q}}_{\ell}).\) In case that \(\mathscr{F}_{m,r}\) has trivial local monodromy at \(\infty\) we replace \(\mathscr{F}_{m,r}\) by the tensor product \(\mathscr{F}_{m,r}\otimes\mathscr{L}_{5},\) where \(\mathscr{L}_{5}\) is a rank-one sheaf in \(\operatorname{LocSys}(\mathbb{A}^{1}_{\mathbb{Q}}\setminus\{z_{1},\ldots,z_{r}\},\overline{\mathbb{Q}}_{\ell})\) whose monodromy tuple is the \((r+1)\)-tuple
\[(1,\ldots,1,-1,-1).\]
**2.2.6 Proposition**.: There exist generators \(\gamma_{1},\ldots,\gamma_{r}\) of \(\pi_{1}(\mathbb{A}^{1}(\mathbb{C})\setminus\{z_{1},\ldots,z_{r}\})\) such that the monodromy tuple of \(\mathscr{F}_{m,r}\) with respect to these generators is the tuple \(\mathbf{T}_{m,r}=(T_{1},\ldots,T_{r+1})\in\operatorname{GL}_{2}(\overline{ \mathbb{Q}}_{\ell})^{r+1},\) specified in Section 2.1, assuming conditions a) and b).
**Proof:** It follows from the above description of the local monodromy of \(\mathscr{E}\) that the inertial local monodromy at the point \(z_{i}\) is represented by \(T_{i}\) (\(i=1,\ldots,r+1\)). By [5], Lem. 7.2, the pure braid group acts transitively on the corresponding monodromy tuples, modulo diagonal conjugation with inner automorphisms from \(Q_{m}.\) Hence, by an appropriate braiding, we can assume that there exist generators \(\gamma_{1},\ldots,\gamma_{r}\) of \(\pi_{1}(Z)\) such that the monodromy tuple with respect to these generators coincides with \(\mathbf{T}_{m,r}.\)\(\Box\)
**2.2.7 Remark**.: For any etale Galois cover \(f:X\to\mathbb{A}^{1}_{\mathbb{Q}}\setminus\mathbf{x},\) by generic smoothness there exists a natural number \(N\) and an etale Galois cover

\[f_{R}:X_{R}\to\mathbb{A}^{1}_{R}\setminus\mathbf{x}_{R}=\operatorname{Spec}\left(R[x][\tfrac{1}{(x-x_{1})\cdots(x-x_{r})}]\right)\quad(R=\mathbb{Z}[1/N])\]
such that \(f\) is the base change of \(f_{R}\) induced by the inclusion \(R\subseteq\mathbb{Q}\) and such that the divisor \(D\) associated to \((x-x_{1})\cdots(x-x_{r})\) is etale over the spectrum of \(R.\)
Hence, the above sheaves \(\mathscr{F}_{m,r}\) extend to smooth sheaves on \(\mathbb{A}^{1}_{R}\setminus\{z_{1},\ldots,z_{r}\},\) for \(R=\mathbb{Z}[1/(N\cdot\ell)]\) with \(N\) large enough, denoted by the same symbols.
With \(D=\{z_{1},\ldots,z_{r}\}\cup\{\infty\}\) and \(j:\mathbb{A}^{1}_{R}\setminus D\hookrightarrow\mathbb{A}^{1}_{R}\) the inclusion, one sees that \(j_{*}\mathscr{F}_{m,r}[1]\) is an object in \(\mathscr{T}(\mathbb{A}^{1}_{R},\overline{\mathbb{Q}}_{\ell})_{R,D}\) in the sense of Def. 4.0.2 of the Appendix to this article.
Recall that the only components of the monodromy tuple \(\mathbf{T}_{m,r}\) with nontrivial invariants are the matrices \(T_{r-2}\) and \(T_{r-1},\) having a one-dimensional \(1\)-eigenspace. The operation of Frobenius elements on these invariants is as follows:
**2.2.8 Proposition**.: Let \(\mathscr{F}_{m,r}\in\operatorname{LocSys}(\mathbb{A}^{1}_{R}\setminus\{z_{1},\ldots,z_{r}\},\overline{\mathbb{Q}}_{\ell})\) where \(R\) is as in Rem. 2.2.7. Let \(x,x^{\prime}\in\mathbb{A}^{1}_{R}(\mathbb{F}_{q})\) be \(\mathbb{F}_{q}\)-points lying over \(z_{r-1}\) and \(z_{r-2}\) (resp.), where the characteristic of \(\mathbb{F}_{q}\) is also assumed to be \(\neq\ell.\) Then the elements \(\det(\operatorname{Frob}_{x},j_{*}\mathscr{F}_{\overline{x}})\) and \(\det(\operatorname{Frob}_{x^{\prime}},j_{*}\mathscr{F}_{\overline{x}^{\prime}})\) lie in \(\{\pm 1\}.\)
**Proof:** To prove the claim for \(\det({\rm Frob}_{x},j_{*}{\mathscr{F}}_{\overline{x}})\) it suffices to show that, with \(h^{-1}(x)=(z,z)\), the stalk \(j_{*}{\mathscr{F}}_{\overline{x}}\simeq{\mathscr{N}}_{(\overline{z},\overline{z})}\) is an at most quadratic \({\rm Frob}_{(z,z)}\)-module (here we may neglect the possible tensor product with \({\mathscr{L}}_{5}\)).
By Eq. (2.3),
\[\rho_{\mathscr{N}}({\rm Frob}_{(z,z)})=\rho_{{\mathscr{L}}_{2}}({\rm Frob}_{z} )\rho_{{\mathscr{L}}_{3}}({\rm Frob}_{z})\rho_{{\mathscr{L}}_{4}}({\rm Frob}_{ z})\cdot\rho_{{\mathscr{L}}_{2}}^{-1}({\rm Frob}_{z})\rho_{{\mathscr{L}}_{3}}^{-1}({ \rm Frob}_{z})=\rho_{{\mathscr{L}}_{4}}({\rm Frob}_{z})=\pm 1,\]
as claimed.
The stalk \(j_{*}{\mathscr{F}}_{\overline{x}^{\prime}}\) is isomorphic to \({\mathscr{N}}_{(-\overline{t}_{0},\overline{t}_{0})}\) for some \(t_{0}\in{\mathbb{A}}_{R}^{1}(\mathbb{F}_{q}).\) Hence by Eq. (2.3),
\[\rho_{\mathscr{N}}({\rm Frob}_{(-t_{0},t_{0})}) = \rho_{{\mathscr{L}}_{2}}({\rm Frob}_{-t_{0}})\rho_{{\mathscr{L}}_ {3}}({\rm Frob}_{-t_{0}})\rho_{{\mathscr{L}}_{4}}({\rm Frob}_{-t_{0}})\rho_{{ \mathscr{L}}_{2}}^{-1}({\rm Frob}_{t_{0}})\rho_{{\mathscr{L}}_{3}}^{-1}({\rm Frob }_{t_{0}})\] \[= \rho_{{\mathscr{L}}_{2}}({\rm Frob}_{t_{0}})\rho_{{\mathscr{L}}_ {3}}({\rm Frob}_{-t_{0}})\rho_{{\mathscr{L}}_{4}}({\rm Frob}_{-t_{0}})\rho_{{ \mathscr{L}}_{2}}^{-1}({\rm Frob}_{t_{0}})\rho_{{\mathscr{L}}_{3}}^{-1}({\rm Frob }_{t_{0}})\] \[= \rho_{{\mathscr{L}}_{3}}({\rm Frob}_{-t_{0}})\rho_{{\mathscr{L}}_ {4}}({\rm Frob}_{-t_{0}})\rho_{{\mathscr{L}}_{3}}^{-1}({\rm Frob}_{t_{0}})\] \[= \pm 1,\]
where the second equality follows from the equality
\[\rho_{{\mathscr{L}}_{2}}({\rm Frob}_{-t_{0}})=\rho_{{\mathscr{L}}_{2}}({\rm Frob }_{t_{0}})\]
holding by the pullback construction of \({\mathscr{L}}_{2}\), cf. Rem. 2.2.2. This proves the claim for \({\rm Frob}_{x^{\prime}}.\)\(\Box\)
## 3 Galois realizations of special linear groups
### Construction of the underlying sheaves via middle convolution
Let \({\mathscr{F}}={\mathscr{F}}_{m,r}\) be as in Rem. 2.2.7. It follows from the existence of suitable quadratic covers of \({\mathbb{A}}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\}\) (possibly by enlarging \(R\)) that there exist smooth \(\overline{\mathbb{Q}}_{\ell}\)-sheaves \({\mathscr{N}}_{1},\cdots,{\mathscr{N}}_{5}\) on \({\mathbb{A}}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\}\) whose monodromy tuples are \(r+1\)-tuples of the form (resp.)
\[{\bf T}_{{\mathscr{N}}_{1}}=(1,\ldots,1,1,-1,1,-1,1),\quad{\bf T}_{{\mathscr{N}}_{2}}={\bf T}_{{\mathscr{N}}_{4}}=(1,\ldots,1,1,-1,1,-1),\]
\[{\bf T}_{{\mathscr{N}}_{3}}=(1,\ldots,1,1,1,-1,-1,1,1,1),\quad{\bf T}_{{ \mathscr{N}}_{5}}=(1,\ldots,1,-1,1,1,-1,1).\]
Let \(-{\bf 1}:\pi_{1}(\mathbb{G}_{m,R})\to\overline{\mathbb{Q}}_{\ell}^{\ \times}\) be the quadratic character associated to the etale cover \(\mathbb{G}_{m,R}\to\mathbb{G}_{m,R},\ x\mapsto x^{2},\) and to the inclusion of \({\rm Aut}(f)\simeq\mu_{2}\) into \(\overline{\mathbb{Q}}_{\ell}^{\ \times}.\) The latter data define a smooth sheaf \({\mathscr{L}}_{-{\bf 1}}\) on \(\mathbb{G}_{m,R},\) which will be used in the following middle convolution steps (cf. Rem. 4.0.3 of the Appendix). In the following, we use the convolution \({\rm MC}_{\chi}\) of smooth sheaves as defined in Def. 4.0.5 below.
We use the following notation: an expression like \((i,-i,J(2)^{2r-6},1),\) occurring in Prop. 3.1.1 below, denotes a matrix in Jordan canonical form having three Jordan blocks of length \(1\), one for each of the eigenvalues \(i,\,-i,\,1\) (resp.), and having \(2r-6\) Jordan blocks of length \(2\) with eigenvalue \(1\), etc.

The next result is an easy exercise using the numerology of the middle convolution given in [9], Cor. 3.3.6 (cf. [6], Prop. 1.2.1):
**3.1.1 Proposition**.: Let \(m\in\mathbb{N}_{>2}\) be even, let \(r\in\mathbb{N}\) with \(2\varphi(m)\leq r-5\), and let \(\mathscr{F}_{m,r}\) be the smooth sheaf on \(\mathbb{A}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\}\) as in Rem. 2.2.7. Then the following holds:
1. The smooth \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \[\mathscr{G}_{1,m,r}:=\mathscr{N}_{2}\otimes\operatorname{MC}_{-\mathbf{1}}( \mathscr{N}_{1}\otimes\operatorname{MC}_{-\mathbf{1}}(\mathscr{F}_{m,r}))\] has rank \(n_{1}=4r-9\). The Jordan form of the \(i\)-th entry \(T_{i}\) of the monodromy tuple \(\mathbf{T}_{\mathscr{G}_{1,m,r}}\) is as follows (resp): \[(\lambda_{i},\lambda_{i}^{-1},1^{4r-11}) i=1,\ldots,r-3,\] \[(J(2)^{2r-6},J(3)), i=r-2,\] \[(1,-1^{4r-10}), i=r-1,\] \[(i,-i,J(2)^{2r-6},1), i=r,\] \[(1,\ldots,1), i=r+1.\]
2. The smooth \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \[\mathscr{G}_{2,m,r}:=\mathscr{N}_{4}\otimes\operatorname{MC}_{-1}(\mathscr{N}_ {3}\otimes\operatorname{MC}_{-1}(\mathscr{F}_{m,r}\otimes\mathscr{N}_{5}))\] has rank \(n_{2}=4r-11\). The Jordan form of the \(i\)-th entry \(T_{i}\) of the monodromy tuple \(\mathbf{T}_{\mathscr{G}_{2,m,r}}\) is as follows (resp.): \[(\lambda_{i},\lambda_{i}^{-1},1^{4r-13}) i=1,\ldots,r-4,\] \[(\operatorname{J}(2)^{2r-6},1) i=r-3,\] \[(\operatorname{J}(3),\operatorname{J}(2)^{2r-8},1^{2}) i=r-2,\] \[(1,-1^{4r-12}) i=r-1,\] \[(i,-i,1^{4r-13}) i=r,\] \[(1,\ldots,1) i=r+1\,.\]
3. The smooth \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \[\mathscr{G}_{3,m,r}:=\operatorname{MC}_{-1}(\mathscr{N}_{5}\otimes \operatorname{MC}_{-1}(\mathscr{F}_{m,r}))\] has rank \(n_{3}=4r-10\). The Jordan form of the \(i\)-th entry \(T_{i}\) of the monodromy tuple \(\mathbf{T}_{\mathscr{G}_{3,m,r}}\) is as follows (resp.): \[(\lambda_{i},\lambda_{i}^{-1},1^{4r-12}) i=1,\ldots,r-4,\] \[(\operatorname{J}(3)^{2},\operatorname{J}(2)^{2r-8}) i=r-3,\] \[(-1,1^{4r-11}) i=r-2,r-1\] \[(i,-i,\operatorname{J}(2)^{2r-6}) i=r,\] \[(-1,\ldots,-1) i=r+1\,.\]
4. The smooth \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \[\mathscr{G}_{4,m,r}:=\operatorname{MC}_{-1}(\mathscr{N}_{5}\otimes \operatorname{MC}_{-1}(\mathscr{F}_{m,r}\otimes\mathscr{N}_{5}))\]
has rank \(n_{4}=4r-12.\) The Jordan form of the \(i\)-th entry \(T_{i}\) of the monodromy tuple \(\mathbf{T}_{\mathscr{G}_{4,m,r}}\) is as follows (resp.): \[(\lambda_{i},\lambda_{i}^{-1},1^{4r-14}), \quad i=1,\ldots,r-4,\] \[(\mathrm{J}(2)^{2r-6}), \quad i=r-3,\] \[(-1,1^{4r-13}), \quad i=r-2,r-1,\] \[(i,-i,\mathrm{J}(2)^{2r-8},1,1), \quad i=r,\] \[(-1,\ldots,-1) \quad i=r+1\,.\]
### Galois realizations of finite and profinite special linear groups
**3.2.1 Definition**.: Let \(H\) be a profinite group. Then \(H\)_occurs regularly as Galois group over \(\mathbb{Q}(t)\)_ if there exists a continuous surjection \(\kappa:\mbox{Gal}(\overline{\mathbb{Q}(t)}/\mathbb{Q}(t))\to H\) such that the restriction of \(\kappa\) to \(\mbox{Gal}(\overline{\mathbb{Q}(t)}/\overline{\mathbb{Q}}(t))\) is surjective.
For an odd prime \(\ell\), let \(q=\ell^{k}\,(k\in\mathbb{N}_{>0}).\) Write \(\mathscr{O}_{q}\) for the valuation ring of the completion of \(\mathbb{Q}(\zeta_{q-1})\) with respect to a valuation \(\lambda\) lying over \(\ell.\)
**3.2.2 Theorem**.: Let \(\ell\) be an odd prime number and let \(q=\ell^{k}\,(k\in\mathbb{N}_{>0}).\) If \(q>3\) then the special linear group \(\mbox{SL}_{n}(\mathscr{O}_{q})\) occurs regularly as Galois group over \(\mathbb{Q}(t)\) if \(n>8\varphi(q-1)+11.\)
**Proof:** By construction of the middle convolution for smooth sheaves (Def. 4.0.5), each \(\overline{\mathbb{Q}}_{\ell}\)-sheaf \(\mathscr{G}_{i,q-1,r}\,(i=1,\ldots,4)\) of Prop. 3.1.1 is of the form \(\widetilde{\mathscr{G}}_{i,q-1,r}(-1)\otimes_{\mathscr{O}_{q}}\overline{ \mathbb{Q}}_{\ell},\) where \(\widetilde{\mathscr{G}}_{i,q-1,r}\) is a smooth \(\mathscr{O}_{q}\)-sheaf (note the Tate twist by \(-1\)). Since \(\mathscr{G}_{i,q-1,r}\) is pure of weight \(2\) (since in each middle convolution step, and on each \(\mathbb{F}_{p}\)-fibre, the middle convolution is a higher direct image of an intermediate extension which is pure of weight \(0\), resp. \(1\), cf. [2], [1]), the sheaf \(\widetilde{\mathscr{G}}_{i,q-1,r}\) is pure of weight \(0.\)
By Prop. 3.1.1, the rank of \(\mathscr{G}_{i,q-1,r}\,(i=1,\ldots,4)\) is \(n_{1}=4r-9,n_{2}=4r-11,n_{3}=4r-10,n_{4}=4r-12\) (resp.). We now divide the proof into the dimensions \(n_{1},n_{2},n_{3},n_{4},\) beginning with \(n_{1}:\)
Since \(\mathscr{G}_{1,q-1,r}\) is geometrically irreducible, the sheaf \(\widetilde{\mathscr{G}}_{1,q-1,r}\) is also irreducible and the monodromy tuple of \(\widetilde{\mathscr{G}}_{1,q-1,r}\) generates an (absolutely) irreducible subgroup of \(\mbox{GL}_{n_{1}}(\mathscr{O}_{q}).\) It follows from Prop. 3.1.1 (i) that the components of the monodromy tuple of \(\widetilde{\mathscr{G}}_{1,q-1,r}\) are contained in the special linear group. Hence the determinant \(\det(\widetilde{\mathscr{G}}_{1,q-1,r})=\Lambda^{n_{1}}(\widetilde{\mathscr{G}}_{1,q-1,r})\) is a geometrically constant sheaf of rank \(1\) on \(\mathbb{A}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\}\) (with \(R=\mathbb{Z}[\frac{1}{N}]\) for a large enough \(N\in\mathbb{N}\) as in Rem. 2.2.7).
It follows from Prop. 2.2.8 and from the absence of nontrivial invariants of the other local monodromies of \(\mathscr{F}_{q-1,r}\) that on each \(\mathbb{F}_{p}\)-fibre \(\mathbb{A}_{\mathbb{F}_{p}}^{1}\setminus\bar{\mathbf{z}}\) (where \(p>N,p\neq\ell,\) and where \(\bar{\mathbf{z}}\) denotes the reduction of the omitted divisor \(\mathbf{z}:=\{z_{1},\ldots,z_{r}\}\)), the conditions of [3], Thm. 4.2.4, are fulfilled for \(F:=\mathscr{F}_{q-1,r}|_{\mathbb{A}_{\mathbb{F}_{p}}^{1}\setminus\bar{\mathbf{z}}}:\)
1. The local geometric monodromy of \(F\) at \(\infty\) is scalar, given by the quadratic character \(-\mathbf{1}:k^{\times}\to\overline{\mathbb{Q}}_{\ell}^{\times},\) but \(F\) is not geometrically isomorphic to \(\mathscr{L}_{-\mathbf{1}}.\)
2. The \(I_{s}^{t}\)-module \(\mbox{Gr}^{M}(F_{\bar{\eta}_{s}})\) (= the semisimplification of the tame geometric inertia at \(s\), see [3], Section 3.2) is self-dual for all \(s\) in \(\bar{\mathbf{z}}.\)
3. For any \(x\in|\mathbb{A}_{\mathbb{F}_{p}}^{1}|\) there exists an integer \(m\) such that \(\det(\mbox{Frob}_{x},(j_{*}F)_{\overline{x}})=\pm q^{m}\) (here: \(m=0\)).
It follows from [3], Thm. 4.2.4, that these conditions again hold for \(\operatorname{MC}_{\mathbf{-1}}(\mathscr{F}_{m,r})|_{\mathbb{A}^{1}_{\mathbb{F}_{ p}}\setminus\bar{\mathbf{z}}}.\) This implies that these conditions are also valid for \(\mathscr{N}_{1}\otimes\operatorname{MC}_{\mathbf{-1}}(\mathscr{F}_{m,r})|_{ \mathbb{A}^{1}_{\mathbb{F}_{p}}\setminus\bar{\mathbf{z}}},\) since after tensoring with \(\mathscr{N}_{1}\) there are no inertial invariants at \(\bar{z}_{r-2}\) and \(\bar{z}_{r}.\) Following the construction process of \(\mathscr{G}_{1,q-1,r}\) via middle convolution (as in Prop. 3.1.1 (i)), applying \(\operatorname{MC}_{\mathbf{-1}}\) and [3], Thm. 4.2.4 again (also noting that the tensor product with \(\mathscr{N}_{2}\) does not change the property of having at most quadratic determinant up to Tate twist and also noting that the underlying sheaf \(\widetilde{\mathscr{G}}_{1,q-1,r}\) has weight \(0\) on each \(\mathbb{F}_{p}\)-fibre), for each closed point \(x\in|\mathbb{A}^{1}_{\mathbb{F}_{p}}\setminus\bar{\mathbf{z}}|\) one has
\[\det(\operatorname{Frob}_{x},\widetilde{\mathscr{G}}_{1,q-1,r}|_{\mathbb{A}^{ 1}_{\mathbb{F}_{p}}\setminus\bar{\mathbf{z}}})=\pm 1.\]
Chebotarev's density theorem therefore implies that the determinant sheaf \(\det(\widetilde{\mathscr{G}}_{1,q-1,r})\) is the geometrically constant rank-one sheaf associated to an at most quadratic character
\[\pi_{1}(\operatorname{Spec}\left(R\right),\operatorname{Spec}\left(\overline {\mathbb{Q}}\right))\to\mathscr{O}_{q}^{\times}.\]
Since the dimension \(n_{1}=4r-9\) is odd, the full arithmetic monodromy group of the sheaf
\[\mathscr{H}_{1,q-1,r}:=\widetilde{\mathscr{G}}_{1,q-1,r}\otimes\det( \widetilde{\mathscr{G}}_{1,q-1,r})\]
is hence contained in the group \(\operatorname{SL}_{n_{1}}(\mathscr{O}_{q}).\)
Let \(H^{\operatorname{geo}}=\operatorname{Im}(\rho^{\operatorname{geo}}_{\mathscr{H}_{1,q-1,r}})\leq\operatorname{SL}_{n_{1}}(\mathscr{O}_{q})\) be the geometric monodromy group of \(\mathscr{H}_{1,q-1,r}\) and let \(\overline{H}^{\operatorname{geo}}\leq\operatorname{SL}_{n_{1}}(\mathbb{F}_{q})\) denote its image under the residual map on the coefficients (well defined up to semisimplification). The middle convolution, as defined in [3], Def. 4.3.5, makes sense also over the coefficient field \(\mathbb{F}_{q}=\mathscr{O}_{q}/\lambda\) and the basic properties (like preservation of irreducibility and the effect on local monodromy) hold also in this case (for the irreducibility one uses the same arguments as in [3], Rem. 2.1.4, using \(\mod\,\lambda\)-coefficients; the effect of \(\operatorname{MC}_{\chi}\) on the semisimplification of the \(\mod\,\lambda\)-local monodromy used below follows from the compatibility of the cohomological construction of \(\operatorname{MC}_{\chi}\) with reduction \(\mod\,\lambda\)). Hence the group \(\overline{H}^{\operatorname{geo}}\) is an absolutely irreducible subgroup of \(\operatorname{SL}_{n_{1}}(\mathbb{F}_{q}),\) containing the negative of a reflection. Moreover, by [4], Prop. 6.6, \(\overline{H}^{\operatorname{geo}}\) is primitive.
Hence, by the results of Wagner, Serezkin and Zalesskii (as collected in [12], Thm. 2.4), \(\overline{H}^{\operatorname{geo}}\) contains a subgroup of type \(\operatorname{SU}_{n_{1}}(\mathbb{F}_{q^{\prime}}),\operatorname{SL}_{n_{1}}(\mathbb{F}_{q^{\prime}})\) or the derived group \(\Omega_{n_{1}}(\mathbb{F}_{q^{\prime}})\) of \(\operatorname{SO}_{n_{1}}(\mathbb{F}_{q^{\prime}})\) (with \(\mathbb{F}_{q^{\prime}}\) a subfield of \(\mathbb{F}_{q}\)) as a normal subgroup. Note that the underlying dimension \(n_{1}\) is \(>8\) since \(q>3,\) hence the exceptional cases in the list of Wagner, Serezkin and Zalesskii do not occur in our situation. Note also that since the middle convolution \(\operatorname{MC}_{-\mathbf{1}}\) preserves autoduality up to a Tate twist by Verdier duality, we can exclude the groups \(\Omega_{n_{1}}(\mathbb{F}_{q^{\prime}})\), since the group \(Q_{m},\) viewed as a subgroup of \(\operatorname{GL}_{2}(\mathbb{F}_{q}),\) does not respect an orthogonal or a symplectic form. We can exclude the unitary groups \(\operatorname{SU}_{n_{1}}(\mathbb{F}_{q^{\prime}})\) because they do not contain a bireflection of type
\[T_{1}\mod\lambda=\operatorname{diag}(\zeta_{q-1},\zeta_{q-1}^{-1},1,\ldots,1) \mod\,\lambda.\]
Moreover, the Frobenius map \(\operatorname{Frob}_{q^{\prime}},\) for \(\mathbb{F}_{q^{\prime}}\) a proper subfield of \(\mathbb{F}_{q},\) does not stabilize the conjugacy class of the bireflection \(T_{1}\mod\lambda.\) Therefore we have \(q^{\prime}=q\) and consequently \(\overline{H}^{\operatorname{geo}}=\operatorname{SL}_{n_{1}}(\mathbb{F}_{q}).\) Since the residual map \(\operatorname{SL}_{n_{1}}(\mathscr{O}_{q})\to\operatorname{SL}_{n_{1}}(\mathbb{F}_{q})\) has the Frattini property (see [16], Cor. A), we have
\[H^{\operatorname{geo}}=\operatorname{Im}(\rho^{\operatorname{geo}}_{\mathscr{H} _{1,q-1,r}})=\operatorname{SL}_{n_{1}}(\mathscr{O}_{q})=\operatorname{Im}( \rho_{\mathscr{H}_{1,q-1,r}}),\]
where the last equality follows trivially from the inclusion of \({\rm Im}(\rho_{\mathscr{H}_{1,q-1,r}})\) into \({\rm SL}_{n_{1}}(\mathscr{O}_{q})\). This proves the claim for \(n_{1}=4r-9\) since the absolute Galois group \({\rm Gal}(\overline{\mathbb{Q}(t)}/\mathbb{Q}(t))\) surjects onto the etale fundamental group \(\pi_{1}(\mathbb{A}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\})\) appearing in the above monodromy representations.
The claim for \(n_{2}\) follows from exactly the same arguments using the sheaf \(\mathscr{G}_{2,q-1,r}\).
The claim for \(n_{3},n_{4}\) uses the sheaves \(\mathscr{G}_{3,q-1,r}\) and \(\mathscr{G}_{4,q-1,r}\) and the same arguments to reduce to the case where the geometric and arithmetic monodromy groups of the analogs \(\widetilde{\mathscr{G}}_{i,q-1,r}\in{\rm LocSys}(\mathbb{A}_{R}^{1}\setminus\{z_{1},\ldots,z_{r}\},\mathscr{O}_{q})\,(i=3,4)\) of \(\widetilde{\mathscr{G}}_{1,q-1,r}\) are equal to the group
\[{\rm SL}_{n_{i}}^{\pm}(\mathscr{O}_{q})=\{A\in{\rm GL}_{n_{i}}(\mathscr{O}_{ q})\mid\det(A)=\pm 1\}\quad i=3,4.\]
Note that \({\rm SL}_{n_{i}}^{\pm}(\mathscr{O}_{q})\) contains the special linear group \({\rm SL}_{n_{i}}(\mathscr{O}_{q})\) as a subgroup of index \(2\) and that the only local monodromy matrices \(T_{i}\) which do not lie in \({\rm SL}_{n_{i}}(\mathscr{O}_{q})\) are the elements \(T_{r-2}\) and \(T_{r-1}\), cf. Prop. 3.1.1. It follows therefore from the proof of [12], Thm. I.5.3, applied successively to the tower of coverings belonging to \(\rho_{\widetilde{\mathscr{G}}_{i,q-1,r}}\otimes_{\mathscr{O}_{q}}(\mathscr{O}_{q}/\lambda^{k})\), that the pullback \(\widehat{\mathscr{G}}_{i,q-1,r}\) of \(\widetilde{\mathscr{G}}_{i,q-1,r}\) to the quadratic cover
\[\mathbb{A}^{1}\setminus{\bf x}\to\mathbb{A}_{R}^{1}\setminus\{z_{1},\ldots,z_{ r}\},\quad x\mapsto(x-z_{r-1})(x-z_{r-2}),\]
has geometric and arithmetic monodromy group equal to \({\rm SL}_{n_{i}}(\mathscr{O}_{q})\), proving the claim for \(n_{3}\) and \(n_{4}\). \(\Box\)
**3.2.3 Corollary.**: Let \(\mathbb{F}_{q}\) be a finite field of odd order \(q>3.\) Then the special linear group \({\rm SL}_{n}(\mathbb{F}_{q})\) occurs regularly as Galois group over \(\mathbb{Q}(t)\) if \(n>8\varphi(q-1)+11\). \(\Box\)
## 4 Appendix: Arithmetic middle convolution
It is the aim of this section, which is basically a reformulation of [9], Chap. 4, to define an arithmetic version of the middle convolution which allows an application of the results of [3] to our situation, where the omitted singularities are not contained in the ground field \(\mathbb{Q}\).
**4.0.1 Proposition.**: Let \(S\) be an irreducible noetherian scheme, \(X/S\) smooth, and \(D\) in \(X\) a smooth \(S\)-divisor. For \(F\) smooth on \(X\setminus D\) and tame along \(D,\) and for \(j:X\setminus D\to X\) and \(i:D\to X\) denoting the inclusions, the following holds:
* formation of \(j_{*}F\) and of \(Rj_{*}F\) on \(X\) commutes with arbitrary change of base on \(S,\)
* the sheaf \(i^{*}j_{*}F\) on \(D\) is smooth, and formation of \(i^{*}j_{*}F\) on \(D\) commutes with arbitrary change of base on \(S.\)
**Proof:** [9], Lem. 4.3.8. \(\Box\)
Recall from [8] that a scheme is called _good_ if it admits a map of finite type to a base scheme \(S={\rm Spec}(R)\) which is regular of dimension at most one. For good schemes \(X\) and \(\ell\) a fixed prime number, invertible on \(X,\) one has the triangulated category \({\rm D}^{b}(X,\overline{\mathbb{Q}}_{\ell})\), which admits the full Grothendieck formalism of the six operations ([2], [8]).
Let \(R\) be a normal noetherian integral domain in which our fixed prime \(\ell\) is invertible, and set \(S={\rm Spec}(R)\). Let \(\mathbb{A}_{R}^{1}={\rm Spec}\,(R[x])\) and let \(D\) denote a smooth \(S\)-divisor defined by the vanishing of a separable monic polynomial \(D(x)\in R[x]\), together with the divisor at \(\infty\).
One says that an object \(K\in{\rm D}^{b}_{c}(\mathbb{A}_{R}^{1},\overline{\mathbb{Q}}_{\ell})\) is _adapted to the stratification_\((\mathbb{A}^{1}\setminus D,D)\) if each of its cohomology sheaves is smooth when restricted either to \(\mathbb{A}_{R}^{1}\setminus D\) or to \(D\) ([9], (4.1.2), [8], (3.0)).
**4.0.2 Definition**.: Let \(\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R,D}\) denote the category formed by the objects \(K\) in \(\mathrm{D}^{b}_{c}(\mathbb{A}^{1}_{R},\overline{\mathbb{Q}}_{\ell})\) of the form \(j_{*}F[1]\), where \(j:\mathbb{A}^{1}_{R}\setminus D\hookrightarrow\mathbb{A}^{1}_{R}\) denotes the inclusion and \(F\) is smooth on \(\mathbb{A}^{1}_{R}\setminus D\), such that the following holds:
1. For \(k\) an algebraically closed field and \(R\to k\) a ring homomorphism the restriction \(F|_{\mathbb{A}^{1}_{k}\setminus D_{k}}\) is smooth, irreducible and nontrivial.
2. The sheaf \(F|_{\mathbb{A}^{1}_{k}\setminus D_{k}}\) has at least three non-smooth points in \(D_{k}\) (including \(\infty\)).
Let \(\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R}\) denote the category of sheaves \(F\) on \(\mathbb{A}^{1}_{R}\) for which there exists a \(D\) such that \(F\in\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R,D}\).
By the previous result, each \(K\in\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R,D}\) is adapted to the stratification \((\mathbb{A}^{1}\setminus D,D)\). Moreover, the restriction of \(K\in\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R}\) to each geometric fiber \(\mathbb{A}^{1}_{k}\) is an intermediate extension of an irreducible smooth sheaf and is hence perverse (cf. [9], Chap. 4, and [3], Section 1.2).
**4.0.3 Remark**.: Let \(N\) be a natural number \(>1\) and let \(R\) be as above such that \(R\) contains a primitive \(N\)-th root of unity and such that \(N\) is invertible in \(R\). Consider the etale cover \(f:\mathbb{G}_{m,R}\to\mathbb{G}_{m,R},\,x\mapsto x^{N}\), with automorphism group \(\mu_{N}\) and let \(\chi:\mu_{N}\to\overline{\mathbb{Q}}_{\ell}^{\times}\) be a character. The latter data define a smooth sheaf \(\mathscr{L}_{\chi}\) on \(\mathbb{G}_{m,R}\), by pushing out the so obtained \(\mu_{N}\)-torsor by \(\chi^{-1}\).
Note that for the natural embedding
\[-\mathbf{1}:\mu_{2}=\{\pm 1\}\hookrightarrow\overline{\mathbb{Q}}_{\ell}^{\times}\]
one obtains in this way a smooth sheaf \(\mathscr{L}_{-\mathbf{1}}\) on \(\mathbb{G}_{m,\mathbb{Z}[1/(N\cdot\ell)]}\) for any even \(N\). Then on each \(\mathbb{F}_{q}\)-fibre (\(q\) prime to \(N\cdot\ell\)), the restriction \(\mathscr{L}_{-\mathbf{1}}|_{\mathbb{G}_{m,\mathbb{F}_{q}}}\) is obtained by the same procedure by first considering \(f_{\mathbb{F}_{q}}:\mathbb{G}_{m,\mathbb{F}_{q}}\to\mathbb{G}_{m,\mathbb{F}_{ q}},\,x\mapsto x^{2}\), with automorphism group \(\mu_{2}\) and by taking the same character \(-\mathbf{1}:\mu_{2}\to\overline{\mathbb{Q}}_{\ell}^{\times}\). By looking at Frobenius traces, the sheaf \(\mathscr{L}_{-\mathbf{1}}|_{\mathbb{G}_{m,\mathbb{F}_{q}}}\) coincides with the usual Kummer sheaf associated to the quadratic character of \(\mathbb{G}_{m}(\mathbb{F}_{q})\), see [3], Section 1.4, and [11].
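For orientation, the Frobenius-trace form of this identification, standard for Kummer sheaves (cf. [3], Section 1.4, and [11]), reads

\[\operatorname{tr}\big(\operatorname{Frob}_{a}\mid(\mathscr{L}_{-\mathbf{1}})_{\bar{a}}\big)=\chi_{2}(a)\in\{\pm 1\},\qquad a\in\mathbb{G}_{m}(\mathbb{F}_{q}),\]

where \(\chi_{2}\) denotes the quadratic character of \(\mathbb{F}_{q}^{\times}\), i.e., \(\chi_{2}(a)=1\) if and only if \(a\) is a square in \(\mathbb{F}_{q}^{\times}\).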
Let \(j:\mathbb{A}^{1}_{R}\times\mathbb{A}^{1}_{R}\hookrightarrow\mathbb{P}^{1}_{R} \times\mathbb{A}^{1}_{R}\) denote the inclusion and let \(\overline{\mathrm{pr}}_{2}:\mathbb{P}^{1}_{R}\times\mathbb{A}^{1}_{R}\to \mathbb{A}^{1}_{R}\) be the second projection.
Following [9], for a nontrivial character \(\chi\) as above, define the _middle convolution_ of \(K\in\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R}\) with \(j^{\prime}_{*}\mathscr{L}_{\chi}[1]\) as follows (where \(j^{\prime}\) denotes the inclusion of \(\mathbb{G}_{m}\) into \(\mathbb{A}^{1}\) and where \(\tau_{\leq k}\) denotes the natural truncation functor), cf. [9] (4.3.2):
\[\mathrm{MC}_{\chi}(K)=R\overline{\mathrm{pr}}_{2*}(\tau_{\leq-2}Rj_{*}(\mathrm{ pr}^{*}_{1}K\boxtimes j^{\prime}_{*}\mathscr{L}_{\chi}(t-x)[1]))=R\overline{ \mathrm{pr}}_{2*}(j_{*}(\mathrm{pr}^{*}_{1}K\boxtimes j^{\prime}_{*}\mathscr{L}_ {\chi}(t-x)[1])), \tag{4.1}\]
where \(\mathscr{L}_{\chi}(t-x)\) denotes the pullback of \(\mathscr{L}_{\chi}\) along the map \(t-x\) (here the second equality holds by construction since, locally at the divisor at \(\infty\), the perverse sheaf \(\mathrm{pr}^{*}_{1}K\boxtimes j^{\prime}_{*}\mathscr{L}_{\chi}(t-x)[1]\) is a sheaf placed in cohomological degree \(-2\)).
**4.0.4 Theorem**.:
1. For \(K\in\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R,D}\), the middle convolution \(\mathrm{MC}_{\chi}(K)\) is again an object of \(\mathscr{T}(\mathbb{A}^{1},\overline{\mathbb{Q}}_{\ell})_{R,D}\).
2. Formation of \({\rm MC}_{\chi}\) commutes with arbitrary change of base. Especially, on each geometric fiber \({\mathbb{A}}^{1}_{k}\), with \(k\) an algebraically closed field, one has \[{\rm MC}_{\chi}(K)|_{{\mathbb{A}}^{1}_{k}}={\rm MC}_{\chi}(K|_{{\mathbb{A}}^{1}_{ k}}),\] cf. [9], Prop. 2.9.2.
**Proof:** The second claim follows from the same arguments as in [9], (4.3.2)-(4.3.6). The first claim follows using the same arguments as in the proof of [9], Thm. 4.3.11. \(\Box\)
In view of the previous result, one can define \({\rm MC}_{\chi}\) also for constructible and smooth sheaves:
**4.0.5 Definition**.: Let \(R,D,\) and \(\chi\) be as above.
* Let \(G\) be a constructible \(\overline{{\mathbb{Q}}}_{\ell}\)-sheaf on \({\mathbb{A}}^{1}_{R}\) such that \(G[1]\in{\mathscr{T}}({\mathbb{A}}^{1},\overline{{\mathbb{Q}}}_{\ell})_{R,D}.\) Then the _middle convolution_ of \(G\) with respect to \(\chi\) is defined as the constructible sheaf (4.2) \[{\rm MC}_{\chi}(G)={\rm MC}_{\chi}(G[1])[-1]={\mathscr{H}}^{-1}({\rm MC}_{ \chi}(G[1]))\,.\] For \(R=k\) an algebraically closed field this is Katz' middle convolution functor \({\rm MC}_{\chi}\), see [9], (5.1.5).
* For \(F\) a smooth sheaf on \({\mathbb{A}}^{1}_{R}\setminus D\) such that \(j_{*}F[1]\in{\mathscr{T}}({\mathbb{A}}^{1},\overline{{\mathbb{Q}}}_{\ell})_{R, D}\) define then \({\rm MC}_{\chi}(F)\) to be the smooth sheaf (4.3) \[{\rm MC}_{\chi}(F)={\rm MC}_{\chi}(j_{*}F)|_{{\mathbb{A}}^{1}_{R}\setminus D}\,.\]
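On a geometric fiber, the sheaf \(\operatorname{MC}_{\chi}(F)\) admits the classical cohomological description going back to Katz; we record it as a hedged restatement (cf. [9], Chap. 5, and [3], Section 1.2). For \(F\) as above, \(t\) a geometric point outside the singularities, and \(U_{t}=\mathbb{A}^{1}_{\bar{k}}\setminus(D_{\bar{k}}\cup\{t\})\):

\[\operatorname{MC}_{\chi}(F)_{\bar{t}}\;\cong\;\operatorname{Im}\Big(H^{1}_{c}\big(U_{t},\,F\otimes\mathscr{L}_{\chi}(t-x)\big)\longrightarrow H^{1}\big(U_{t},\,F\otimes\mathscr{L}_{\chi}(t-x)\big)\Big),\]

i.e., the middle cohomology of the twisted sheaf.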
|
2309.03469 | Fast FixMatch: Faster Semi-Supervised Learning with Curriculum Batch
Size | Advances in Semi-Supervised Learning (SSL) have almost entirely closed the
gap between SSL and Supervised Learning at a fraction of the number of labels.
However, recent performance improvements have often come \textit{at the cost of
significantly increased training computation}. To address this, we propose
Curriculum Batch Size (CBS), \textit{an unlabeled batch size curriculum which
exploits the natural training dynamics of deep neural networks.} A small
unlabeled batch size is used in the beginning of training and is gradually
increased to the end of training. A fixed curriculum is used regardless of
dataset, model or number of epochs, and reduced training computations are
demonstrated on all settings. We apply CBS, strong labeled augmentation,
Curriculum Pseudo Labeling (CPL) \citep{FlexMatch} to FixMatch \citep{FixMatch}
and term the new SSL algorithm Fast FixMatch. We perform an ablation study to
show that strong labeled augmentation and/or CPL do not significantly reduce
training computations, but, in synergy with CBS, they achieve optimal
performance. Fast FixMatch also achieves substantially higher data utilization
compared to previous state-of-the-art. Fast FixMatch achieves between
$2.1\times$ - $3.4\times$ reduced training computations on CIFAR-10 with all
but 40, 250 and 4000 labels removed, compared to vanilla FixMatch, while
attaining the same cited state-of-the-art error rate \citep{FixMatch}. Similar
results are achieved for CIFAR-100, SVHN and STL-10. Finally, Fast FixMatch
achieves between $2.6\times$ - $3.3\times$ reduced training computations in
federated SSL tasks and online/streaming learning SSL tasks, which further
demonstrate the generalizability of Fast FixMatch to different scenarios and
tasks. | John Chen, Chen Dun, Anastasios Kyrillidis | 2023-09-07T03:34:51Z | http://arxiv.org/abs/2309.03469v1 | # Fast FixMatch: Faster Semi-Supervised Learning with Curriculum Batch Size
###### Abstract
Advances in Semi-Supervised Learning (SSL) have almost entirely closed the gap between SSL and Supervised Learning at a fraction of the number of labels. However, recent performance improvements have often come _at the cost of significantly increased training computation_. To address this, we propose Curriculum Batch Size (CBS), _an unlabeled batch size curriculum which exploits the natural training dynamics of deep neural networks_. A small unlabeled batch size is used in the beginning of training and is gradually increased to the end of training. A fixed curriculum is used regardless of dataset, model or number of epochs, and reduced training computations are demonstrated on all settings. We apply CBS, strong labeled augmentation, Curriculum Pseudo Labeling (CPL) (Zhang et al., 2021) to FixMatch (Sohn et al., 2020) and term the new SSL algorithm Fast FixMatch. We perform an ablation study to show that strong labeled augmentation and/or CPL do not significantly reduce training computations, but, in synergy with CBS, they achieve optimal performance. Fast FixMatch also achieves substantially higher data utilization compared to previous state-of-the-art. Fast FixMatch achieves between \(2.1\times\) - \(3.4\times\) reduced training computations on CIFAR-10 with all but 40, 250 and 4000 labels removed, compared to vanilla FixMatch, while attaining the same cited state-of-the-art error rate (Sohn et al., 2020). Similar results are achieved for CIFAR-100, SVHN and STL-10. Finally, Fast FixMatch achieves between \(2.6\times\) - \(3.3\times\) reduced training computations in federated SSL tasks and online/streaming learning SSL tasks, which further demonstrate the generalizability of Fast FixMatch to different scenarios and tasks.
## 1 Introduction
**Background on SSL.** Semi-Supervised Learning (SSL) has shown remarkable results in the past few years and has increasing significance due to the plethora of unlabeled data (Sohn et al., 2020; Zhang et al., 2021; Wang et al., 2021; Zhang et al., 2022). Oftentimes, for Computer Vision, Natural Language Processing and other applications, there exist significant amounts of unlabeled data, either self-generated, open-source, or on the internet (Russakovsky et al., 2014; Zhu et al., 2015; Overwijk et al., 2022). Obtaining labels in a large scale fashion is often difficult, due to monetary, scarcity, privacy or other reasons. As a result, there is a need to continuously improve the top-line performance of SSL algorithms.
FixMatch (Sohn et al., 2020) is a SSL algorithm which combines simplicity with performance. To set up the background for our discussion, consider the simple task of image classification based on the CIFAR-10 dataset. Within each unlabeled data batch, FixMatch applies a pseudo-labeling technique to weakly augmented these unlabeled samples. Then, it uses those labels to train with strongly augmented versions of the same unlabeled samples, based on the cross-entropy loss. FixMatch achieves exceptional results, such as 94.93% accuracy on CIFAR-10 with 250 labels, and 88.61% accuracy with only 40 labels.
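For concreteness, the per-batch rule just described can be sketched in PyTorch-style code as follows; the threshold of 0.95 is the fixed value commonly used by FixMatch (cf. Section 3), while the function and variable names are ours and purely illustrative.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(logits_weak, logits_strong, threshold=0.95):
    """Consistency loss on one unlabeled batch, as in FixMatch:
    pseudo-label from the weak view, train the strong view on it."""
    with torch.no_grad():
        probs = torch.softmax(logits_weak, dim=-1)
        conf, pseudo = probs.max(dim=-1)       # confidence and pseudo-label
        mask = (conf >= threshold).float()     # keep only confident samples
    ce = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (ce * mask).mean()
```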
**Challenges and potential solutions.** Yet, one of the main challenges of FixMatch is the computation required for training. For example, it takes almost \(8\cdot 10^{8}\) forward and backwards passes to reach the 94.93% accuracy on CIFAR-10 with 250 labels. This is _equivalent to performing approximately \(15\cdot 10^{3}\) epochs_ for the CIFAR-10 dataset, while state of the art accuracy for supervised learning on CIFAR-10 is achieved within a few hundreds of epochs.
One solution to this issue is by tackling the _data utilization ratio_. In particular, by maintaining a fixed threshold throughout training, many forward passes of the weakly augmented samples never reach the pre-defined threshold and, therefore, never contribute to training. Previous work, such as FlexMatch (Zhang et al., 2021), attempted to address this issue by introducing a per-class dynamic threshold termed Curriculum Pseudo Label (CPL). However, while CPL increases data utilization and can improve final performance, it suffers from a critical drawback: in early-mid training stages, a low threshold may mislead the model, and thus, it could be the case that _CPL does not reduce total training computations_. Overall, while progress has been made on the data utilization problem, there is still room for improvement for the overarching challenge of reducing training computations.
**Improving the data utilization ratio using CBS.** This work introduces Curriculum Batch Size (CBS). CBS applies a curriculum to the unlabeled batch size, which exploits the natural training dynamics of deep neural networks. A small unlabeled batch size is used at the beginning of training, and is gradually increased towards the end of training. _A fixed curriculum is used regardless of dataset, model or number of epochs, and reduced training computations are demonstrated on all settings._ We apply CBS, strong labeled augmentation, and CPL (Zhang et al., 2021) to FixMatch (Sohn et al., 2020), and term the new SSL algorithm Fast FixMatch. While strong labeled augmentations and/or CPL do not significantly reduce training computations at all times, they have synergy with CBS and together produce the best results.
**Summary of contributions and main observations.** These are as follows:
* We propose the Curriculum Batch Size (CBS) schedule in the SSL setting. CBS introduces no extra hyperparameters to tune, and can be directly applied to FixMatch with no modifications across all tested settings. We apply CBS, strong labeled augmentation, and CPL (Zhang et al., 2021) to FixMatch (Sohn et al., 2020), and term the new SSL algorithm Fast FixMatch.
* We perform ablation studies to show that strong labeled augmentation and/or CPL do not significantly reduce training computations. Adding CBS to strong labeled augmentation and CPL together produces synergy and achieves optimal performance.
* We extend Fast FixMatch to the Federated Self-supervised Learning scenario and the online/streaming learning Self-supervised Learning scenario without introducing any extra parameters or computation cost.
We observe:
* CBS substantially increases the data utilization ratio, which allows more samples to be used effectively. In particular, CBS + CPL improves over CBS or CPL as much as CBS or CPL does over vanilla FixMatch.
* Fast FixMatch achieves between \(2.1\times\) - \(3.4\times\) reduced training computations on CIFAR-10 with all but 40, 250 and 4000 labels removed, compared to vanilla FixMatch, while attaining the same cited state-of-the-art error rate. Similar results are achieved for CIFAR-100, SVHN and STL-10.
* Fast FixMatch reduces training computations by \(2.6\times\) - \(3.3\times\) on both CIFAR10 and CIFAR100 in Federated SSL with strong non-iid labeled and non-labeled local data. Similarly, Fast FixMatch reduces training computations by \(2.3\times\) - \(2.8\times\) on both CIFAR10 and CIFAR100 in online/streaming SSL with streaming of unlabeled data.
## 2 Related Work
### Semi-supervised Learning
**Early Consistency Regularization.** Consistency regularization has been one of the main drivers for improvements in the SSL setting in recent years. Consistency regularization minimizes the difference between the outputs of augmentations of the same input, where defining the difference and choosing good augmentations have led to significant advances (Miyato et al., 2017; Berthelot et al., 2019; Sohn et al., 2020). The \(\pi\) model (Sajjadi et al., 2016; Laine and Aila, 2017) adds a mean squared loss which minimizes differences on the output layer. Virtual Adversarial Training (Miyato et al., 2017) perturbs the input with an adversarial perturbation by backpropagating the gradient to the input. The authors additionally add an entropy minimization (Grandvalet and Bengio, 2005) loss which encourages confident predictions for the top-1 class. Mean Teacher (Tarvainen and Valpola, 2017) minimizes the difference between the current model and a model which is the exponential moving average of model weights.
**Strong Augmentation.** In the last few years, SSL has improved hand in hand with improved data augmentation. These include Mixup (Zhang et al., 2017), which generates convex combinations of both inputs and labels; CutOut (Devries and Taylor, 2017), which randomly removes parts of images; CutMix (Yun et al., 2019), which randomly replaces parts of an image with parts of another image and combines the labels accordingly; AutoAugment (Cubuk et al., 2018), which uses a reinforcement learning approach to generate the best data augmentations; RandAugment (Cubuk et al., 2020), which significantly speeds up AutoAugment; and many more (Hendrycks et al., 2020; Chen et al., 2022; Lim et al., 2019).
MixMatch (Berthelot et al., 2019) utilizes Mixup, temperature sharpening and other changes. Interpolation Consistency Training (Verma et al., 2019) uses Mixup and an exponential moving average of model weights. UDA (Xie et al., 2019) demonstrates the effectiveness of RandAugment on SSL training.
**The case of FixMatch.** FixMatch (Sohn et al., 2020) is the main focus of this paper and is one of the simplest and most effective methods in the line of SSL algorithms. FixMatch simplifies previous work in SSL by unifying pseudo-labeling methods and strong augmentation methods. In particular, for each sample and for each iteration, FixMatch produces a pseudo-label for a weak augmentation of the unlabeled sample if it crosses a fixed threshold; it then utilizes the pseudo-label as the label for a strong augmentation of the same unlabeled sample. In doing so, the authors significantly simplified existing SSL methods and achieve state-of-the-art performance on a number of benchmarks, such as 94.93% on CIFAR10 with only 250 labeled samples and 88.61% accuracy with only 40.
**Other SSL techniques.** There are also many other areas of SSL [19, 20, 22, 23, 24, 25, 26, 27, 28, 29] and related areas of self-supervised learning [2, 22, 23, 24, 25].
### Curriculum Learning
Curriculum learning improves the performance of deep learning training by ordering training data using certain schemes into a "curriculum" [10, 17]. Typically, this is achieved by presenting easy samples first and hard samples later. Much progress has been made in this field on the optimal definition of "easy" and "hard" [1, 19, 22, 26, 27]. Below, we organize Curriculum Learning approaches based on whether the curriculum depends on the loss, label, or feature space, or is fixed or entirely learnable.
**Loss-based Curricula.** In [19], the authors use confidence of the teacher to order training data, where the teacher is either a pre-trained teacher network, or the student itself trained on data without a curriculum. [26] revisits Pseudo-Labeling in SSL by devising a train and re-train curriculum, each time taking the top \(x\)% of most confident pseudolabels from 0 to 100 in increments of 20.
**Label-based Curricula.** In [19], the authors propose a curriculum threshold for FixMatch, which increases data utilization by lowering the threshold, resulting in more pseudo-labels. To address training on imbalanced data, [22] proposes a curriculum which downsamples or upsamples, depending on majority or minority classes, and a parameter which balances between cross entropy loss and triplet loss.
**Feature Space-based Curricula.** CurriculumNet [22] produces a curriculum based on feature space density. The authors train a model, compute embeddings, then retrain the model from clean samples to noisy samples, where samples with few nearby embeddings are more noisy. Instead of using one-hot labels, [19] improves performance by defining a probability distribution over the classes based on class similarity given by inner product similarity of items of each class.
**Fixed-Curricula.** For Adaptive Curriculum Learning [23], the authors propose an Exponential Moving Average of the loss in addition to a bag of tricks. [10] applies Contrastive Learning to SSL in the domain of medical images by adding a weight to the contrastive loss which decreases over time. For noisy labels, [22] proposes to learn from clean data first, then noisy data with pseudo-labels from a time-ensemble of models and data augmentations. In [26], the authors add Gaussian noise which decreases over time.
**Learnable Curricula.** [26] adds a learnable parameter for each sample and class which scales the logits and uses backpropagation to update the learnable parameter. [10] also adds a hyperparameter for Curriculum Learning which can be optimized through backpropagation for Convolutional Neural Networks. [26] proposes a "SuperLoss" which automatically downweights samples with a large loss, i.e., "harder" samples.
## 3 The Idea of Curriculum Batch Size
**What is the issue with FixMatch?** Within each batch of FixMatch [22], there are labeled samples, which FixMatch treats regularly with the cross-entropy loss, and there are unlabeled samples. For the latter, FixMatch predicts the label based on the weak augmentation, and if it crosses a threshold, FixMatch uses this label during the strong augmentation phase. Since FixMatch uses a fixed batch size of 64 for labeled samples and 448 for unlabeled samples, this results in a maximum of \(64+448+448=960\) forward passes and \(64+448=512\) backwards passes per minibatch.
For cases such as CIFAR-10 with all but 250 labels, FixMatch spends most of the training procedure above 80% test accuracy, which means it spends close to the maximum number of forwards and backwards passes. For that setting, as shown in Table 2 later on, FixMatch requires \(2^{20}\) iterations (as in Sohn et al. [22]) to reach the cited error rate, which approximately equals \(2^{20}\cdot(0.5\cdot(64+448+448)+0.5\cdot(64+448))\approx 770\)M forwards and backwards passes. In turn, this implies \(\approx 770\)M\(/50000=15400\) training epochs, since there are 50K training samples in CIFAR-10. _The computational requirements are high compared to CIFAR-10 in the supervised learning scenario (He et al. [19])_.
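The back-of-the-envelope count above is easy to verify directly from the numbers quoted in the text:

```python
iters = 2**20                                        # FixMatch training iterations
passes = iters * (0.5 * (64 + 448 + 448) + 0.5 * (64 + 448))
print(f"{passes:.3g}")                               # ~7.72e8, i.e. ~770M passes
print(passes / 50_000)                               # ~15400 CIFAR-10 "epochs"
```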
**The Data Utilization Issue.** One of the main challenges of FixMatch lies in data utilization, where the number of samples contributing to the loss is limited by the choice of threshold, often fixed at 0.95. _Data utilization is defined as the percentage of unlabeled samples above the threshold on either the mini-batch level or across the whole training._
In Figure 1, we plot the maximum unlabeled data utilization (green curve), the current unlabeled data utilization of the batch (orange curve), and the unlabeled data predicted correctly (blue curve), on the batch level. To provide some perspective: (i) _an orange curve that follows closely with the green curve indicates high unlabeled data utilization_; (ii) _a blue curve that closely follows the orange curve indicates an accurate model_.
As a note here, our proposal -Curriculum Batch Size or CBS- increases the unlabeled data utilization ratio, given by the much closer green and orange curves. For vanilla FixMatch (leftmost plot), there is a large gap between the green and orange curves, particularly in the beginning of training but even late in training.
**Using Curriculum Pseudo Labeling.** The authors of [22] proposed Curriculum Pseudo Labeling (CPL), which applies a curriculum threshold. The intuition behind CPL is as follows: early in training, the model is often not confident enough to reach the pre-defined threshold; however, predictions may still be accurate enough to aid training. The authors report improved final accuracy on a range of Computer Vision benchmarks including CIFAR10/100, SVHN, STL10 and ImageNet, by applying the curriculum to FixMatch in comparison to vanilla FixMatch. However, there is still room for improvements in data utilization (see Table 3). In particular, further optimizations can be made to exploit the natural progression of training, where the model performs worse at the beginning of training and better at the end of training.
### Curriculum Batch Size
The Curriculum Batch Size (or CBS) is motivated both by the observation of low data utilization and model performance progression. In a nutshell, CBS starts with a small unlabeled batch size and progressively increases the batch size following a curriculum.
In particular, let the labeled samples be given by \(\{(x_{i},y_{i})\}_{i=1}^{L}\). The unlabeled samples are given by \(\{x_{i}\}_{i=1}^{U}\). Following ordinary training implementations, on iteration \(t\), we select the next \(l_{t}\) labeled samples and \(u_{t}\) unlabeled samples. Here, \(l_{t}\) is fixed as \(l_{t}:=l=64\). For FixMatch, \(u_{t}:=u=448\).
**Schedule proposal.** In CBS, we propose the following schedule for \(u_{t}=\text{B-EXP}(u,t,T)\). Here, B-EXP stands for Bounded Exponential formulation, as shown in Figure 2, which allows for a smooth increase in batch size:
\[\text{B-EXP}(u,t,T)=u\cdot\Bigg{(}1-\frac{1-\frac{t}{T}}{(1-\alpha)+\alpha\cdot \big{(}1-\frac{t}{T}\big{)}}\Bigg{)},\ \ \alpha=0.7.\]
Here, \(u\) is the original (or maximum) unlabeled batch size; \(\alpha\) is a fixed parameter; \(t\) is the current iteration; and \(T\) is the total iterations. We set the parameter \(\alpha=0.7\) to fix the shape of the unlabeled batch size curriculum. This curve was initially proposed in learning rate schedules [11, 12].
In addition, following the scaling of linear learning rates [10], we scale the \(\lambda\) coefficient of the unsupervised loss linearly with respect to the ratio of unlabeled batch size to labeled batch size. For example, if the current unlabeled batch size is 96 and the labeled batch size is 64, we use \(\lambda=96/64=1.5\). Thus, since the unlabeled batch size follows the curriculum, the \(\lambda\) coefficient also follows the curriculum.
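A direct implementation of the schedule and the accompanying \(\lambda\) scaling is given below; this is a minimal sketch with names of our choosing, plus a numerical check that the average unlabeled batch fraction under B-EXP with \(\alpha=0.7\) is roughly 30.9%, matching the value reported later in Table 4.

```python
def b_exp(u, t, T, alpha=0.7):
    """Bounded Exponential (B-EXP) unlabeled batch size at iteration t of T;
    round to an integer batch size in practice."""
    s = 1 - t / T
    return u * (1 - s / ((1 - alpha) + alpha * s))

def unsup_weight(u_t, l=64):
    """The lambda coefficient scales linearly with the unlabeled/labeled ratio."""
    return u_t / l

# Numerical check: with alpha = 0.7 the schedule uses, on average, ~30.9% of
# the maximum unlabeled batch size (cf. Table 4).
T = 100_000
avg = sum(b_exp(1.0, t, T) for t in range(T)) / T
print(f"{avg:.1%}")
```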
**Algorithm.** Fast FixMatch is a combination of Curriculum Batch Size, labeled strong augmentation, and Curriculum Pseudo Labeling, given in Algorithm 1. These three methods have a certain synergy that performs better than the sum of its parts, as explained later in this paper. We also perform an ablation study to further understand the individual contributions, with Curriculum Batch Size as the main contributor.
Figure 1: Example of data utilization of FixMatch during training measured on the batch level, averaged over the last 10 iterations. The green curve indicates maximum unlabeled data utilization. The orange curve indicates the current unlabeled data utilization of the batch. The blue curve indicates the amount of confident unlabeled samples which are predicted correctly. Vanilla FixMatch utilizes a fixed batch size, i.e., a flat green line. Alternatively, Curriculum Batch Size alters the unlabeled batch size during training, which results in an increasing green curve. Plotted above is CIFAR10 with all but 250 labels removed. Left: FixMatch [10]. Middle Left: Curriculum Pseudo Labeling [22]. Middle Right: Curriculum Batch Size. Right: Curriculum Pseudo Labeling + Curriculum Batch Size.
```
1: Input: Labeled mini-batch size \(l\). Unlabeled mini-batch size \(u\). Maximum threshold Th. Total training steps \(T\). Total classes \(C\). Labeled training data \(\{(x_{i},y_{i})\}_{i=1}^{L}\). Unlabeled training data \(\{x_{i}\}_{i=1}^{U}\).
2: \(\hat{u}_{i}=-1:i\in[U]\)  // Model predictions for CPL
3: for \(t=1,\ldots,T\) do
4:   for \(c=1,\ldots,C\) do
5:     \(T_{c}=\text{CPL}(c,\hat{u})\)  // Dynamic threshold according to CPL
6:   end for
7:   \(l_{t}=l\)  // Labeled batch size is fixed
8:   \(u_{t}=\text{B-EXP}(u,t,T)\)  // Curriculum Batch Size for the unlabeled batch size
9:   \(X_{l},X_{u}\leftarrow\) next \(l_{t}\) labeled samples and \(u_{t}\) unlabeled samples
10:  Apply FixMatch with strong labeled augmentation (\(X_{l}\), \(X_{u}\), Th)
11:  Update the predictions \(\hat{u}_{i}\) for the unlabeled samples in \(X_{u}\)
12: end for
```
**Algorithm 1** Fast FixMatch
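To make the listing concrete, a schematic PyTorch sketch of one Fast FixMatch step follows. It assumes a classifier `model` and uses FlexMatch's convex mapping \(x/(2-x)\) for the CPL thresholds; warm-up, EMA, and other training details are omitted, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def cpl_thresholds(counts, tau=0.95):
    """Per-class CPL thresholds; counts[c] tracks confident pseudo-labels of
    class c so far (FlexMatch-style convex mapping, no warm-up handling)."""
    beta = counts / counts.max().clamp(min=1.0)
    return tau * beta / (2.0 - beta)

def fast_fixmatch_step(model, x_lab, y_lab, x_u_weak, x_u_strong, counts, tau=0.95):
    """One iteration: labeled CE (strong labeled augmentation is applied
    upstream) plus CPL-masked unlabeled consistency loss. The caller draws
    the unlabeled batch with size u_t = B-EXP(u, t, T)."""
    loss_lab = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        probs = torch.softmax(model(x_u_weak), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= cpl_thresholds(counts, tau)[pseudo]).float()
        counts.index_add_(0, pseudo[mask.bool()], torch.ones(int(mask.sum())))
    ce = F.cross_entropy(model(x_u_strong), pseudo, reduction="none")
    lam = x_u_weak.shape[0] / x_lab.shape[0]   # lambda = u_t / l
    return loss_lab + lam * (ce * mask).mean()
```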
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline
Unlabeled & Curriculum & Labeled & Curriculum & \multicolumn{8}{c}{Epochs to X\% accuracy\({}_{\text{(speedup over FixMatch)}}\)} \\
Batch Size & Batch Size & Strong Aug & Pseudo Labeling & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% & 85\% & 90\% \\ \hline
448 & - & - & - & 14 & 24 & 39 & 61 & 98 & 187 & 324 & 678 \\
448 & ✓ & - & - & 5\({}_{(2.8)}\) & 8\({}_{(3.0)}\) & 15\({}_{(2.6)}\) & 27\({}_{(2.3)}\) & 48\({}_{(2.0)}\) & 105\({}_{(1.8)}\) & 233\({}_{(1.4)}\) & 640\({}_{(1.1)}\) \\
448 & - & ✓ & - & 10\({}_{(1.4)}\) & 17\({}_{(1.4)}\) & 27\({}_{(1.4)}\) & 39\({}_{(1.6)}\) & 69\({}_{(1.4)}\) & 150\({}_{(1.2)}\) & 263\({}_{(1.2)}\) & 678\({}_{(1.0)}\) \\
448 & - & - & ✓ & 12\({}_{(1.2)}\) & 18\({}_{(1.3)}\) & 32\({}_{(1.2)}\) & 57\({}_{(1.1)}\) & 90\({}_{(1.1)}\) & 166\({}_{(1.1)}\) & 290\({}_{(1.1)}\) & 577\({}_{(1.2)}\) \\ \hline
448 & ✓ & ✓ & - & 4\({}_{(3.5)}\) & 6\({}_{(4.0)}\) & 9\({}_{(4.3)}\) & 13\({}_{(4.7)}\) & 27\({}_{(3.6)}\) & 70\({}_{(2.7)}\) & 137\({}_{(2.4)}\) & 317\({}_{(2.1)}\) \\
448 & - & ✓ & ✓ & 12\({}_{(1.2)}\) & 18\({}_{(1.3)}\) & 25\({}_{(1.6)}\) & 36\({}_{(1.7)}\) & 63\({}_{(1.6)}\) & 136\({}_{(1.4)}\) & 248\({}_{(1.3)}\) & 553\({}_{(1.2)}\) \\
448 & ✓ & - & ✓ & 4\({}_{(3.5)}\) & 8\({}_{(3.0)}\) & 15\({}_{(2.6)}\) & 26\({}_{(2.3)}\) & 43\({}_{(2.3)}\) & 97\({}_{(1.9)}\) & 203\({}_{(1.6)}\) & 434\({}_{(1.6)}\) \\
448 & ✓ & ✓ & ✓ & 5\({}_{(2.8)}\) & 7\({}_{(3.4)}\) & 10\({}_{(3.9)}\) & 13\({}_{(4.7)}\) & 22\({}_{(4.5)}\) & 62\({}_{(3.0)}\) & 121\({}_{(2.7)}\) & 282\({}_{(2.4)}\) \\ \hline \hline
**Overall speedup** & & & & **2.8x** & **3.4x** & **3.9x** & **4.7x** & **4.5x** & **3.0x** & **2.7x** & **2.4x** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: CIFAR10 with all but 250 labels removed. The first row is vanilla FixMatch; the next three rows enable a single component (CBS, strong labeled augmentation, or CPL); the following three rows enable pairs of components; the last row is Fast FixMatch (all three). Epoch is defined as 50,000 forward and backwards passes. Each entry is the number of total pre-defined epochs to reach a particular accuracy, with the computational decrease multiplier compared to vanilla FixMatch in parentheses. Lower epochs is better.
\begin{table}
[Body of Table 2 lost to truncation in the source; only the column header survives: CIFAR-10, CIFAR-100, SVHN, STL-10.]
\end{table}
Table 2: Number of pre-defined epochs required to train to the cited error rate for each of the CIFAR-10, CIFAR-100, SVHN and STL-10 settings.
## 4 Results in Standard SSL Scenario
**Training details.** We reproduce the settings from FixMatch [20] and utilize the same hyperparameters. For Fast FixMatch, we use the exact same hyperparameters as the original FixMatch hyperparameters, and further improvements may be possible with further tuning. For Curriculum Pseudo Labeling, we use the convex formulation which is the formulation used for the mainline results in the paper [15]. For strong labeled augmentation, we use AutoAugment [3] for CIFAR-10, CIFAR-100 and SVHN, and RandAugment [3] for STL-10. We use WRN-28-2 for CIFAR-10 and SVHN, WRN-28-8 for CIFAR-100, and WRN-37-2 for STL-10.
In our experiments, we use the same notation with FixMatch [20], where \(\mu\) is defined as the hyperparameter that determines the relative sizes of the labeled and unlabeled samples within a batch. In particular, when, e.g., \(\mu=2\), this indicates that the batch includes twice as many unlabeled examples, as compared to the labeled examples within the batch.
**Ablation study.** Fast FixMatch uses three components: Curriculum Batch Size, labeled strong augmentation, and Curriculum Pseudo Labeling. We performed an ablation to understand this synergy; see Table 1.
On their own, labeled strong augmentation, Curriculum Pseudo Labeling and Curriculum Batch Size all reduce computation for larger error targets, but the improvements diminish for smaller error targets. Labeled strong augmentation + Curriculum Pseudo Labeling reduces computation for larger error targets, but again does not reduce computation for smaller error targets. Both Curriculum Batch Size + labeled strong augmentation and Curriculum Batch Size + Pseudo Labeling produce substantial computational reduction for both smaller and larger error targets. In particular, Curriculum Batch Size + labeled strong augmentation is the best two-method combination. Finally, combining all three methods improves over any one- or two-method combination; this is Fast FixMatch. Furthermore, _the improvement is more than naively multiplying the individual improvements, which leads to a synergy that justifies Fast FixMatch as a combination of all three_, with Curriculum Batch Size as the main contributor.
**Acceleration Results.** We reproduce the results of FixMatch [20] across the CIFAR-10, CIFAR-100, SVHN and STL-10 settings, given in Table 2. The table shows the number of pre-defined epochs required to train the model to the cited error rate. Across all different settings, Fast FixMatch achieves between \(2\times-3.5\times\) speedup in the
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Average unlabeled batch size as \% & Overall average batch size as \% & Error \\ \hline Fast FixMatch (\(\alpha=0.5\)) & 38.6\% & 40.0\% & 4.57\% \\ Fast FixMatch (\(\alpha=0.7\)) & 30.9\% & 36.8\% & **4.18\%** \\ Fast FixMatch (\(\alpha=0.9\)) & 17.3\% & 24.5\% & 4.58\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: CIFAR10 with all but 4000 labels removed. Comparing the average batch size reduction of different \(\alpha\) with labeled batch size of 64 and ratio of unlabeled/labeled data samples within batch \(\mu=7\). Each method run for 1357 epochs.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & CIFAR10 Data Utilization & SVHN Data Utilization \\ \hline FixMatch [20] & 63.6\% & 49.5\% \\ Curriculum Pseudo Labeling [15] & 75.0\% & 61.1\% \\ Curriculum Batch Size & 69.0\% & 59.2\% \\ Curriculum Batch Size + Pseudo Labeling & **78.7\%** & **72.1\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: CIFAR10 with all but 250 labels removed. SVHN with 250 labels and extra dataset as unlabeled. Data utilization of unlabeled data of different methods trained for 225 epochs measured across the whole training cycle. Higher data utilization is better.
Figure 2: B-EXP curve for unlabeled Curriculum Batch Size depending on \(\alpha\). Higher \(\alpha\) is more extreme.
number of computations, given by the definition in Section 3. Since Fast FixMatch uses an average overall batch size of about a third of FixMatch's (see also Table 4), this means that Fast FixMatch and FixMatch take roughly the same number of iterations, but the batch size for Fast FixMatch is significantly smaller. These gains indicate that Fast FixMatch can be efficiently used on smaller GPUs or on edge devices, where smaller batch sizes directly lead to wall-clock speedup.
Using a smaller batch size naively does not result in the improvements of Fast FixMatch over FixMatch (see Figure 3). A smaller batch size decreases compute initially for larger error targets, but the improvements disappear for smaller error targets.
**Data Utilization Results.** In Table 3, we show the data utilization calculated across the entire training cycle. Curriculum Batch Size significantly increases data utilization over FixMatch and directly synergizes with Curriculum Pseudo Labeling. Curriculum Batch Size + Pseudo Labeling achieves by far the highest data utilization, further justifying the inclusion of both in Fast FixMatch. At 78.7% for CIFAR-10, this means that 78.7% of unlabeled samples seen during training are above the threshold and contribute to the loss. Namely, Curriculum Batch Size + Pseudo Labeling improves 15.1% data utilization over FixMatch and 3.7% over either Curriculum Batch Size or Pseudo Labeling for CIFAR-10, and 22.6% data utilization over FixMatch and 11.0% over either Curriculum Batch Size or Pseudo Labeling for SVHN.
**Unlabeled Batch Size Curriculums.** Since we fix the labeled batch size, there are diminishing returns for more extreme unlabeled batch size curricula. In Table 4, we see that a larger \(\alpha\) results in a more extreme unlabeled batch size curriculum and that \(\alpha=0.7\) is a sweet spot. The curves for these \(\alpha\) are displayed in Figure 2.
## 5 Results in Federated Self Supervised Learning
**Background on Federated Self Supervised Learning**. Federated Learning (FL) McMahan et al. (2017); Li et al. (2018); Karimireddy et al. (2019) is a distributed learning protocol that has witnessed fast development over the past half-decade. FL deviates from the traditional distributed learning paradigms and allows the integration of edge devices, such as smartphones Stojkovic et al. (2022), drones Qu et al. (2021), and IoT devices Nguyen et al. (2021), in the learning procedure. However, in real-life federated learning scenarios, user data on local devices might be largely unlabeled (such as pictures with no captions). Therefore, in order to better utilize such unlabeled data, federated self-supervised learning has recently become one of the focuses of SSL and federated learning research Makhija et al. (2022); Zhuang et al. (2022).
**Federated Self Supervised Learning Formulation**. Let \(S\) be the total number of clients in a distributed FL scenario. Each client \(i\) has its own local labeled data \(\mathcal{D}_{i}^{l}\) and unlabeled data \(\mathcal{D}_{i}^{u}\). In order to be closer to real-life heterogeneous data across devices, we assume non-iid distributions for both the labeled data \(\mathcal{D}_{i}^{l}\) and the unlabeled data \(\mathcal{D}_{i}^{u}\). Unlabeled data is created such that the whole dataset satisfies \(\mathcal{D}=\cup_{i}\mathcal{D}_{i}^{u}\) and \(\mathcal{D}_{i}^{u}\cap\mathcal{D}_{j}^{u}=\emptyset,\forall i\neq j\). Labeled data \(\mathcal{D}_{i}^{l}\) is a randomly sampled subset of the unlabeled data \(\mathcal{D}_{i}^{u}\), which also implies \(\mathcal{D}_{i}^{l}\cap\mathcal{D}_{j}^{l}=\emptyset,\forall i\neq j\). The goal of FL is to find a global model \(\mathbf{W}\) that achieves good accuracy on all data \(\mathcal{D}\), by minimizing the following optimization problem:
\[\mathbf{W}^{*}=\underset{\mathbf{W}\in\mathcal{H}}{\operatorname{argmin}}\;\left\{\mathcal{L} (\mathbf{W}):=\frac{1}{S}\sum_{i=1}^{S}\left[\ell^{l}\left(\mathbf{W},\mathcal{D}_{i }^{l}\right)+\ell^{u}\left(\mathbf{W},\mathcal{D}_{i}^{u}\right)\right]\right\},\]
where \(\ell^{l}\left(\mathbf{W},\mathcal{D}_{i}^{l}\right)=\frac{1}{|\mathcal{D}_{i}^ {l}|}\sum_{\{\mathbf{x}_{j},y_{j}\}\in\mathcal{D}_{i}^{l}}\ell\left(\mathbf{W},\{\mathbf{x}_{j},y_{j}\}\right)\) while \(\ell^{u}\left(\mathbf{W},\mathcal{D}_{i}^{u}\right)=\frac{1}{|\mathcal{D}_{i}^ {u}|}\sum_{\{\mathbf{x}_{j},\hat{y}_{j}\}\in\mathcal{D}_{i}^{u}}\ell\left( \mathbf{W},\{\mathbf{x}_{j},\hat{y}_{j}\}\right)\). Here, for labeled data we use the standard supervised training loss with true labels \(y_{j}\). For unlabeled data, we follow a FixMatch-style algorithm to select unlabeled data \(\mathcal{D}_{i,t}^{u}\) and generate pseudo-labels \(\hat{y}_{j}\) accordingly (here we only consider FixMatch-style SSL algorithms). With a slight abuse of notation, \(\ell\left(\mathbf{W},\mathcal{D}_{i}\right)\) denotes the total _local_ loss function for user \(i\), associated with a local model \(\mathbf{W}_{i}\) (not indicated above), that gets aggregated with the models of other users.
Figure 3: CIFAR10 with all but 250 labels removed. Testing if naively reducing the unlabeled batch size can result in the same gains as Fast FixMatch. Top: Comparing FixMatch with \(\mu=7\) (unlabeled batch size \(=448\)) and \(\mu=2\) (unlabeled batch size \(=128\)). Bottom: Comparing Fast FixMatch with \(\mu=7\) (unlabeled batch size \(=448\)) and \(\mu=2\) (unlabeled batch size \(=128\)).
**Fast FixMatch in Federated SSL.** We extend Fast FixMatch to the federated learning scenario, where each client has non-iid labeled data and non-iid unlabeled data to better represent realistic user data usage. As stated above, each client can only use its own local unlabeled data to generate pseudo-labels using (Fast) FixMatch and use local labeled data for supervised learning. In our experiments, we simulate 100 clients and sample 4 clients at each round. In order to ensure strongly non-iid conditions, we group all clients into 4 groups with strong non-iid class distributions and, at each round, we select one client from each group. We count the total number of labeled samples across all clients. We use the final accuracy of Fast FixMatch as our target accuracy to compare the speedup. As shown in Table 5, in both CIFAR10 and CIFAR100, Fast FixMatch achieves between \(2.6\times\) and \(3.3\times\) computation speedup. Since Fast FixMatch reduces the batch size, it also reduces the memory cost on memory-restricted edge devices in actual federated learning scenarios.
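The grouped client sampling described above is simple to reproduce; a minimal sketch follows (group assignment is random here, whereas the experiments group clients by class skew):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_groups = 100, 4
groups = np.array_split(rng.permutation(n_clients), n_groups)

def sample_round():
    """One client from each strongly non-iid group, 4 clients per round."""
    return [int(rng.choice(g)) for g in groups]

print(sample_round())   # e.g. one client id per group
```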
## 6 Results in Online/Streaming Learning SSL
**Background in Online/Streaming Learning SSL** Many autonomous applications would benefit from real-time, dynamic adaptation of models to new data, which might be gathered online during training. Currently, online learning has become a popular topic in deep learning. For instance, there are continual learning Lopez-Paz and Ranzato (2017), lifelong learning Aljundi et al. (2017), incremental learning Rebuffi et al. (2017) and streaming learning Hayes et al. (2019). In real life, unlabeled data may be easily collected throughout the training process, which results in a stream of new unlabeled data. Thus, Online/Streaming Learning SSL has been widely studied to fully utilize the incoming unlabeled data.
**Fast FixMatch in Online/Streaming Learning SSL.** We extend Fast FixMatch to the online/streaming learning SSL scenario with a fixed small number of labeled data and a stream of unlabeled data. We restrict the initial set of unlabeled data to be significantly smaller (about 1/10 of the original dataset), while we add a new chunk of unlabeled data every 1/10 of the total epochs. We use the final accuracy of Fast FixMatch as our target accuracy. As shown in Table 6, in both CIFAR10 and CIFAR100, Fast FixMatch achieves between \(2.6\times\) and \(2.8\times\) computation speedup.
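The streaming schedule is a simple staircase over the unlabeled pool; a sketch under the stated assumptions (start with roughly 1/10 of the data, add one more chunk every tenth of training):

```python
def unlabeled_pool_size(epoch, total_epochs, dataset_size):
    """Number of unlabeled samples available at a given epoch."""
    chunks = 1 + int(10 * epoch / total_epochs)   # 1 chunk initially, +1 per tenth
    return min(dataset_size, chunks * dataset_size // 10)

print([unlabeled_pool_size(e, 100, 50_000) for e in (0, 10, 55, 99)])
# [5000, 10000, 30000, 50000]
```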
## 7 Conclusion
In this paper, we introduce Curriculum Batch Size, a curriculum approach to batch size scaling in the SSL setting. Curriculum Batch Size uses a small batch size initially, and monotonically increases it to the maximum unlabeled batch size. We use a Bounded Exponential (B-EXP) formulation to control the curriculum, and use \(\alpha=0.7\) as the default. We propose Fast FixMatch, a combination of Curriculum Batch Size, labeled strong augmentation, and Curriculum Pseudo Labeling. Across CIFAR-10, CIFAR-100, SVHN and STL-10 settings, we demonstrate between \(2\times\) and \(3.5\times\) computation improvement of Fast FixMatch over the FixMatch baseline. We perform an ablation study to understand the contribution of different components of Fast FixMatch and show that Curriculum Batch Size is a critical component, and that there exists synergy better than the sum of parts. We verify that data utilization is indeed increased with Curriculum Batch Size, and further in combination with Curriculum Pseudo Labeling. As shown in Table 5, in both CIFAR10 and CIFAR100, Fast FixMatch achieves between \(2.3\times\) and \(2.4\times\) computation speedup. Finally, we extend Fast FixMatch to the federated self-supervised learning scenario and the online/streaming learning self-supervised learning scenario without introducing any extra parameters or computation cost. Fast FixMatch achieves between \(2.6\times\) and \(3.3\times\) reduced training computations in federated SSL tasks and online/streaming learning SSL tasks, which further demonstrates the generalizability of Fast FixMatch to different scenarios and tasks.
|
2306.00083 | Bell sampling from quantum circuits | A central challenge in the verification of quantum computers is benchmarking
their performance as a whole and demonstrating their computational
capabilities. In this work, we find a universal model of quantum computation,
Bell sampling, that can be used for both of those tasks and thus provides an
ideal stepping stone towards fault-tolerance. In Bell sampling, we measure two
copies of a state prepared by a quantum circuit in the transversal Bell basis.
We show that the Bell samples are classically intractable to produce and at the
same time constitute what we call a circuit shadow: from the Bell samples we
can efficiently extract information about the quantum circuit preparing the
state, as well as diagnose circuit errors. In addition to known properties that
can be efficiently extracted from Bell samples, we give several new and
efficient protocols: an estimator of state fidelity, a test for the depth of
the circuit and an algorithm to estimate a lower bound to the number of T gates
in the circuit. With some additional measurements, our algorithm learns a full
description of states prepared by circuits with low T-count. | Dominik Hangleiter, Michael J. Gullans | 2023-05-31T18:01:58Z | http://arxiv.org/abs/2306.00083v5 | # Bell sampling from quantum circuits
###### Abstract
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a model of quantum computation, _Bell sampling_, that can be used for both of those tasks and thus provides an ideal stepping stone towards fault-tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a _circuit shadow_: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols, a test for the depth of the circuit and an algorithm to estimate a lower bound to the number of \(T\) gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low \(T\)-count.
_Introduction._ As technological progress on fault-tolerant quantum processors continues, a central challenge is to demonstrate their computational advantage and to benchmark their performance as a whole. Quantum random sampling experiments serve this double purpose [1, 2, 3, 4] and have arguably surpassed the threshold of quantum advantage [5, 6, 7, 8, 9, 10]. However, this approach currently suffers several drawbacks. Most importantly, it can only serve its central goals--benchmarking and certification of quantum advantage--in the classically simulable regime. This deficiency arises because evaluating the performance benchmark, the so-called _cross-entropy benchmark_, requires a classical simulation of the ideal quantum computation. What is more, the cross-entropy benchmark suffers from various problems related to the specific nature of the physical noise in the quantum processor [11, 12, 9], and yields limited information about the underlying quantum state. More generally, in near-term quantum computing without error correction, we lack many tools for validating a given quantum computation just using its output samples.
In this work, we consider _Bell sampling_, a model of quantum computation in which two identical copies of a state prepared by a quantum circuit are measured in the transversal Bell basis, see Fig. 1. We show that, in this model, the outcomes are (i) simultaneously classically intractable to produce on average over universal random circuits under a standard assumption, (ii) yield diagnostic information about the underlying quantum state, and (iii) allow for detecting certain errors in the state preparation. Bell sampling from random universal quantum circuits thus overcomes the central practical problems of quantum random sampling as a means to benchmark and demonstrate the computational advantage of near-term quantum processors. Effectively, we may think of the Bell samples as classical _circuit shadows_, in analogy to the notion of state shadows coined by Aaronson [13] and Huang _et al._[14], since we can efficiently extract specific information about the generating circuit or a family of generating circuits from them.
Technically, we make the following contributions. We provide complexity-theoretic evidence for the classical intractability of Bell sampling from random universal quantum circuits, following an established hardness argument [15, 4, 16]. We introduce a new test to verify the depth of quantum circuits. Here, we make use of the fact that from the Bell basis samples one can compute correlation properties of the two copies and in particular a swap test on any subsystem. We then observe that we can compare the measured average subsystem entropy to the Page curve [17] of the random circuit family in order to derive a lower bound on the depth of the circuit. We further show that the Bell samples can be used to compute an efficient measure of magic, building on a result by Montanaro [18], who has shown that stabilizer states can be learned from Bell samples. We extend this idea to measure a lower bound on the \(T\)-count of a circuit preparing the quantum state from the dimension of a maximal set of linearly independent Pauli operators of which the state is a \(+1\)-eigenstate. We then give a protocol to efficiently learn a full description of a quantum state prepared by a circuit with low \(T\)-count. Finally, we give a protocol for detecting errors in the state preparation based only on the properties of the Bell samples. In this protocol, we exploit the maximal or typical entanglement generated by pure states for large subsystems.
Of course, the idea to sample in the Bell basis to learn about properties of quantum states is as old as the theory of quantum information itself and has found many applications in quantum computing, including learning stabilizer states [18], testing stabilizerness [19], measuring magic [20, 21], and quantum machine learning [22]. The novelty of our approach is to view Bell sampling as a computational model. We then ask the question: What can we learn from those Bell samples about the circuit preparing the underlying quantum state?
_Bell sampling._ We begin by defining the Bell sampling protocol and noting some simple properties that will be useful in the remainder of this work. Consider a quantum circuit \(C\) acting on \(n\) qubits, and define the Bell basis of two qubits as
\[\left|\sigma_{r}\right\rangle=(\sigma_{r}\otimes 1)\left|\Phi^{+}\right\rangle, \text{ where }\left|\Phi^{+}\right\rangle=(\left|00\right\rangle+\left|11 \right\rangle)/\sqrt{2}, \tag{1}\]
and for \(r\in\{0,1\}^{2}\) we identify
\[\sigma_{00}=1,\quad\sigma_{01}=X,\quad\sigma_{10}=Z,\quad\sigma_{11}=\sigma_ {10}\sigma_{01}=\mathrm{i}Y. \tag{2}\]
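For readers who want to check the conventions in Eqs. (1)-(2) numerically, a short numpy sketch (purely illustrative, not from the paper) builds the four \(\left|\sigma_{r}\right\rangle\) and verifies that they form an orthonormal basis of two qubits:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
PAULI = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}   # sigma_11 = iY

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)             # (|00> + |11>)/sqrt(2)
bell = {r: np.kron(PAULI[r], I) @ phi_plus for r in PAULI} # |sigma_r> = (sigma_r x 1)|Phi+>

G = np.array([[bell[r].conj() @ bell[s] for s in bell] for r in bell])
print(np.allclose(G, np.eye(4)))                           # True: orthonormal basis
```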
The Bell sampling protocol proceeds as follows, see Fig. 1.
1. Prepare \(\left|\mathcal{C}\right\rangle\coloneqq\left|C\right\rangle\otimes\left|C\right\rangle\), where \(\left|C\right\rangle\coloneqq C\left|0^{n}\right\rangle\).
2. Measure all qubit pairs \((i,i+n)\) for \(i\in[n]\coloneqq\{1,2,\ldots,n\}\) in the Bell basis, yielding an outcome \(r\in\{0,1\}^{2n}\).
It is easy to see that the distribution of the outcomes \(r\) can be written as
\[P_{C}(r)=\frac{1}{2^{n}}\left|\left\langle C\right|\sigma_{r}\left|\overline{C }\right\rangle\right|^{2} \tag{3}\]
where \(\sigma_{r}=\sigma_{r_{1}r_{n+1}}\otimes\sigma_{r_{2}r_{n+2}}\otimes\cdots \otimes\sigma_{r_{n}r_{2n}}\) is the \(n\)-qubit Pauli matrix corresponding to the outcome \(r=(r_{1},r_{2},\ldots,r_{2n})\), and \(\overline{C}\) denotes complex conjugation of \(C\). In order to perform the measurement in the Bell basis, we need to apply a depth-1 quantum circuit consisting of \(n\) transversal \(\mathtt{cnot}\)-gates followed by Hadamard gates on the control qubits and a measurement of all qubits in the computational basis.
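Continuing the small numpy illustration from above (reusing `PAULI` and `np`), one can evaluate Eq. (3) exhaustively for a Haar-random three-qubit \(C\) and check both normalization and the support property used below, namely that outcomes with odd \(Y\)-parity never occur; everything here is an illustrative sketch, not code from the paper:

```python
from functools import reduce

def bell_distribution(C):
    """P_C(r) = |<C| sigma_r |conj(C)>|^2 / 2^n over all r, via Eq. (3)."""
    n = int(np.log2(C.shape[0]))
    psi = C[:, 0]                                    # C|0^n>
    probs = {}
    for r in np.ndindex(*(2,) * (2 * n)):
        pairs = zip(r[:n], r[n:])                    # (r_i, r_{n+i}) per qubit pair
        sigma_r = reduce(np.kron, [PAULI[p] for p in pairs])
        probs[r] = abs(psi.conj() @ sigma_r @ psi.conj()) ** 2 / 2**n
    return probs

n = 3
A = np.random.randn(2**n, 2**n) + 1j * np.random.randn(2**n, 2**n)
C, _ = np.linalg.qr(A)                               # Haar-random unitary
P = bell_distribution(C)
print(abs(sum(P.values()) - 1) < 1e-12)              # True: normalized
odd_Y = sum(p for r, p in P.items()
            if sum(r[i] & r[i + n] for i in range(n)) % 2 == 1)
print(odd_Y < 1e-24)                                 # True: even Y-parity support
```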
In the following, we will often consider random quantum circuits \(C\), e.g., with gates drawn randomly from a two-qubit gate set in a brickwork layout, see Fig. 1. We will see that Bell sampling is classically intractable under certain complexity-theoretic assumptions. Crucially, we also show that we can use _the very same samples_ to efficiently infer properties of and detect errors in the state preparation. The Bell samples thus simultaneously act as a challenge to classical algorithms and a classical shadow of the quantum state preparation that may be used to characterize a quantum device.
_Computational complexity._ We now show that approximately sampling from the Bell sampling distribution \(P_{C}\) is classically intractable on average over random choices of the circuit \(C\), assuming certain complexity-theoretic conjectures are satisfied. When doing so, we are following a by now standard proof technique [15; 16; 23; 24; 25; 26] based on an algorithm due to Stockmeyer [27]. The key idea of this proof technique is that if the outcome probabilities \(P_{C}(r)\) are \(\mathtt{GapP}\)-hard to approximate up to a constant relative error [see \(4\), Sec. III] and the ensemble satisfies the so-called _hiding property_, then sampling from that distribution up to a constant total-variation distance (TVD) is classically intractable unless the so-called _polynomial hierarchy_ collapses to its third level; see Ref. [4] for details. While we cannot prove approximate average-case hardness for any circuit family at this point, we can give evidence for it. Such evidence comprises (i) worst-case hardness of approximating the outcome probabilities, (ii) near-exact average-case hardness of computing the outcome probabilities, and (iii) anticoncentration of the outcome probabilities. We show all of those properties for the Bell sampling distribution for universal random quantum circuits \(C\) with \(\Omega(n^{2})\) gates. Here, we briefly sketch the ideas and defer more detailed proofs to the Supplementary Material [28].
We begin by showing the hiding property. The hiding property asserts that the event that the all-\(0\) outcome is observed for a state prepared by a random circuit \(C\) is equally probable as the event that some outcome \(r\) is observed for a circuit \(C_{r}\) that was chosen with equal probability as \(C\). To see that this property holds for Bell sampling from universal random circuits, we observe that
\[P_{C}(r)=|\left\langle\Phi^{+}\right|^{\otimes n}\sqrt{\sigma_{r}}C\otimes \sqrt{\sigma_{r}}^{T}C\left|0^{2n}\right\rangle|^{2}. \tag{4}\]
Now, we note that \(P_{C}\) is supported on \(\operatorname{supp}(P_{C})\subset\{r:\pi_{Y}(r)=0\}\), where we have defined the \(Y\)-parity \(\pi_{Y}(r)\coloneqq|\{i\colon(r_{i},r_{n+i})=11\}|\mod 2\). For such outcomes \(r\), \(\sqrt{\sigma_{r}}=\sqrt{\sigma_{r}}^{T}\) and we can define \(C_{r}=\sqrt{\sigma_{r}}C\) for all \(r\in\operatorname{supp}(P_{C})\). This shows the hiding property \(P_{C}(r)=P_{C_{r}}(0^{2n})\).
To show worst-case hardness of approximating outcome probabilities, we reduce computing the gap of an arbitrary efficiently computable Boolean function to computing outcome probabilities \(P_{C}(0^{2n})\). Let \(g\) be such an efficiently computable function and \(D\) a reversible classical circuit computing that function as \(D\left|x\right\rangle\left|b\right\rangle=\left|x\right\rangle\left|g(x)\oplus b\right\rangle\). Then define \(f_{g}:\{0,1\}^{n+1}\rightarrow\{0,1\}\) as
\[f_{g}(y)=\begin{cases}1&\text{if }y=(x,g(x))\\ 0&\text{if }y=(x,\neg g(x)).\end{cases} \tag{5}\]
The \((n+2)\)-qubit quantum circuit \(C=(\mathbbm{1}_{n}\otimes\mathtt{cnot}(X\otimes\mathbbm{1}))(D^{\dagger} \otimes\mathbbm{1})\) computes \(f_{g}\) as \(C\left|y\right\rangle\left|0\right\rangle=\left|y\right\rangle\left|f_{g}(y)\right\rangle\). Defining \(C^{\prime}=\sqrt{Z_{n+1}Z_{n+2}}C\), the outcome amplitude \(\left\langle\sigma_{0^{2(n+2)}}C^{\prime}\right\rangle^{\otimes 2}=\operatorname{gap}(g)/2^{n -1}\) can be written in terms of the gap of \(g\), given by \(\operatorname{gap}(g)=|\{x:g(x)=1\}|-|\{x:g(x)=0\}|\). By Proposition 8 of Bremner _et al._[16], approximating \(|\left\langle\sigma_{0^{2(n+2)}}C^{\prime}\right\rangle^{\otimes 2}|^{2}\) up to any relative error \(<1/2\) or additive error \(1/2^{2(n+2)}\) is \(\mathtt{GapP}\)-hard.
We also show that the anticoncentration property holds for the Bell distribution for circuit ensembles which form a unitary \(4\)-design. Random quantum circuits containing \(\Omega(n^{2})\) random two-qubit gates drawn from a universal gate set (containing inverses) are known to form a unitary \(4\)-design due to a seminal result by Brandao _et al._[29]. We further conjecture that, similar to the case of computational-basis measurements [30; 31], anticoncentration will already set in for circuits containing \(\Omega(n\log(n))\) random gates.
Figure 1: **The Bell sampling protocol.** In the Bell sampling protocol we prepare the quantum state \(C\left|0^{n}\right\rangle\otimes C\left|0^{n}\right\rangle\) using a quantum circuit \(C\), and measure all qubits transversally in the Bell basis across the bipartition of the system (dotted line).
Finally, it is easy to see that any of the recently developed polynomial interpolation methods [24; 25; 26; 32] can be applied to show near-exact average-case hardness of approximating the outcome probabilities \(P_{C}(r)\). In the Supplementary Material [28] we detail this via the approach of Krovi [26], but emphasise that the result does not depend on the specific interpolation path. This provides the final piece of evidence towards the approximate average-case hardness conjecture for the Bell sampling distribution.
Altogether, the argument above puts the complexity-theoretic evidence for the hardness of Bell sampling from random quantum circuits in linear depth on a par with that for universal circuit sampling in the computational basis [24; 25; 26; 33; 34].
_Bell samples as classical circuit shadows._ Samples in the computational basis--while difficult to produce for random quantum circuits--yield very little information about the underlying quantum state. In particular, the problem of verification is essentially unsolved since the currently used methods require exponential computing time. In contrast, from the Bell samples, we can _efficiently_ infer many properties of the quantum state preparation \(\ket{C}\otimes\ket{C}\). Known examples include the overlap \(\operatorname{tr}[\rho\sigma]\) of a state preparation \(\rho\otimes\sigma\) via a swap test, the magic of the state \(\ket{C}\)[20], and the outcome of measuring any Pauli operator \(P\otimes P\)[35]. Here, we add two new properties to this family. We give efficient protocols for learning the depth of random low-depth quantum circuits with high probability, and for learning a quantum state prepared by a circuit with low \(T\)-count.
Let us begin by recapping how a swap test can be performed using the Bell samples, and observing some properties that are useful in the context of benchmarking random quantum circuits. To this end, write the two-qubit swap operator
\[\mathbb{S}=\underbrace{\left|\sigma_{00}\right\rangle\left\langle\sigma_{00}\right|+\left|\sigma_{01}\right\rangle\left\langle\sigma_{01}\right|+\left|\sigma_{10}\right\rangle\left\langle\sigma_{10}\right|}_{P_{\vee}}-\underbrace{\left|\sigma_{11}\right\rangle\left\langle\sigma_{11}\right|}_{P_{\wedge}} \tag{6}\]
as the projector onto the symmetric subspace \(P_{\vee}\) minus the projector onto the antisymmetric subspace \(P_{\wedge}\). The overlap \(\operatorname{tr}[\rho\sigma]=\operatorname{tr}[(\rho\otimes\sigma)\mathbb{S}]\) can then be directly estimated up to error \(\epsilon\) from \(M\in O(1/\epsilon^{2})\) Bell samples as
\[\frac{1}{M}\left(\left|\{r:\pi_{Y}(r)=0\}\right|-\left|\{r:\pi_{Y}(r)=1\} \right|\right). \tag{7}\]
In particular, for quantum state preparations \(\rho\otimes\rho\), the overlap quantifies the purity \(\operatorname{tr}[\rho^{2}]\) of \(\rho\). We also observe that if \(\rho\) is accurately modeled by the white-noise approximation \(\rho_{C}(\eta)=(1-\eta)\ket{C}\bra{C}+\eta\,\mathbbm{1}/2^{n}\) that is used in arguments for the validity of cross-entropy benchmarking [5; 36], then the purity can be used to obtain an estimate of \(\eta\) and thereby the fidelity of the state preparation \(\bra{C}\rho_{C}(\eta)\ket{C}=1-\eta(1-1/2^{n})\). We also find that if the noise can be modelled as local single-qubit depolarizing noise with strength \(\epsilon\), giving rise to a state preparation \(\rho_{C}(\epsilon)\), the average purity at noise rate \(\epsilon\) exactly corresponds to the average fidelity with depolarizing noise strength \(\epsilon^{\prime}=2\epsilon-\epsilon^{2}\) as \(\mathbb{E}_{C}\operatorname{tr}[\rho_{C}(\epsilon)^{2}]=\mathbb{E}_{C}\bra{C }\rho_{C}(\epsilon^{\prime})\ket{C}\)[12]. In the following we will assume that the purity test has succeeded, and resulted in a value close to unity.
_Depth test._ We now describe a Bell sampling protocol to measure the depth of (random) quantum circuits \(C\) with high probability that are drawn from a family with a fixed architecture and number of gates. The basic idea underlying the depth test is to use swap tests on subsystems of different sizes in order to obtain estimates of subsystem purities. For a subsystem \(A\) of \([n]\), the subsystem purity is given by
\[P_{A}(\rho) =\operatorname{tr}[\rho_{A}^{2}]\] \[\approx\frac{\left|\{r:\pi_{Y}(r_{A})=0\}\right|-\left|\{r:\pi_ {Y}(r_{A})=1\}\right|}{M}, \tag{8}\]
where \(\rho_{A}=\operatorname{tr}_{A^{c}}[\rho]\) is the reduced density matrix on subsystem \(A\subset[n]\) and \(r_{A}=(r_{i},r_{n+i})_{i\in A}\) the outcome string reduced to subsystem \(A\).
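The estimators in Eqs. (7) and (8) reduce to counting \(Y\)-parities of the Bell samples. A minimal sketch, assuming the samples are stored as length-\(2n\) bit vectors (helper names are our own):

```python
import numpy as np

def y_parity(r, n, subset=None):
    """pi_Y(r_A): parity of #{i in A : (r_i, r_{n+i}) = 11}; A = [n] if subset is None."""
    idx = range(n) if subset is None else subset
    return sum(int(r[i] == 1 and r[n + i] == 1) for i in idx) % 2

def purity_estimate(samples, n, subset=None):
    """Empirical estimator of tr[rho_A^2] from M Bell samples, Eqs. (7)-(8):
    (#{pi_Y = 0} - #{pi_Y = 1}) / M, written as the mean of +/-1 signs."""
    signs = [1 - 2 * y_parity(r, n, subset) for r in samples]
    return float(np.mean(signs))
```

For instance, `purity_estimate(samples, n)` targets the full overlap of Eq. (7), while `purity_estimate(samples, n, subset=range(n // 2))` targets the half-cut subsystem purity used in the depth test.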
Our test is based on the observation that the amount of entanglement generated by local random circuits on half-cuts reaches a depth-dependent maximal value until it saturates at a circuit depth that depends on the dimensionality of the circuit architecture, see Fig. 2(a) for an illustration. In many cases, random quantum circuits almost saturate the maximal possible entanglement up to the so-called _Page correction_[17]. Indeed, for a given circuit family, we exploit average--and often typical [37; 38]--entanglement properties of random quantum circuits, known as the _Page curve_[17]. The Page curve represents the average entanglement over the choice of quantum states as a function of the subsystem size \(k\in[n]\). Famously, Page [17] conjectured the shape of the average subsystem entanglement of Haar-random quantum states, a conjecture that was soon after proven [39; 40; 41]. Since then, Page curves have been computed for many different random circuit ensembles including interacting quantum systems [42], and Gaussian fermionic [43] and bosonic states [44].
In order to measure the depth of a circuit family we choose the point of the Page curve at which the distinguishability between different depths is maximal. This is typically the case at half-cuts, because this is where the Renyi-2 entanglement
Figure 2: **Depth-dependent Page curves.** (a) The maximal subsystem entropy depends on the circuit architecture and depth (shades of blue) until the half-cut entanglement reaches its maximal value given by \(n/2\). We measure the subsystem entropy at half-cuts to obtain the maximal sensitivity to different circuit depths. (b) We detect errors in the Bell samples by checking near maximal subsystem entropies of all subsystems of size \(n-k\) for constant \(k\) and declare an error if the subsystem entropy exceeds the maximal achievable value.
entropy \(S_{A}(\rho)=-\log P_{A}(\rho)\) of deep quantum circuits approaches its maximal value of \(n/2\). To measure the depth \(d\) of circuits in a family \(\mathcal{C}_{d}\), we thus compute an empirical estimate of \(S_{A}(d)=\mathbb{E}_{C\sim\mathcal{C}_{d}}\big{[}S_{A}(\ket{C}\bra{C})\big{]}\) using the Bell samples. The estimation error scales as \(1/P_{A}(\rho)=2^{S_{A}(\rho)}\). Since \(S_{A}(d)\sim d|\partial A|\), where \(\partial A\) is the boundary of \(A\), we therefore obtain an \(\epsilon\)-estimate of \(S_{A}(d)\) from \(2^{O(d|\partial A|)}/\epsilon^{2}\) samples.
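Concretely, the empirical quantity entering the depth test is obtained by converting the purity estimate into a Rényi-2 entropy. A self-contained sketch (ours), clipping non-positive purity estimates caused by sampling noise:

```python
import numpy as np

def renyi2_entropy_estimate(samples, n, subset):
    """Empirical S_A = -log2 tr[rho_A^2] (base 2, so the half-cut maximum is n/2),
    via the Y-parity sign estimator of Eq. (8); samples are length-2n bit vectors."""
    signs = [1 - 2 * (sum(int(r[i] == 1 and r[n + i] == 1) for i in subset) % 2)
             for r in samples]
    p = float(np.mean(signs))
    return -np.log2(max(p, 2.0 ** (-n)))   # clip non-positive estimates at finite M
```

Averaging this estimate over circuits drawn from \(\mathcal{C}_{d}\) gives the empirical \(S_{A}(d)\) against which an unknown depth is matched.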
_Clifford+\(T\) learning algorithm._ Another simple primitive that can be exploited in property tests of quantum states using the Bell samples is the fact that for stabilizer states \(\ket{S}\), the Bell distribution is supported on a coset of the stabilizer group of \(\ket{S}\)[18]. Leveraging this property allows for efficiently learning stabilizer states [18], testing stabilizers [19], learning circuits with a single layer of \(T\)-gates [45] and estimating measures of magic [20; 21]. Here, we describe a simple, new protocol that allows us to efficiently learn states prepared by quantum circuits with a logarithmic number of \(T\)-gates, and efficiently estimate a certain measure of magic from the Bell samples.
Before we describe the protocol, let us recap some simple properties of the Bell samples from the stabilizer state \(\ket{S}\bra{S}=2^{-n}\sum_{\sigma\in\mathcal{S}}\sigma\) with \(n\)-dimensional stabilizer group \(\mathcal{S}\subset\mathsf{P}_{\mathsf{n}}\), i.e., a commuting subgroup of the \(n\)-qubit Pauli group \(\mathsf{P}_{\mathsf{n}}\). For stabilizer states \(\ket{S}\), the complex conjugation \(\ket{\overline{S}}=\sigma_{k}\ket{S}\) is described by a Pauli operator \(\sigma_{k}\) that depends on \(\ket{S}\)[18]. Let us denote by roman letters the binary symplectic subspace \(S\subseteq\mathbb{F}_{2}^{2n}\) corresponding to a subgroup \(\mathcal{S}\) of \(\mathsf{P}_{\mathsf{n}}\), which includes all but the phase information. From Eq. (3) it immediately follows that the output distribution of Bell sampling from \(\ket{S}\otimes\ket{S}\) is supported on \(S\oplus k\coloneqq\{s\oplus k:s\in S\}\). We can therefore learn \(S\) from differences of the Bell samples \(b_{i}\oplus b_{j}\in S\), and the missing phases of the stabilizers from a measurement of the corresponding stabilizer operators [18].
In order to learn a quantum state \(\ket{\psi}\) prepared by a circuit with \(t\)\(T\)-gates, we observe that we can write
\[\ket{\psi}\bra{\overline{\psi}}=\sum_{l\in L}\lambda_{l}\sigma_{l}\Pi_{ \mathcal{C}}\sigma_{k} \tag{9}\]
as a linear combination over cosets of a stabilizer code with stabilizer \(\mathcal{C}\subset\mathsf{P}_{\mathsf{n}}\), shifted by Pauli operators from a 'logical subgroup' \(\mathcal{L}\subset\mathsf{P}_{\mathsf{n}}\) and some \(\sigma_{k}\in\mathsf{P}_{\mathsf{n}}\) related to the complex conjugate. We denote by \(\Pi_{\mathcal{C}}=\sum_{\sigma\in\mathcal{C}}\sigma/2^{\dim(\mathcal{C})}\) the projector onto the code space of \(\mathcal{C}\), and note that when writing \(\sigma_{l}\) for \(l\in L\) we fix its phase to be \(+1\), absorbing all phases into the coefficients \(\lambda_{l}\). The dimensions of \(C\) and \(L\) are linked to the number of \(T\)-gates preparing \(\ket{\psi}\) as \(\dim(C)\geq n-t\) and \(\dim(L)\leq 2t\). \(n-\dim(C)\) thus gives a lower bound to the stabilizer rank of \(\ket{\psi}\). It is nonincreasing under Clifford transformations and zero for stabilizer states and is therefore a natural measure of magic.
From the decomposition (9) it immediately follows that the Bell samples from \(\ket{\psi}\otimes\ket{\psi}\) are elements of \(\mathrm{span}(L,C)\oplus k=\{b\in\mathbb{F}_{2}^{2n}:b=l\oplus s\oplus k,l\in L,s\in S\}\). Our learning algorithm proceeds in two steps. In the first step, we 'compress' the non-Clifford part of the circuit, similarly to Refs. [46; 47]. To this end, given \(M\) Bell samples \(b_{0},\ldots,b_{M}\) we form the Bell differences \(b_{i}^{(j)}=b_{j}\oplus b_{i}\) and find an orthogonal basis of the subspace \(G^{\prime}=\mathrm{span}(\{b_{i}^{(j)}\}_{i,j})\) generated by those samples. We then find the maximal commuting subspace \(C^{\prime}\) of \(G^{\prime}\). With high probability that subspace satisfies \(C^{\prime}\supset C\). Let \(s_{1},\ldots,s_{\dim(C^{\prime})}\) be a set of generators of \(C^{\prime}\). We now find the Clifford transformation \(U_{C^{\prime}}\) which maps the Pauli matrices \(\sigma_{s_{i}}\) (the generators of \(C^{\prime}\) up to a \(\pm 1\) phase) to \(U_{C^{\prime}}\sigma_{s_{i}}U_{C^{\prime}}^{\dagger}=\pm Z_{i}\). Since \(C\subset C^{\prime}\), the generators of \(C\) are mapped to computational-basis states of some of the first \(n-\dim(C^{\prime})\) qubits. In the second step, we characterize the resulting state. To this end, we first identify those \(\dim(C)\) qubits by measuring each of the first \(\dim(C^{\prime})\) qubits in the computational basis. We accept a qubit if all outcomes are equal, obtaining a bitstring \(x\in\{0,1\}^{k}\) for \(\dim(C)\leq k\leq\dim(C^{\prime})\). The remaining \(n-k\) qubits are in an arbitrary state that captures the non-Clifford part of the circuit. We perform pure state tomography on those qubits, obtaining an estimate \(\ket{\hat{\varphi}}\). The output of the algorithm is then given by a classical description of \(\ket{\hat{\psi}}=U_{C^{\prime}}\ket{x}\otimes\ket{\hat{\varphi}}\). Using Clifford+\(T\) simulators [e.g. 48; 49] we can now produce samples from and compute outcome probabilities of \(\ket{\hat{\psi}}\).
In the Supplementary Material [28], we show that the protocol succeeds with high probability in learning an \(\epsilon\) approximation to the quantum state \(\ket{\psi}\) in fidelity using \(M\in O(n\log n/\epsilon)\) Bell samples and \(O(n\log n/\epsilon)+O(2^{t}/\epsilon^{2})\) measurements to perform tomography of \(\ket{x}\otimes\ket{\hat{\varphi}}\). It is easy to see that the runtime of the algorithm is polynomial in \(n\) and \(2^{t}\), since finding \(C^{\prime}\) and \(U_{C^{\prime}}\) can be achieved via the stabilizer formalism of Aaronson and Gottesman [50], and we perform quantum state tomography on at most \(t\) qubits in the last step, using for instance the scheme of Ref. [51].
From only \(O(n\log n)\) Bell samples we can directly estimate the lower bound \(n-\dim(C^{\prime})\leq n-\dim(C)\). To the best of our knowledge, this is the most efficient way of measuring the magic of a quantum state to date.
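The linear-algebraic core of this estimate is Gaussian elimination over \(\mathrm{GF}(2)\) applied to the Bell differences. A sketch of that step only (our code; extracting the maximal commuting subspace \(C^{\prime}\) additionally requires the binary symplectic form and is omitted here):

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of binary vectors, by Gaussian elimination."""
    a = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(a.shape[1]):
        pivots = np.nonzero(a[rank:, col])[0]
        if pivots.size == 0:
            continue
        piv = rank + pivots[0]
        a[[rank, piv]] = a[[piv, rank]]      # move a pivot row into place
        mask = a[:, col].copy()
        mask[rank] = 0
        a[mask == 1] ^= a[rank]              # eliminate the column elsewhere
        rank += 1
        if rank == a.shape[0]:
            break
    return rank

def bell_difference_dim(samples):
    """dim(G'): since b_i XOR b_j = (b_i XOR b_0) XOR (b_j XOR b_0), the span of
    all pairwise Bell differences equals the span of the differences to b_0."""
    if len(samples) < 2:
        return 0
    b = np.array(samples, dtype=np.uint8)
    return gf2_rank(b[1:] ^ b[0])
```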
To summarize, we have given efficient ways to extract properties of the circuit \(C\)--its depth and an efficient circuit description for circuits with low \(T\)-count--using only a small number of Bell samples from \(\ket{\mathcal{C}}\). Further properties of \(\ket{C}\) that can be efficiently extracted from the Bell samples include \(\bra{C}\sigma\ket{C}^{2}\) for any Pauli operator \(\sigma\in\mathsf{P}_{\mathsf{n}}\) and different measures of magic [20]. The Bell samples thus serve as an efficient classical shadow of \(C\).
Error DetectionIn the last part of this paper, we discuss another appealing feature of Bell samples: we can perform error detection. The idea that redundantly encoding quantum information in many copies of a quantum state allows error detection goes back to the early days of quantum computing. Already in 1996, Barenco _et al._[52] have shown that errors can be reduced by symmetrizing many copies of a noisy quantum state. In our two-copy setting, some simple error detection properties follow immediately from the tests in the previous section.
First, we observe that an outcome in the antisymmetric subspace, i.e., an outcome \(r\) with \(\pi_{Y}(r)=1\), is certainly due to an error. We can thus reduce the error in the sampled distribution by discarding such outcomes. Second, we can consider subsystem purities for large subsystems of size \(n-k\) for constant \(k\). While mixed states will have a high subsystem entropy equal to the subsystem size \(n-k\), pure states will have entropy less than \(k\), see Fig. 2(b). This implies that at most a \((1-2^{-k})/2\) fraction of the samples fall into the antisymmetric subspace. Given a sample \(r\), we now compute \(\pi_{Y}(r_{A})\) for all subsystems \(A\) of size \(n-k\). If \(\pi_{Y}(r_{A})=1\) for more than a \((1-2^{-k})/2+\epsilon\) fraction of the subsystems, we declare an error and discard the sample. For random circuits, we can further refine the error detection in case the depth-dependent Page curve is typical. In this case, we choose to reject a sample if more than a \((1-(P_{A}(d)-\delta))/2+\epsilon\) fraction of the subsystems, for some \(\delta>0\), has odd \(Y\)-parity \(\pi_{Y}(r_{A})=1\). Assuming that the subsystem \(Y\)-parity is independently distributed across the subsystems, this strategy will succeed in identifying an erroneous sample with failure probability \(\exp(-2\epsilon^{2}\binom{n}{k})\) by the Hoeffding bound. This assumption holds if the noisy state can be modelled by the white-noise approximation, which is valid for local noise rates on the order of \(1/n\)[36].
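A self-contained sketch of the resulting single-shot rejection rule (ours; enumerating all \(\binom{n}{k}\) subsystems is affordable for the constant \(k\) considered here):

```python
import itertools

def reject_sample(r, n, k, eps):
    """Declare an error if pi_Y(r_A) = 1 for more than a (1 - 2^{-k})/2 + eps
    fraction of the subsystems A of size n - k; r is a length-2n bit vector."""
    threshold = (1 - 2.0 ** (-k)) / 2 + eps
    subsystems = list(itertools.combinations(range(n), n - k))
    odd = sum(
        sum(int(r[i] == 1 and r[n + i] == 1) for i in A) % 2 for A in subsystems
    )
    return odd / len(subsystems) > threshold
```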
_Discussion and outlook._ In this work, we have considered Bell sampling as a model of quantum computation. We have shown that many properties of the quantum circuit preparing the underlying state can be extracted efficiently, and that in particular certain errors in the state preparation can be detected from single shots. Based on this, we have argued that the Bell samples act as classical circuit shadows. Our results spawn a wide range of exciting conceptual as well as technical questions. We close by briefly discussing some of them.
Bell sampling is realistic. Since the Bell basis measurement requires only transversal cnot and single-qubit gates, it can be naturally implemented in unit depth on various quantum processor architectures with long-range connectivity. These include in particular ion traps [53] and Rydberg atoms in optical tweezers [54]. It is more challenging to implement Bell sampling in geometrically local architectures such as superconducting qubits [5]. In such architectures, one can interleave the two copies in a geometrically local manner such that the Bell measurement is a local circuit; however, this comes at the cost of additional layers of SWAP gates for every unit of circuit depth. Alternatively, one can use looped pipeline architectures to implement the Bell measurement [55].
But is Bell sampling also practical in the near term? To satisfactorily answer this question, various sources of noise need to be analyzed in detail--tasks we defer to future work but mention here. Some of our protocols, including the purity test and the error detection protocols, explicitly include noise in the state preparation. But how severely does measurement noise affect the outcomes? In other instances, including the depth and magic tests and the low-\(T\)-count learning algorithm, we have restricted ourselves to (nearly) pure state preparations. We can at least certify that these algorithms are applicable because the purity of the state preparation can be independently checked. But in currently realistic scenarios, the state preparation of deep circuits will never be pure. An important question is therefore whether we can formulate noise-robust versions of these protocols.
We have shown that classically simulating the Bell sampling protocol with universal random circuits is classically intractable. An exciting question in this context is whether the complexity of Bell sampling might be more noise robust than computational-basis sampling in the asymptotic scenario. For universal circuit sampling in the computational basis Gao and Duan [56] and Aharonov _et al._[57] developed an algorithm that simulates sufficiently deep random circuits with a constant noise rate in polynomial time. In the Supplementary Material [28] we give some initial evidence that this simulation algorithm fails for Bell measurements. If the hardness of Bell sampling indeed turns out to be robust to large amounts of circuit noise, we face the exciting prospect of a scalable quantum advantage demonstration with classical validation and error mitigation.
Finally, it is an open question whether Bell sampling defines a universal model of quantum computation. If this was the case, then we would be able to perform universal quantum computations while at the same time validating the computation and diagnosing some circuit errors.
_Note:_ While finalizing this work, we became aware of Refs. [58; 59], where the authors independently report algorithms similar to the one we present above for learning quantum states generated by circuits with low \(T\)-count.
_Acknowledgements_ D.H. warmly thanks Abhinav Deshpande and Ingo Roth for helpful discussions that aided in the proofs of Lemma 1 and Lemma 2, respectively. We are also grateful to Bill Fefferman, Soumik Ghosh, Alexey Gorshkov, Voitech Havicek, Markus Heinrich, Marcel Hinsche, Marios Ioannou, Mikhail Lukin and Brayden Ware for discussions. This research was supported in part by NSF QLCI grant OMA-2120757 and Grant No. NSF PHY-1748958 through the KITP program on "Quantum Many-Body Dynamics and Noisy Intermediate-Scale Quantum Systems." D.H. acknowledges funding from the US Department of Defense through a QuICS Hartree fellowship.
|
2302.14501 | Temporal evolution of the extreme excursions of multivariate $k$th order
Markov processes with application to oceanographic data | We develop two models for the temporal evolution of extreme events of
multivariate $k$th order Markov processes. The foundation of our methodology
lies in the conditional extremes model of Heffernan & Tawn (2004), and it
naturally extends the work of Winter & Tawn (2016,2017) and Tendijck et al.
(2019) to include multivariate random variables. We use cross-validation-type
techniques to develop a model order selection procedure, and we test our models
on two-dimensional meteorological-oceanographic data with directional
covariates for a location in the northern North Sea. We conclude that the
newly-developed models perform better than the widely used historical matching
methodology for these data. | Stan Tendijck, Philip Jonathan, David Randell, Jonathan Tawn | 2023-02-28T11:31:30Z | http://arxiv.org/abs/2302.14501v1 | Temporal evolution of the extreme excursions of multivariate _k_th order Markov processes with application to oceanographic data
###### Abstract
We develop two models for the temporal evolution of extreme events of multivariate _k_th order Markov processes. The foundation of our methodology lies in the conditional extremes model of Heffernan and Tawn (2004), and it naturally extends the work of Winter and Tawn (2016, 2017) and Tendijck et al. (2019) to include multivariate random variables. We use cross-validation-type techniques to develop a model order selection procedure, and we test our models on two-dimensional meteorological-oceanographic data with directional covariates for a location in the northern North Sea. We conclude that the newly-developed models perform better than the widely used historical matching methodology for these data.
**Keywords:**_extreme value theory, time-series, Markov processes, oceanography_
## 1 Introduction
Farmers, stock brokers and sailors have one thing in common: they or their businesses are most heavily affected by extreme events like droughts and rainfall, stock market crashes, or extreme winds and waves, respectively. Understanding the statistical behaviour of such events as a whole is crucial for risk analyses. To make this more precise, if we let \((\mathbf{X}_{t})_{t\in\mathbb{Z}}\) be a stationary \(d\)-dimensional random process of interest, then we seek to model excursions of the process in and out of a set \(E\subset\mathbb{R}^{d}\) in time, i.e., the behaviour of
\[\{\mathbf{X}_{i}:\ i=a,\ldots,b;\ \mathbf{X}_{i}\in E;\ \mathbf{X}_{a-1}, \mathbf{X}_{b+1}\not\in E\}, \tag{1}\]
where \(E\) is associated with extreme events of the random variable \(\mathbf{X}\) which is identically distributed to any \(\mathbf{X}_{j}\), \(j\in\mathbb{Z}\). Moreover, we assume that the random process consists of multiple components that can be extreme. To solve this task, we assume that the multivariate random process is a realisation of a _k_th order Markov chain.
We use extreme value theory, a subfield of statistics, to characterise excursions. There is considerable attention to this area in the literature, but most of extreme value theory for stationary Markov chains dates back over 20 years. Rootzen (1988) and Perfekt (1997) develop limiting results for univariate Markov chains and multivariate Markov chains, respectively. Smith (1992) calculates the extremal index (Leadbetter et al., 1983) for a univariate Markov chain and Smith et al. (1997) use parametric bivariate transition distributions to model the extremes of a univariate first order Markov process. Finally, Yun (2000) develops asymptotic
theory for functionals of univariate \(k\)th order Markov extreme events. All of these authors derive results under the assumption of asymptotic dependence (Joe, 1997), i.e., for a stationary process \((X_{t})_{t\in\mathbb{Z}}\) satisfying suitable long-range mixing conditions, under the assumption that for any lag \(l=1,2,\ldots\)
\[\lim_{u\to x^{*}}\mathbb{P}(X_{t+l}>u|X_{t}>u)>0,\]
where \(x^{*}\) is the right upper end point of the distribution of \(X_{t}\). This early work does not consider what happens when asymptotic independence is present, i.e., when this limiting probability converges to \(0\) for some \(l\). The first paper which considers such processes is Bortot and Tawn (1998) who assume a first order Markov model, with Ledford and Tawn (2003) considering a general framework for the modelling of asymptotically independent processes, and key recent probabilistic developments given by Papastathopoulos et al. (2017) and Papastathopoulos et al. (2023).
Randell et al. (2015) speculate that a statistical model for the evolution of (multivariate) trajectories would be a valuable enhancement to the description of ocean storm events. The first statistical work the current authors are aware of that defines a model for the distribution of all observations during an excursion is Winter and Tawn (2016), who assume a flexible univariate first order Markov process exhibiting either asymptotic independence or asymptotic dependence across lags. Winter and Tawn (2017) incorporate a higher order dependence model to give \(k\)th order Markov processes with \(k>1\). Finally, Tendijck et al. (2019) extend that model to a \(k\)th order univariate Markov process with a directional covariate. We remark that their work cannot be considered to model the extremes of bivariate Markov processes since the associated directional covariate does not take on extreme values. Feld et al. (2015) use a sophisticated covariate model for the most extreme observation (the most extreme value of the dominant variable) in an excursion, combined with a historical matching approach for the intra-excursion trajectory; in Section 3.4 we adopt a version of this methodology as a benchmark for our case study. Finally, we mention well-established literature on multivariate time series, e.g., Tiao and Tsay (1989), which is not directly applicable to modelling environmental extremes because such models are only designed to model typical behaviours. Financial time-series models, e.g., Bauwens et al. (2006), are also not applicable because these are specifically tailored to model data exhibiting volatility, with tail switching during extreme events (Bortot and Coles, 2003).
In this work, we present a natural extension to Tendijck et al. (2019) by defining two multivariate \(k\)th order Markov models that exhibit both asymptotic (in)dependence across variables and/or at some lags. The work is motivated by our case study in which we model excursions of meteorological-oceanographic (met-ocean) data: significant wave height, wind speed, and their associated directions, for a location in the northern North Sea.
We use the following set-up. Assume that at each time \(t\in\mathbb{Z}\), the distribution of the \(d\)-dimensional random variable \(\mathbf{X}_{t}\) is stationary through time; that is, \(\mathbf{X}_{t}\) has the same distribution as some \(\mathbf{X}=(X_{1},\ldots,X_{d})\) with distribution function \(F_{\mathbf{X}}\). For \(1\leq j\leq d\), write \(F_{X_{j}}\) as the \(j\)th marginal distribution of \(F_{\mathbf{X}}\). The distribution functions \(F_{X_{j}}\) are unknown and must be estimated. For extreme arguments of \(F_{X_{j}}\), we use univariate extreme value theory to motivate a class of parametric tail forms. More precisely, we assume that for each \(1\leq j\leq d\), the tail of excesses above some high level \(u_{j}\in\mathbb{R}\) of the marginal distribution \(F_{X_{j}}\) is approximated with a generalised Pareto distribution (Davison and Smith, 1990). For non-extreme arguments \(x<u_{j}\) of the function \(F_{X_{j}}\), an empirical model usually suffices.
In multivariate extreme value theory, it is common to consider the marginals and the dependence of random variables separately, such that the usually-dominant marginal effect does not influence the modelling of a possibly complex dependence structure. So given the marginal models as discussed above, we transform the random process \((\mathbf{X}_{t})_{t\in\mathbb{Z}}\) onto standard Laplace margins \((\mathbf{Y}_{t})_{t\in\mathbb{Z}}\) using the transformation: \(X_{j}\mapsto Y_{j}:=F_{L}^{-1}(F_{X_{j}}(X_{j}))\), where \(F_{L}^{-1}\) is the inverse of the standard Laplace distribution function. Here the choice of Laplace margins is made to allow for the modelling of potential negative dependence at certain lags or across components (Keef et al., 2013).
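As an illustration of this semi-parametric marginal transform, the following sketch (ours; it ignores the directional covariates used later, and the parameter names are hypothetical: `xi` and `sigma` are fitted generalised Pareto parameters and `zeta_u` an estimate of the exceedance probability \(\mathbb{P}(X_{j}>u_{j})\)) maps one component onto standard Laplace margins:

```python
import numpy as np
from scipy.stats import genpareto

def to_laplace(x, u, xi, sigma, zeta_u):
    """Map observations x of one component onto standard Laplace margins:
    empirical cdf below the threshold u, generalised Pareto tail above it,
    F(x) = 1 - zeta_u * (1 + xi (x - u) / sigma)^(-1/xi) for x > u."""
    x = np.asarray(x, dtype=float)
    F = np.empty_like(x)
    below = x <= u
    ranks = np.searchsorted(np.sort(x), x, side="right")
    F[below] = ranks[below] / (x.size + 1)       # empirical body, kept inside (0, 1)
    F[~below] = 1 - zeta_u * genpareto.sf(x[~below] - u, c=xi, scale=sigma)
    # inverse of the standard Laplace cdf
    return np.where(F < 0.5, np.log(2 * F), -np.log(2 * (1 - F)))
```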
For multivariate random processes, there are many ways of defining an extreme event. In our case study, we take the met-ocean variable significant wave height \(H_{S}\) as the excursion-defining component. We follow Winter and Tawn (2017) and Tendijck et al. (2019) in adopting the conditional extremes model of Heffernan and Tawn (2004), see also Section 2.2, as the foundation of our approach. Without loss of generality, we
first define the component \(X_{1}\) of \(\mathbf{X}\) as the defining variable for the extreme events. So, we set our excursion set \(E=E_{u}:=(F_{X_{1}}^{-1}\{F_{L}(u)\},\infty)\times\mathbb{R}^{d-1}\) for some high threshold \(u\in\mathbb{R}_{+}\) and rewrite our definition of an excursion as
\[\{\mathbf{Y}_{i}:\ i=a,\ldots,b;\ Y_{i,1}>u;Y_{a-1,1}\leq u,\ Y_{b+1,1}\leq u\} \tag{2}\]
for \(a,b\in\mathbb{Z}\), indices for the start and the end time points of the excursion, respectively. In shorthand, the excursion is then \(\mathbf{Y}_{a:b}\). We remark that in this definition, we accept that multiple excursions can occur close together in time, and thus these cannot be considered independent. The reason for this choice is that imposing a minimal separation of excursions would complicate the modelling significantly. We recognize that this is a feature of the current approach which can be improved upon in future work.
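Extracting excursions in the sense of definition (2) from an observed series is a simple run-detection step; a minimal sketch (ours):

```python
import numpy as np

def excursions(y1, u):
    """Return index pairs (a, b) of excursions in the sense of (2): maximal
    runs with y1[t] > u for a <= t <= b, y1 being the defining component of
    the series on the Laplace scale."""
    above = np.asarray(y1) > u
    d = np.diff(above.astype(int))
    starts = np.nonzero(d == 1)[0] + 1
    ends = np.nonzero(d == -1)[0]
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size - 1]
    return list(zip(starts, ends))
```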
The remaining part of this paper is organised as follows. In Section 2, we present our strategy for modelling excursions by defining time intervals corresponding to so-called "pre-peak", "peak" and "post-peak" periods, and we present our \(k\)th order Markov models for each of these time periods. In Section 3, we apply the two Markov model forms we propose to met-ocean data for a location in the northern North Sea. We compare the model performance with a baseline historical matching approach by assessing their respective performance in estimating the tails of the distributions of complex structure variables (Coles and Tawn, 1994), corresponding to approximations of the response of hypothetical offshore or coastal facilities to extreme met-ocean environments. We find that in general the new models are preferred.
## 2 The models
### Modelling strategy
To model excursions as in definition (2), two types of approaches have been proposed in the literature of univariate extremes: a forward model (Rootzen, 1988) and a peak model (Smith et al., 1997). Both of these are two-step approaches by nature. The forward model first describes the distribution of a random exceedance \(Y_{t}>u\) with a univariate extremes model and a conditional model for the distribution for any
Figure 1: Illustration of the periods of the pre-peak, peak and post-peak periods for two excursions from a Markov model with order \(k=3\).
\(j\geq 1\) of \(Y_{t+j}|(Y_{t+j-i}=y_{t+j-i},\;i=1,\ldots,j)\) where \(y_{t}>u\). Even though this approach does not directly model the univariate equivalent of excursions in formulation (2), estimates of some extremal properties of the process \((Y_{t})_{t\geq 1}\), such as the extremal index (Leadbetter et al., 1983), can still be obtained by allowing the excursion threshold to be significantly lower than the cluster threshold used in extremal index estimators. Notably, Winter and Tawn (2016, 2017) use the forward approach in their work.
The peak model, on the other hand, does model excursions as defined here. This method relies on a univariate extremes model for the largest observation of an excursion, e.g., Eastoe and Tawn (2012), and a conditional model for observations before and after the excursion maximum. Winter and Tawn (2016) use this approach for their first order model but not for their _k_th order model (Winter and Tawn, 2017). They avoid this method explicitly because of difficulties that arise in preserving model characteristics in forward and backward simulations near the excursion maximum (i.e., the time point at which the defining variate \(X_{1}\) achieves its maximum value during the excursion).
Tendijck et al. (2019) use the peak method, but they do not address the issues associated with forward and backward simulation under the method. Because the excursion maximum is usually the most important observation of an excursion for risk assessments, we also use the peak method in the current work, but with consideration of backward and forward models. We separate the modelling of excursions into three stages: the modelling of the period of the peak, and the modelling of the pre-peak and post-peak periods; see Figure 1 in which the three time periods are illustrated for \(k=3\). Without loss of generality, let \(t=0\) be the time point at which the first component \(Y_{t,1}\) takes its maximum value within an excursion such that \(Y_{0,1}>u\) for the threshold \(u\). The period of the peak \(\mathcal{P}^{k}_{0}\) of an excursion of a \(k\)th order model is then defined as the set of \(2k-1\) observations: \(\mathcal{P}^{k}_{0}:=\{\mathbf{Y}_{t}:\;-(k-1)\leq t\leq k-1\}\) with \(Y_{0,1}>u\). The pre-peak \(\mathcal{P}^{\mathrm{pre}}\) and post-peak \(\mathcal{P}^{\mathrm{post}}\) periods are defined as the sets of observations that include the excursion maximum and the observations before and after, respectively:
\[\mathcal{P}^{\mathrm{pre}}:=\{\mathbf{Y}_{t}:\;t^{\prime}\leq t\leq 0,\;\text{ with }t^{\prime}=\min\{s<0:\;\min_{i=s,\ldots,0}\{Y_{i,1}\}>u\}\}\]
and
\[\mathcal{P}^{\mathrm{post}}:=\{\mathbf{Y}_{t}:\;0\leq t\leq t^{\prime},\; \text{with }t^{\prime}=\max\{s>0:\;\min_{i=0,\ldots,s}\{Y_{i,1}\}>u\}\},\]
so each of them intersects with \(\mathcal{P}^{k}_{0}\). The length of \(\mathcal{P}^{k}_{0}\) can be longer or shorter than the length of an excursion if the excursion ends within the period of the peak. We choose to define the period \(\mathcal{P}^{k}_{0}\) in this manner so that the pre-peak and post-peak parts of the excursion are both initialized with \(k\) observations.
We then model an excursion as follows: (i) we model the excursion maximum \(Y_{0,1}\) using a generalised Pareto distribution; (ii) we model the period of the peak \(\mathcal{P}^{k}_{0}\) conditional on the storm maximum \(Y_{0,1}\) using the model described in Section 2.2; (iii-a) if \(\min_{j=1,\ldots,k-1}Y_{j,1}<u\) (\(\min_{j=1,\ldots,k-1}Y_{-j,1}<u\)), then the period \(\mathcal{P}^{\mathrm{post}}\) (\(\mathcal{P}^{\mathrm{pre}}\)) of the excursion has ended; (iii-b) if \(\min_{j=1,\ldots,k-1}Y_{j,1}\geq u\) (\(\min_{j=1,\ldots,k-1}Y_{-j,1}\geq u\)), then the remaining part of the excursion is modelled with our time-series models from Sections 2.3-2.4 until there exist a \(j_{1},j_{2}>0\) such that \(Y_{j_{1},1}<u\) and \(Y_{-j_{2},1}<u\); (iv) if \(\max_{-j_{2}\leq i\leq j_{1}}Y_{i,1}>Y_{0,1}\), then the model for the excursion contradicts the definition of the period of the peak of an excursion, and so we reject such occurrences.
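Steps (i)-(iv) amount to a rejection sampler built from the fitted components of Sections 2.2-2.4. The following sketch (ours) shows the control flow only, with hypothetical stand-ins `sample_peak_max`, `sample_peak_period`, `forward_step` and `backward_step` for the fitted generalised Pareto, HT and time-series models:

```python
import numpy as np

def simulate_excursion(u, k, sample_peak_max, sample_peak_period,
                       forward_step, backward_step, max_tries=1000):
    """Rejection sampler for one excursion, following steps (i)-(iv).
    sample_peak_period(y0) must return a (2k-1) x d array with the peak
    Y_{0,1} = y0 in row k-1; forward_step / backward_step map the k most
    recent / earliest rows to a new d-vector drawn from MMEM or EVAR."""
    for _ in range(max_tries):
        y0 = sample_peak_max()                       # (i) GP draw of Y_{0,1} > u
        traj = np.asarray(sample_peak_period(y0))    # (ii) HT draw of the peak period
        # (iii) extend forwards until the defining component dips below u
        while traj[k - 1:, 0].min() > u:
            traj = np.vstack([traj, forward_step(traj[-k:])])
        # ... and backwards, conditioning on the earliest k observations
        while traj[:k, 0].min() > u:
            traj = np.vstack([backward_step(traj[:k]), traj])
        # (iv) reject trajectories whose defining component exceeds the peak
        if traj[:, 0].max() <= y0:
            return traj
    raise RuntimeError("no excursion accepted within max_tries")
```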
In the next sections, we discuss forward models that are applicable to model the post-peak period \(\mathcal{P}^{\mathrm{post}}\). We model the pre-peak period \(\mathcal{P}^{\mathrm{pre}}\) using the forward models applied to \((\mathbf{Y}_{-t})_{t\in\mathbb{Z}}\) (with potentially different parameters, although these would be the same if the process was time reversible). Importantly, we do not impose consistency in the forward and backward models to yield a \(k\)th order Markov chain, e.g., in the case of asymptotically dependent Markov chains the precise dependence conditions between the forward and backward hidden tail chains are given by Janssen and Segers (2014). We make this choice for two reasons: (i) for environmental applications, such as in this work, the pre-peak and post-peak periods have different distributions, see for example the asymmetry in Figure 5, which is due to different physics in the growth and decay of a storm; (ii) the assumption of a \(k\)th order Markov process is an approximation for the process that generates our data. Thus, imposing forward and backward consistency for a \(k\)th order Markov chain is likely to yield worse results for our application. So, we consider the violation of this assumption a benefit rather than a limitation, as it can yield more flexible descriptions of excursions.
### The conditional extremes model
We introduce the conditional extreme value model of Heffernan and Tawn (2004), henceforth denoted the HT model, with notation specific to modelling the period of the peak \(\mathcal{P}_{0}^{k}\). The HT model is widely studied and applied to extrapolate tails of multivariate distributions, e.g., in oceanography (Ross et al., 2020), finance (Hilal et al., 2011), spatio-temporal extremes (Simpson and Wadsworth, 2021), and multivariate spatial extremes (Shooter et al., 2022). The HT model is a limit model and its form was originally motivated by deriving possible limiting forms for numerous theoretical examples.
Let
\[\mathbf{Y}_{-(k-1):(k-1)}:=\begin{pmatrix}Y_{-(k-1),1}&\cdots&Y_{-(k-1),d}\\ \vdots&&\vdots\\ Y_{k-1,1}&\cdots&Y_{k-1,d}\end{pmatrix}\]
be a random matrix on \(\mathbb{R}^{(2k-1)\times d}\) with standard Laplace margins (Keef et al., 2013), and define the irregular random matrix \(\underline{\mathbf{Y}}\) to be \(\mathbf{Y}_{-(k-1):(k-1)}\) without the \((k,1)\)th element \(Y_{0,1}\). That is, we define the irregular matrix \(\underline{\mathbf{x}}\in\mathbb{R}^{(2k-1)d-1}\) as follows:
\[\underline{\mathbf{x}}=\begin{pmatrix}x_{-k+1,1}&x_{-k+1,2}&\cdots&x_{-k+1,d} \\ \vdots&\vdots&&\vdots\\ x_{-1,1}&x_{-1,2}&\cdots&x_{-1,d}\\ &x_{0,2}&\cdots&x_{0,d}\\ x_{1,1}&x_{1,2}&\cdots&x_{1,d}\\ \vdots&\vdots&&\vdots\\ x_{k-1,1}&x_{k-1,2}&\cdots&x_{k-1,d}\end{pmatrix},\]
such that \(\underline{\mathbf{x}}\) does not contain the \((k,1)\)th element. Equivalently, we can write \(\underline{\mathbf{x}}=\mathbf{x}_{-(k,1)}\) for \(\mathbf{x}\in\mathbb{R}^{(2k-1)\times d}\). Additionally, we assume that the joint density of \(\mathbf{Y}_{-(k-1):(k-1)}\) exists.
The conditional extremes model for \(\underline{\mathbf{Y}}\), conditional on \(Y_{0,1}\), assumes that irregular parameter matrices \(\underline{\boldsymbol{\alpha}}\in[-1,1]^{(2k-1)d-1}\), \(\underline{\boldsymbol{\beta}}\in(-\infty,1)^{(2k-1)d-1}\) and a distribution function \(H\) with non-degenerate marginals on \(\mathbb{R}^{(2k-1)d-1}\) (the space of irregular matrices) exist, such that for all irregular matrices \(\underline{\mathbf{z}}\in\mathbb{R}^{(2k-1)d-1}\) the limit
\[\lim_{u\to\infty}\mathbb{P}\left(\frac{\underline{\mathbf{Y}}-\underline{ \boldsymbol{\alpha}}Y_{0,1}}{Y_{0,1}^{\underline{\boldsymbol{\beta}}}}\leq \underline{\boldsymbol{z}},\ Y_{0,1}-u>y\ \bigg{|}\ Y_{0,1}>u\right)\]
exists, assuming component-wise operations, and that
\[H(\underline{\mathbf{z}}):=\lim_{y\to\infty}\mathbb{P}\left(\frac{\underline {\mathbf{Y}}-\underline{\boldsymbol{\alpha}}Y_{0,1}}{Y_{0,1}^{\underline{ \boldsymbol{\beta}}}}\leq\underline{\boldsymbol{z}}\ \bigg{|}\ Y_{0,1}=y\right) \tag{3}\]
exists, where \(\alpha_{i,j}\), \(\beta_{i,j}\) and \(z_{i,j}\) are the \((i,j)\)th elements of \(\underline{\boldsymbol{\alpha}}\), \(\underline{\boldsymbol{\beta}}\) and \(\underline{\mathbf{z}}\), respectively. This then implies, according to l'Hopital's rule, that for \(y>0\), \(\underline{\boldsymbol{z}}\in\mathbb{R}^{(2k-1)d-1}\)
\[\lim_{u\to\infty}\mathbb{P}\left(\frac{\underline{\mathbf{Y}}-\underline{ \boldsymbol{\alpha}}Y_{0,1}}{Y_{0,1}^{\underline{\boldsymbol{\beta}}}}\leq \underline{\boldsymbol{z}},\ Y_{0,1}-u>y\ \bigg{|}\ Y_{0,1}>u\right)=H(\underline{\boldsymbol{z}})\exp(-y). \tag{4}\]
Limit (4) in turn has the interpretation that as \(u\) tends to infinity, \((\underline{\mathbf{Y}}-\underline{\boldsymbol{\alpha}}Y_{0,1})Y_{0,1}^{- \underline{\boldsymbol{\beta}}}\) and \((Y_{0,1}-u)\) are independent conditional on \(Y_{0,1}>u\), and are distributed as \(H\) and a standard exponential, respectively.
In practice, we exploit these results by assuming they hold exactly above some high finite threshold \(u>0\). So, we approximate the conditional distribution of \(\underline{\mathbf{Y}}|Y_{0,1}=y\) for \(y>u\), \(\underline{\mathbf{y}}\in\mathbb{R}^{(2k-1)d-1}\) as
\[\mathbb{P}(\underline{\mathbf{Y}}\leq\underline{\mathbf{y}}\ |\ Y_{0,1}=y)=H\left(\frac{\underline{\mathbf{y}}-\underline{\boldsymbol{\alpha}}y}{y^{\underline{\boldsymbol{\beta}}}}\right), \tag{5}\]
and we assume independence of \((\underline{\mathbf{Y}}-\underline{\boldsymbol{\alpha}}Y_{0,1})Y_{0,1}^{-\underline {\boldsymbol{\beta}}}\) and \(Y_{0,1}\). There is no finite-dimensional parametric form for \(H\), so non-parametric methods are typically applied. However, we remark that there are applications of the conditional extreme value model where the copula \(H\) is assumed to be Gaussian (Towe et al., 2019) or a Bayesian semi-parametric model is used (Lugrin et al., 2016). For inference, see Section 2.5.
### Multivariate Markov extremal model
For ease of presentation, we present the multivariate Markov extremal model (MMEM) of order \(k\) only for a two-dimensional time-series \((\mathbf{Y}_{t})_{t\in\mathbb{Z}}\) such that \(\mathbf{Y}_{t}=(Y_{t,1},Y_{t,2})\) in the notation of Section 1, i.e., \(\mathbf{Y}_{t}\) has standard Laplace margins. We only describe a forward model that is applicable to the post-peak period \(\mathcal{P}^{\mathrm{post}}\), since the backward model has a similar construction. As mentioned in Section 2.1, we apply a different forward MMEM model to \((\mathbf{Y}_{-t})_{t\in\mathbb{Z}}\) to yield the backward model for the pre-peak period \(\mathcal{P}^{\mathrm{pre}}\). Concisely put, the MMEM exploits the HT model to estimate the distribution for \(\mathbf{Y}_{t+k}\) conditional on \((\mathbf{Y}_{t},\ldots,\mathbf{Y}_{t+k-1})\) when \(Y_{t,1}>u\) for a large threshold \(u>0\). As in Section 2.2, for each \(t\in\mathbb{Z}\), we define \(\tilde{\mathbf{x}}_{t}\in\mathbb{R}^{k}\times\mathbb{R}^{k+1}\) to be an irregular matrix with \(k+1\) rows and \(2\) columns without the element that is on the first row and first column:
\[\tilde{\boldsymbol{x}}_{t}=\begin{pmatrix}x_{t,2}\\ x_{t+1,1}&x_{t+1,2}\\ \vdots&\vdots\\ x_{t+k,1}&x_{t+k,2}\end{pmatrix}.\]
Then, we assume that for a large threshold \(u>0\), there exist parameters \(\tilde{\boldsymbol{\alpha}}_{0}\in[-1,1]^{k}\times[-1,1]^{k+1}\), \(\tilde{\boldsymbol{\beta}}_{0}\in(-\infty,1)^{k}\times(-\infty,1)^{k+1}\), and a residual random variable \(\tilde{\boldsymbol{\varepsilon}}_{t}\) on \(\mathbb{R}^{k}\times\mathbb{R}^{k+1}\) with non-degenerate marginals such that for \(t\in\mathbb{Z}\)
\[\tilde{\mathbf{Y}}_{t}|(Y_{t,1}>u)=\tilde{\boldsymbol{\alpha}}_{0}Y_{t,1}+Y_{ t,1}^{\tilde{\boldsymbol{\beta}}_{0}}\tilde{\boldsymbol{\varepsilon}}_{t}.\]
Similar to Winter and Tawn (2017), for \(t\in\mathbb{Z}\), \(j\geq 1\) when \(Y_{t+j,1}>u\), we then get
\[[Y_{t+k+j,1}\ Y_{t+k+j,2}]|(\mathbf{Y}_{t+j:t+k+j-1},Y_{t+j,1}>u)=[\alpha_{k,1 },\ \alpha_{k,2}]Y_{t+j,1}+Y_{t+j,1}^{[\beta_{k,1},\ \beta_{k,2}]}\cdot \boldsymbol{\varepsilon}_{k,1:2}^{C},\]
where \(\boldsymbol{\varepsilon}_{k,1:2}^{C}\) is short-hand notation for \([\boldsymbol{\varepsilon}_{k,1},\boldsymbol{\varepsilon}_{k,2}]\) conditional on \((\boldsymbol{\varepsilon}_{1:k-1,1},\boldsymbol{\varepsilon}_{0:k-1,2})\). For inference, we refer to Section 2.5.
### Extremal vector autoregression
Here, we introduce extremal vector autoregression (EVAR) for extremes of the process \((\mathbf{Y}_{t})_{t\geq 1}\). This model combines the HT model with a vector autoregressive model for the joint evolution of the time-series at high levels. Here we focus on the post-peak period, but note that the pre-peak period is modelled analogously. We define an EVAR model of order \(k\) with parameters \(\Phi^{(i)}\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) for \(i=1,\ldots,k\) and \(\mathbf{B}\in(-\infty,1)^{d}\) as
\[\mathbf{Y}_{t+k}|(\mathbf{Y}_{t},\ldots,\mathbf{Y}_{t+k-1})=\sum_{i=1}^{k} \Phi^{(i)}\mathbf{Y}_{t+k-i}+y^{\mathbf{B}}\boldsymbol{\varepsilon}_{t}, \tag{6}\]
with \(Y_{t,1}=y\) for \(y>u\), where \(u>0\) is a large threshold and \(\boldsymbol{\varepsilon}_{t}\) is a \(d\)-dimensional multivariate random variable that has non-degenerate margins and is independent of \((\mathbf{Y}_{t},\ldots,\mathbf{Y}_{t+k-1})\). Usually for a vector autoregressive model, parameter constraints would be imposed so that the resulting process is stationary. In the current extreme value context, stationarity is not of concern to us, since we reject trajectories that exceed the excursion maximum, and stop the process once the first component dips below threshold \(u\). We define EVAR\({}_{0}\) as a special case of EVAR corresponding to \(\mathbf{B}=\mathbf{0}\). EVAR\({}_{0}\) therefore has clear similarities with a regular vector autoregressive model (Tiao and Box, 1981), yet we emphasise that there is considerable difference between the two, since the parameters of EVAR\({}_{0}\) do not need to yield a stationary process, and the parameters of EVAR\({}_{0}\) are estimated using only extreme observations. To estimate the EVAR model,
we adopt the same approach as that used to estimate the HT model, see Section 2.5. As explained in Appendix A, the resulting parameter estimators \(\hat{\Phi}^{(i)}\) are highly correlated. Hence a reparameterisation is introduced to reduce this correlation, and improve inference efficiency and computation.
For practical applications, an advantage of EVAR over MMEM is that it provides a lower-dimensional residual distribution when \(k>1\) (with dimensions \(d\) and \(kd\), respectively). As a consequence, the EVAR residual distribution is less affected by the curse of dimensionality. A drawback of EVAR is that it might be insufficiently flexible to describe complex dependence well.
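For illustration, one forward step of the EVAR recursion (6) can be sketched as follows (our code; `residual_sampler` stands in for draws from the fitted kernel density of \(\boldsymbol{\varepsilon}_{t}\), and, analogous to the MMEM recursion, the scaling exceedance \(y\) is taken as the first component of the earliest observation in the conditioning window):

```python
import numpy as np

def evar_step(window, Phi, B, residual_sampler, rng):
    """One forward step of EVAR(k), Eq. (6): window rows are (Y_t, ..., Y_{t+k-1}),
    Phi is a list of k (d x d) matrices, B a length-d vector with entries < 1.
    The scaling uses y = Y_{t,1} = window[0, 0] > u (assumed exceedance)."""
    k = len(Phi)
    window = np.asarray(window)
    y = window[0, 0]
    mean = sum(Phi[i] @ window[k - 1 - i] for i in range(k))  # sum_i Phi^{(i)} Y_{t+k-i}
    return mean + y ** np.asarray(B) * residual_sampler(rng)
```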
### Inference for conditional models
We discuss inference for each of the conditional extremes, MMEM and EVAR models with parameter vector \(\boldsymbol{\theta}\). We discuss these together because they can be summarized in the same form. Specifically, let \(\mathbf{W}=(W_{1},\ldots,W_{d})\) be a \(d\)-dimensional random variable and assume that for some high threshold \(u>0\),
\[\mathbf{W}_{2:d}|(W_{1}>u)=\mathbf{g}_{1}(W_{1};\boldsymbol{\theta})+\mathbf{g }_{2}(W_{1};\boldsymbol{\theta})\boldsymbol{\varepsilon} \tag{7}\]
for some parametric functions \(\mathbf{g}_{1}(\,\cdot\,;\boldsymbol{\theta}):\mathbb{R}\to\mathbb{R}^{d-1}\) and \(\mathbf{g}_{2}(\,\cdot\,;\boldsymbol{\theta}):\mathbb{R}\to\mathbb{R}_{>0}^{d-1}\), where
\[\mathbf{g}_{1}(x;\boldsymbol{\theta}):=(g_{1,2}(x;\boldsymbol{\theta}),\ldots,g_{1,d}(x;\boldsymbol{\theta})),\text{ and }\mathbf{g}_{2}(x;\boldsymbol{\theta}):=(g_{2,2}(x;\boldsymbol{\theta}),\ldots,g_{2,d}(x;\boldsymbol{\theta})),\text{ for }x\in\mathbb{R}\]
where \(\boldsymbol{\varepsilon}=(\varepsilon_{2},\ldots,\varepsilon_{d})\) is a \((d-1)\)-dimensional multivariate random variable that is non-degenerate in each margin and independent of \(W_{1}\). As an example, for MMEM, \(g_{1,j}(x)=\alpha_{j}x\) for some \(\alpha_{j}\) and \(g_{2,j}(x)=x^{\beta_{j}}\) for some \(\beta_{j}\).
Next, assume that we have \(n\) observations \(\mathcal{D}:=\{\mathbf{w}_{1},\ldots,\mathbf{w}_{n}\}\) of the conditional random variable \(\mathbf{W}|W_{1}>u\), where \(\mathbf{w}_{i}=(w_{i1},\ldots,w_{id})\) with \(w_{i1}>u\) for \(i=1,\ldots,n\). We then infer \(\boldsymbol{\theta}\) by calculating the likelihood of model (7) by temporarily assuming that the \(\boldsymbol{\varepsilon}\) has a multivariate normal distribution with unknown mean \(\boldsymbol{\mu}=(\mu_{2},\ldots,\mu_{d})\) and unknown diagonal covariance matrix \(\Sigma=\boldsymbol{\sigma}^{2}I\) where \(\boldsymbol{\sigma}^{2}=(\sigma_{2}^{2},\ldots,\sigma_{d}^{2})\). These assumptions imply that the mean and the variance of \(\boldsymbol{\varepsilon}\) are estimated simultaneously with the model parameters. The likelihood is then evaluated as
\[L(\boldsymbol{\theta},\boldsymbol{\mu},\boldsymbol{\sigma}^{2};\mathcal{D})=\prod_{i=1}^{n}\prod_{j=2}^{d}\frac{1}{\sqrt{2\pi}\sigma_{j}g_{2,j}(w_{i1};\boldsymbol{\theta})}\exp\left\{-\frac{1}{2\sigma_{j}^{2}}\left(\frac{w_{ij}-g_{1,j}(w_{i1};\boldsymbol{\theta})-\mu_{j}g_{2,j}(w_{i1};\boldsymbol{\theta})}{g_{2,j}(w_{i1};\boldsymbol{\theta})}\right)^{2}\right\}.\]
Finally, the parametric assumption on the distribution of \(\boldsymbol{\varepsilon}\) is discarded, and the distribution is instead estimated, conditional on the parametric estimate \(\hat{\boldsymbol{\theta}}\) for \(\boldsymbol{\theta}\), with a kernel density \(\hat{h}_{2:d}\) using the 'observations' \(\{\boldsymbol{\varepsilon}_{i}:\ i=1,\ldots,n\}\), where \(\boldsymbol{\varepsilon}_{i}=(\varepsilon_{i2},\ldots,\varepsilon_{id})\) and
\[\varepsilon_{ij}:=\frac{w_{ij}-\hat{g}_{1,j}(w_{i1};\hat{\boldsymbol{\theta}}) }{\hat{g}_{2,j}(w_{i1};\hat{\boldsymbol{\theta}})}\]
for \(i=1,\ldots,n\), \(j=2,\ldots,d\). In the case of MMEM, we additionally require estimates for the density of the conditional random variable \(\boldsymbol{\varepsilon}_{l+1:d|2:l}=(\varepsilon_{l+1},\ldots,\varepsilon_{d})|(\varepsilon_{2},\ldots,\varepsilon_{l})\) for some \(l\in\{2,\ldots,d-1\}\). Given the same set of observations, we estimate its conditional density \(h_{l+1:d|2:l}\) as
\[\hat{h}_{l+1:d|2:l}(\varepsilon_{l+1},\ldots,\varepsilon_{d}|\varepsilon_{2}, \ldots,\varepsilon_{l})=\frac{\hat{h}_{2:d}(\varepsilon_{2},\ldots,\varepsilon _{d})}{\hat{h}_{2:l}(\varepsilon_{2},\ldots,\varepsilon_{l})},\]
where \(h_{2:l}\) is estimated as the \((l-1)\)-dimensional marginal of \(\hat{h}_{2:d}\).
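A minimal sketch of this two-stage inference for a single response component, using the MMEM forms \(g_{1}(x)=\alpha x\) and \(g_{2}(x)=x^{\beta}\) (our code; the joint residual density \(\hat{h}_{2:d}\) would subsequently be fitted to the returned residuals by, e.g., a multivariate kernel density estimator):

```python
import numpy as np
from scipy.optimize import minimize

def fit_ht(w1, wj):
    """Stage 1: Gaussian pseudo-likelihood fit of (alpha, beta), with the working
    residual mean mu and sd sigma estimated simultaneously; stage 2: keep the
    non-parametric residuals for a subsequent kernel density estimate."""
    w1, wj = np.asarray(w1), np.asarray(wj)

    def nll(par):
        alpha, beta, mu, log_sigma = par
        scale = np.exp(log_sigma) * w1 ** beta          # sd of w_j given w_1
        z = (wj - alpha * w1 - mu * w1 ** beta) / scale
        return np.sum(np.log(scale) + 0.5 * z ** 2)

    res = minimize(nll, x0=[0.5, 0.2, 0.0, 0.0], method="L-BFGS-B",
                   bounds=[(-1, 1), (None, 1), (None, None), (None, None)])
    alpha, beta = res.x[:2]
    eps = (wj - alpha * w1) / w1 ** beta                # 'observations' of epsilon_j
    return alpha, beta, eps
```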
## 3 Case Study - Northern North Sea
### Overview
We apply MMEM, EVAR and a historical matching procedure (introduced in Section 3.4, henceforth referred to as HM) to characterise excursions of significant wave height \(H_{S}\) and wind speed \(W_{s}\) with directional
covariates for a location in the northern North Sea. Our goal is to estimate parsimonious predictive models for the joint evolution of \(H_{S}\) and \(W_{s}\) time-series conditional on \(H_{S}\) being large.
In Section 3.2, we describe the available met-ocean data. In Section 3.3, we outline a model for the evolution of storm direction that is needed for our time-series models. Section 3.4 then summarises the HM procedure, and in Section 3.5, we introduce structure variable responses that approximate fluid drag loading on a marine structure such as a wind turbine or coastal defence. Finally, in Section 3.6, we compare the predictive performance of MMEM and EVAR (over a set of model orders) with the HM method in estimating structure variables for withheld intervals of time-series.
### Data
We have 53 years of hindcast data
\[\mathcal{D}:=\{(H_{S,i},W_{s,i},\theta_{i}^{H},\theta_{i}^{W}):\ i\in\mathcal{ T}\}\]
indexed with finite \(\mathcal{T}\subset\mathbb{Z}_{\geq 1}\) consisting of time-series for four three-hourly met-ocean summary statistics at a location in the northern North Sea (Reistad et al., 2009): significant wave height (\(H_{S,i}\) in metres), wind speed (\(W_{s,i}\) in metres per second), wave direction (\(\theta_{i}^{H}\) in degrees) and wind direction (\(\theta_{i}^{W}\) in degrees) for each \(i\in\mathcal{T}\). To use MMEM and EVAR, we transform significant wave height and wind speed onto Laplace marginals: \(H_{S,i}|\theta_{i}^{H}\mapsto H_{S,i}^{\mathrm{L}}\) and \(W_{s,i}|\theta_{i}^{W}\mapsto W_{s,i}^{\mathrm{L}}\), e.g., using directional marginal extreme value models for the tails (Chavez-Demoulin and Davison, 2005), but ignoring seasonality. This part of the analysis has been reported on numerous occasions, see for example Randell et al. (2015). Because the marginal transformation includes direction as a covariate and because direction is not constant during an excursion, we also establish a model for the directional evolution of excursions in order to transform them between standard and original margins, see Section 3.3.
Let \(\mathcal{D}^{L}\) be the collection of the transformed data
\[\mathcal{D}^{L}:=\{(H_{S,i}^{L},W_{s,i}^{L},\theta_{i}^{H},\theta_{i}^{W}):\ i\in \mathcal{T}\}.\]
To define excursions in \(\mathcal{D}^{L}\), we set the excursion threshold \(u\) equal to the \(95\%\) percentile of a standard Laplace distribution, i.e., \(u\approx 2.3\), yielding \(1,467\) observations of extreme excursions \(\mathcal{E}_{u}\). This choice of threshold is not unusual, and similar conclusions are drawn for excursion thresholds slightly different from our original choice.
Figure 2 shows four intervals of the time-series chosen to contain the observations corresponding to the \(100\%\), \(95\%\), \(90\%\) and \(85\%\) sample percentiles of the set of excursion maximum significant wave heights, on original and standard Laplace margins, with directional covariates. Excursions are centred around extreme events. There is strong dependence between \(H_{S}\) and \(W_{s}\) on both original and standard margins. Moreover, variables associated with significant wave height, i.e., \(H_{S}\), \(H_{S}^{L}\) and \(\theta^{H}\), are much smoother than their wind speed counterparts. Additionally, the directional covariates \(\theta^{H}\) and \(\theta^{W}\) centre around each other with no large deviations during extreme events.
In Figure 3, we visualize the joint (across-variable) dependence of the key variables \(H_{S}^{L}\) and \(W_{s}^{L}\) on the Laplace scale at time lags up to lag \(4\) using a series of scatterplots, where a unit of lag corresponds to three hours of observation time. The figure illustrates the complex dependence of the bivariate time-series of significant wave height and wind speed on Laplace margins. As expected, we observe (slow) convergence to an independent variable model as lag increases. Most notably, we observe a similar level of dependence of \((H_{S,t}^{L},W_{s,t+4}^{L})\) and \((W_{s,t}^{L},W_{s,t+4}^{L})\), which suggests, counter-intuitively, that \(H_{S,t}^{L}\) would be a better predictor for \(W_{s,t+4}^{L}\) than \(W_{s,t}^{L}\).
In Figure 4, we plot (cross) correlation functions for these variables, and also for the change in directional covariates at various lags. These show that the dependence of \((H_{S,t}^{L},H_{S,t+\tau}^{L})\) decays relatively slowly as \(\tau\) grows to \(90\) hours, and that indeed the cross dependence between \((H_{S,t}^{L},W_{s,t+\tau}^{L})\) is larger than the dependence of \((W_{s,t}^{L},W_{s,t+\tau}^{L})\) for large \(\tau\). Finally, the correlation plot of the change in directional covariates \(\Delta\theta_{i}^{H}:=(\theta_{i+1}^{H}-\theta_{i}^{H},\ \mathrm{mod}\ 360)\) and \(\Delta\theta_{i}^{W}:=(\theta_{i+1}^{W}-\theta_{i}^{W},\ \mathrm{mod}\ 360)\) on the right shows that a first-order model for these covariates is appropriate, since the correlations nearly vanish at lag \(2\) (for wind and wave) or \(6\) hours (for all other combinations).
Figure 2: Intervals of oceanographic time-series: (top) key variables: significant wave height \(H_{S,i}\) and wind speed \(W_{s,i}\) on original margins; (middle) on Laplace margins; (bottom) covariates: wave direction \(\theta_{i}^{H}\) and wind direction \(\theta_{i}^{W}\). The four columns correspond to time periods that contain the 100%, 95%, 90% and 85% empirical percentiles of \(H_{S,i}\), respectively.
Figure 3: Matrix plot of observed \(H_{S,i}^{\mathrm{L}}\) and \(W_{s,i}^{\mathrm{L}}\) at various time lags up to lag 4 (corresponding to 12 hours in real time) including cross dependence.

Figure 4: Estimated correlation and cross-correlation at various time lags of: (left) the key variables on Laplace margins: \(H_{S,i}^{L}\) and \(W_{s,i}^{L}\); (right) the covariates: change in wave direction \(\Delta\theta_{i}^{H}:=(\theta_{i+1}^{H}-\theta_{i}^{H},\mod 360)\), change in wind direction \(\Delta\theta_{i}^{W}:=(\theta_{i+1}^{W}-\theta_{i}^{W},\mod 360)\) and \(\gamma_{i}\), see definition (9).
### Directional model
We model wave direction \(\theta_{i}^{H}\) in a similar fashion to Tendijck et al. (2019), summarised as follows. Let \(\mathcal{I}\subset\mathcal{T}\) be the set of indices of the original data that correspond to all observations of any excursion. Next, let \(\{d(\theta_{i+1}^{H},\theta_{i}^{H}):\ i\in\mathcal{I}\}\) be the set of changes in wave direction, where \(d(\theta,\theta^{\prime})=(\theta-\theta^{\prime}+180\mod 360)-180\in[-180,180)\) denotes the circular difference of \(\theta\) and \(\theta^{\prime}\) in degrees. In our application, the set of changes in wave direction during excursions does not contain values close to \(-180\) or \(180\); in particular, all of the observed changes centre around \(0\).
For \(i\in\mathcal{I}\), we transform observations \(d(\theta_{i+1}^{H},\theta_{i}^{H})\mapsto\delta_{i}^{H}:=\Phi^{-1}(\hat{F}(d( \theta_{i+1}^{H},\theta_{i}^{H})))\) on Gaussian margins, where \(\hat{F}\) denotes the empirical distribution function of the set of changes in wave directions. Assume that \(\{\delta_{i}^{H}\,:\ i\in\mathcal{I}\}\) are realisations of the random variables \(\{\Delta_{i}^{H}\,:\ i\in\mathcal{I}\}\). We estimate the following autoregressive model for \(\Delta_{t}^{H}\) of order \(p_{1}=1,2,3,\dots\) with parameters \(\varphi_{j}^{\mathrm{H}}\in\mathbb{R}\) for \(j=1,\dots,p_{1}\) as
\[\Delta_{t}^{H}|(\Delta_{t-1}^{H},\dots,\Delta_{t-p_{1}}^{H})=\sum_{j=1}^{p_{1} }\varphi_{j}^{\mathrm{H}}\Delta_{t-j}^{H}+\zeta(H_{S,t})\varepsilon_{t}, \tag{8}\]
where \(\varepsilon_{t}\) is a standard Gaussian random variable, and standard error \(\zeta(h)\) is given by
\[\zeta^{2}(h)=\lambda_{1}+\lambda_{2}\exp(-\lambda_{3}h)\]
with \(\lambda_{j^{\prime}}>0\) for \(j^{\prime}=1,2,3\), see Tendijck et al. (2019). In particular, the standard error \(\zeta(h)\) decays as \(h\) grows due to the significantly larger amounts of energy needed to change the direction of more severe sea states. The parameters of this model are inferred with maximum likelihood, and in contrast to the inference discussed in Section 2.5, we do not reject the assumption that \(\varepsilon_{t}\) is a standard Gaussian. In practice, we use \(p_{1}=1\) in line with Tendijck et al. (2019).
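A minimal sketch of simulating model (8) with \(p_{1}=1\) is given below; the parameter values are purely illustrative (not fitted estimates), and the Gaussian-margin increments would subsequently be mapped back to degrees via the empirical distribution function \(\hat{F}\).

```python
import numpy as np

def simulate_delta_h(h_series, phi=0.7, lam=(0.1, 0.5, 0.2), seed=None):
    """Simulate Gaussian-margin wave-direction increments under model (8)."""
    rng = np.random.default_rng(seed)
    l1, l2, l3 = lam
    # Height-dependent noise scale: zeta^2(h) = lambda_1 + lambda_2 * exp(-lambda_3 * h).
    zeta = np.sqrt(l1 + l2 * np.exp(-l3 * np.asarray(h_series)))
    delta = np.zeros(len(h_series))
    for t in range(1, len(h_series)):
        delta[t] = phi * delta[t - 1] + zeta[t] * rng.standard_normal()
    return delta
```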
Given model (8), we propose the following model
\[\theta_{t}^{W}=\theta_{t}^{H}+\gamma_{t}\mod 360 \tag{9}\]
for wind direction \(\theta_{t}^{W}\) conditional on wave direction \(\theta_{t}^{H}\), where \(\gamma_{t}\) is a zero-mean stationary AR(\(p_{2}\)) process. That is, there exist parameters \(\varphi_{j}^{\mathrm{W}}\in\mathbb{R}\), \(1\leq j\leq p_{2}\), and a non-degenerate residual distribution \(r_{t}\) independent of \(\gamma_{t-j}\) for \(j\geq 1\), such that
\[\gamma_{t}|(\gamma_{t-1},\dots,\gamma_{t-p_{2}})=\sum_{j=1}^{p_{2}}\varphi_{j} ^{\mathrm{W}}\gamma_{t-j}+r_{t},\]
and such that the polynomial \(1-\sum_{j=1}^{p_{2}}\varphi_{j}^{\mathrm{W}}z^{j}\) has roots outside the unit circle. The model parameters and the distribution of \(r_{t}\) are inferred as described in Section 2.5 conditional on the model order \(p_{2}\), which is selected by investigating the correlation function in Figure 4 and the partial autocorrelation function of \(\gamma_{t}\) (not reported). In our application, we conclude that \(p_{2}=1\) is sufficient.
### Historical matching
An empirical method for simulating excursions is described in Feld et al. (2015) and termed historical matching (HM) in this work. They model trajectories of significant wave height, wave direction, season and wave period during extreme events. The key assumption they make is that storm trajectory (or excursion) profiles are not independent of storm maximum conditions. Specifically, the HM approach is a composition of four models: (i) a model for storm maximum wave direction; (ii) a model for storm maximum significant wave height conditional on storm maximum wave direction; (iii) a model that selects at random a historical storm trajectory with similar storm maximum characteristics to that simulated; (iv) a model that adjusts the historical storm trajectory by matching storm maximum characteristics of simulated and historical storms.
Specific details of the individual models are as follows, but this level of detail is not required for understanding the impact of the core methodology developments in Section 3. For model (i), we simply sample at
random from the observed wave directions associated with storm maximum significant wave height (excursion maximum). In model (ii), storm maximum significant wave height is modelled using a generalised Pareto distribution conditional on the sampled storm maximum wave direction, with parameters represented as B-splines of the directional covariate within a generalised additive model (Chavez-Demoulin and Davison, 2005). In model (iii), we use a distance measure to calculate the dissimilarity between pairs of storm maximum significant wave heights and storm maximum wave directions for simulated and historical trajectories. Here, we use the heuristic recommended by Feld et al. (2015), ensuring that a difference of 5 degrees in storm maximum wave direction corresponds to the same dissimilarity as \(0.5\)m of difference in storm maximum significant wave height; one of the 20 closest matching storms is then selected at random and associated with the simulated storm maximum. In model (iv), we match the variables of the chosen historical trajectory as follows: (a) the historical significant wave height series is multiplied by the ratio of the simulated maximum significant wave height to the maximum of the historical significant wave height; (b) the historical wave directions are shifted such that the storm maximum wave directions of simulated and historical trajectories coincide; (c) the associated historical wind directions are rotated in exactly the same way as the wave directions; (d) for the full set of historical storm maxima, storm maximum associated wind speed \(W_{s}^{M}\) (namely the value of wind speed at the time point corresponding to the storm maximum event) conditional on storm maximum significant wave height \(H_{S}^{M}\) is described using linear regression with parameters \(\beta_{0},\beta_{1}\in\mathbb{R}\), \(\sigma>0\):
\[W_{s}^{M}|H_{S}^{M}=\beta_{0}+\beta_{1}H_{S}^{M}+\sigma\varepsilon\]
with \(\varepsilon\) a standard normal random variable; (e) wind speed for the selected historical trajectory is scaled linearly such that it agrees with the storm maximum associated wind speed from (d).
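A hedged sketch of the matching and adjustment steps (iii) and (iv)(a)-(b) follows; it assumes numpy arrays for the historical trajectories, and the additive combination of the two dissimilarity terms is one natural reading of the Feld et al. (2015) heuristic.

```python
import numpy as np

def hm_dissimilarity(hs_max_sim, dir_max_sim, hs_max_hist, dir_max_hist):
    """Dissimilarity of storm maxima: 5 degrees of direction difference
    contributes the same as 0.5 m of significant wave height difference."""
    d_dir = np.abs((dir_max_hist - dir_max_sim + 180) % 360 - 180)
    d_hs = np.abs(hs_max_hist - hs_max_sim)
    return d_hs / 0.5 + d_dir / 5.0

def hm_adjust(hist_hs, hist_dir, hs_max_sim, dir_max_sim):
    """Steps (a)-(b): rescale heights and rotate directions of a matched storm."""
    hs_new = hist_hs * (hs_max_sim / hist_hs.max())
    shift = dir_max_sim - hist_dir[np.argmax(hist_hs)]
    dir_new = (hist_dir + shift) % 360
    return hs_new, dir_new
```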
Perhaps the main deficiencies of the HM approach are that (i) it does not provide a means of modelling the extremal temporal dependence characteristics of excursions, nor the extremal dependence between different components of the time-series, at levels beyond those observed in the historical sample; and (ii) it does not provide a model framework for the assessment of fit or for uncertainty propagation.
### Response variable
To measure the practical impact of extreme met-ocean excursions, we define structure response variables for a simple hypothetical marine offshore facility. A structure response variable is a function of the met-ocean variables, key to assessing the integrity of the design of a physical structure of interest. Specifically, we consider a structure in the form of a unit cube standing above the water, supported by thin rigid legs, with vertical cube faces aligned with cardinal directions. Only wave and wind impact on the cube itself is of interest to us, and we neglect the effects of other oceanic phenomena such as swell, surge, tide, and potential climate non-stationarity. For simplicity, we also assume that when \(H_{S}<h\), for some known value \(h>0\), the wave impact on the structure is negligible, and structural response is dominated by wind. When \(H_{S}\geq h\), we assume that wave impact increases cubically with \(H_{S}\) and quadratically with \(W_{s}\) (see Morison et al. 1950 and Ma and Swan 2020 for supporting literature). Hence, the impact of an extreme excursion on the structure is defined by the instantaneous response variable \(R\)
\[R(H_{S},W_{s},\theta^{H},\theta^{W};c,h)=\begin{cases}c\cdot I_{W}^{2}(W_{s},\theta^{H}-\theta^{W})&\text{for }H_{S}<h,\\ c\cdot I_{W}^{2}(W_{s},\theta^{H}-\theta^{W})+A(\theta^{H})\cdot(H_{S}-h)\cdot H_{S}^{2}&\text{for }H_{S}\geq h,\end{cases}\]
where \(I_{W}:\mathbb{R}_{>0}\times[-180,180)\rightarrow\mathbb{R}\) is the inline wind-speed, defined below, \(A:[-180,180)\rightarrow[1,\sqrt{2}]\) is the exposed cross-sectional area of the cube, see below, and the parameter \(c>0\) is specified such that both significant wave height and wind speed have an approximately equal contribution to the largest values of \(R\). Here both \(c\) and \(h\) are values that can be changed by altering structural features. The exposed cross-sectional area \(A(\theta)\in[1,\sqrt{2}]\) of the cube is given by
\[A(\theta^{H}):=1/\cos\big(\big[\big((\theta^{H}+45)\ \mathrm{mod}\ 90\big)-45\big]\cdot\pi/180\big)\]
for a given wave direction \(\theta^{H}\). The inline wind-speed \(I_{W}\) is the component of the wind speed in the direction of the wave given by
\[I_{W}(W_{s},\theta^{H}-\theta^{W})=W_{s}\cos((\theta^{H}-\theta^{W})\cdot\pi/180).\]
To simplify notation, we write \(R_{i}(c,h):=R(H_{S,i},W_{s,i},\theta_{i}^{H},\theta_{i}^{W};c,h)\) for \(i\in\mathcal{T}\). To define a structure response for a complete excursion \(\mathcal{E}_{u}\), we write
\[\mathcal{E}_{u}:=\{(H_{S,i},W_{s,i},\theta_{i}^{H},\theta_{i}^{W}):\ a\leq i\leq b\},\]
for some \(a<b\) such that for a threshold \(u>0\) (on Laplace margins) \(H_{S,i}^{L}>u\) for \(a\leq i\leq b\) and \(H_{S,a-1}^{L},H_{S,b+1}^{L}\leq u\). Next, let \(i^{*}:=i^{*}(\mathcal{E}_{u})\) be the time of the excursion maximum, i.e., \(H_{S,i^{*}}\) is the maximum of \(H_{S,i}\) over \(\mathcal{E}_{u}\). We define two natural structure response variables representing the maximum impact of an excursion \(\max_{\{a\leq i\leq b\}}R_{i}(c,h)\), and the cumulative impact of an excursion \(\sum_{\{a\leq i\leq b\}}R_{i}(c,h)\), respectively. For our application, we consider slight alterations \(R^{\max}(c,h,\mathcal{E}_{u})\) and \(R^{\text{sum}}(c,h,\mathcal{E}_{u})\)
\[R^{\max}(c,h,\mathcal{E}_{u}):=\max_{\{a\leq i\leq b,\ |i-i^{*}|>2\}}R_{i}(c,h), \qquad R^{\text{sum}}(c,h,\mathcal{E}_{u}):=\sum_{\{a\leq i\leq b,\ |i-i^{*}|>2\}}R_{i}(c,h).\]
That is, we consider responses that do not depend directly on the characteristics of the excursion near to the excursion maximum, to exaggerate the dependence of the structure variables on pre-peak and post-peak periods compared to the period of the peak, and hence the importance of estimating good models for the pre-peak and post-peak periods using MMEM or EVAR. Moreover, we define \(R^{\max}(c,h)\) and \(R^{\text{sum}}(c,h)\) as the random structure responses related to a random excursion.
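A sketch of these definitions in Python follows; the values of \(c\) and \(h\) are illustrative placeholders, and the excursion is assumed long enough to contain points outside the excluded window around the peak.

```python
import numpy as np

def response(hs, ws, th_h, th_w, c=1.0, h=4.0):
    """Instantaneous structure response R, vectorised over time."""
    hs, ws = np.asarray(hs, float), np.asarray(ws, float)
    th_h, th_w = np.asarray(th_h, float), np.asarray(th_w, float)
    i_w = ws * np.cos(np.deg2rad(th_h - th_w))            # inline wind speed
    a = 1.0 / np.cos(np.deg2rad((th_h + 45) % 90 - 45))   # exposed area in [1, sqrt(2)]
    wave = np.where(hs >= h, a * (hs - h) * hs ** 2, 0.0)
    return c * i_w ** 2 + wave

def r_max_and_sum(hs, ws, th_h, th_w, c=1.0, h=4.0):
    """R^max and R^sum over one excursion, excluding |i - i*| <= 2."""
    r = response(hs, ws, th_h, th_w, c, h)
    i_star = int(np.argmax(hs))
    keep = np.abs(np.arange(len(r)) - i_star) > 2
    return r[keep].max(), r[keep].sum()
```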
### Model comparisons
Here, we use our time-series models to characterise extreme excursions for the met-ocean data \(\mathcal{D}\) of Section 3.2 with structure responses \(R^{\max}\) and \(R^{\text{sum}}\). First, we investigate the model fits, then we describe our model comparison procedure, and finally we assess model performance using a visual diagnostic.
We fit EVAR, EVAR\({}_{0}\) and MMEM with model orders \(k=1,2,\ldots,6\) to data \(\mathcal{D}^{L}\). The fitting of these 18 models is a two-stage procedure. In the first stage, we fit (six) conditional extremes models for the period of the peak \(\mathcal{P}_{0}^{k}\) for each \(k\). In the second stage, we fit \(2\cdot 18=36\) models to the pre-peak \(\mathcal{P}^{\text{pre}}\) and post-peak \(\mathcal{P}^{\text{post}}\) periods. In Table 1, we report parameter estimates of the period of the peak model, and in Tables 2-3, we report parameter estimates of MMEM on \(\mathcal{P}^{\text{post}}\) and \(\mathcal{P}^{\text{pre}}\), respectively. Finally, we report parameter estimates of EVAR on \(\mathcal{P}^{\text{post}}\) and \(\mathcal{P}^{\text{pre}}\) in Tables 4-5, respectively. These indicate that all models agree on some level of asymptotic independence at each lag (coefficients of \(\tilde{\boldsymbol{\alpha}}_{0}\) are less than 1), with decreasing levels of dependence as lag increases, which can be seen in the decreasing coefficients of \(\tilde{\boldsymbol{\alpha}}_{0}\) for entries further down the table. We remark that for EVAR(2) on \(\mathcal{P}^{\text{pre}}\), the coefficient of \(H_{S}\) at time \(t+1\) (\(0.96\)) is larger than the coefficient of \(W_{s}\) at time \(t+1\) (\(0.50\)) for estimating \(W_{s}\) at time \(t+2\). This has the interpretation that significant wave height might be a better predictor for wind speed than wind speed itself, as also suggested by Figure 4.
For each of the 18 models and HM, we simulate \(20,000\) excursions to estimate model properties. First, we illustrate model characteristics for EVAR(4) in Figure 5 by plotting simulated excursions such that the excursion maximum significant wave height takes on values between 11.5m and 12.5m (left). We visually compare these with observed excursions for the same interval of excursion maxima (middle). On the right, we summarize simulated and observed excursions in terms of the median, the 10% and 90% percentiles of the sampling distribution at each time period. Finally, in the bottom panel we plot
\[\mathbb{P}\left(\min\{H_{S,i}^{L}:\ i=\min(0,\tau),\ldots,\max(0,\tau)\}>u\ \Big{|}\ H_{S,0}\in[11.5,12.5]\right), \tag{10}\]
for \(\tau\in\mathbb{Z}\), i.e., we plot the survival probability for an excursion relative to the time of the excursion maximum, conditional on the excursion maximum taking a value between 11.5m and 12.5m, for both the simulated excursions and the observed excursions. We observe good agreement in the distribution of the length of an excursion with respect to the excursion maximum, as both estimates are close to each other.

Table 1: Estimates of model parameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) for the period of the peak \(\mathcal{P}_{0}^{k}\) with model order \(k=4\). Also shown in parentheses are 90% bootstrap confidence intervals. The structure of the irregular matrix estimates of \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) is explained in Section 2.2.

Table 2: Estimates of MMEM model parameters \(\tilde{\boldsymbol{\alpha}}_{0}\) and \(\tilde{\boldsymbol{\beta}}_{0}\) with model order \(k=4\) for \(\mathcal{P}^{\mathrm{post}}\). Also shown in parentheses are 90% bootstrap confidence intervals. The structure of the irregular matrix estimates of \(\tilde{\boldsymbol{\alpha}}\) and \(\tilde{\boldsymbol{\beta}}\) is explained in Section 2.3.

Table 4: Estimates of EVAR model parameters (Section 2.4) with model order \(k=1\) (left) and \(k=2\) (right) for \(\mathcal{P}^{\mathrm{post}}\). Also shown in parentheses are 90% bootstrap confidence intervals.
In the supplementary material, we produce analogous plots for each of the 18 models considered and HM. We observe that EVAR(4) characterizes the period of the peak, and also the pre-peak and post-peak periods of the excursion well. Moreover, EVAR(4) also reproduces the observed excursion survival probability.
Next, in Figure 6, we plot estimates of conditional probabilities \(\chi_{H}(u,l):=\mathbb{P}(H^{L}_{S,t+l}>u\mid H^{L}_{S,t}>u)\), \(\chi_{HW}(u,l):=\mathbb{P}(W^{L}_{s,t+l}>u\mid H^{L}_{S,t}>u)\), and \(\chi_{W}(u,l):=\mathbb{P}(W^{L}_{s,t+l}>u\mid W^{L}_{s,t}>u)\) using EVAR and MMEM with model orders 1 and 4, together with HM, and we compare these with empirical estimates.1 We make the following observations: HM is significantly worse at characterising each of \(\chi_{H}\), \(\chi_{W}\) and \(\chi_{HW}\) compared to EVAR and MMEM. Moreover, estimates obtained from EVAR of large enough order, e.g., \(k\geq 4\), agree well with empirical estimates. MMEM, on the other hand, yields estimators that are slightly positively biased. In particular, larger model orders yield considerable improvements.
Footnote 1: We leave out EVAR\({}_{0}\) in this analysis for conciseness since its estimates are very similar to the estimates obtained using EVAR of the same model order.
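The extremal dependence measures plotted in Figure 6 admit simple empirical estimators; a minimal sketch, assuming time-aligned numpy arrays on Laplace margins:

```python
import numpy as np

def chi_hat(x, y, u, lag):
    """Empirical estimate of P(y_{t+lag} > u | x_t > u)."""
    x, y = np.asarray(x), np.asarray(y)
    cond = x[:-lag] > u
    return float(np.mean(y[lag:][cond] > u))

# chi_H  = chi_hat(h_lap, h_lap, u, lag)
# chi_HW = chi_hat(h_lap, w_lap, u, lag)
# chi_W  = chi_hat(w_lap, w_lap, u, lag)
```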
Figures 5 and 6 thus provide an assessment of goodness-of-fit for each of the models. To compare MMEM and EVAR with each other and with HM, we take a similar approach to Gandy et al. (2022), who adjust standard cross-validation techniques to extreme value applications by taking a small training set and a larger test set. We select at random 25% of the observed excursions for our training sample; the remaining 75% forms our test sample. Below, we calculate performance statistics for the response variables by averaging over 50 such random partitions of the sample.
For training, we fit EVAR, EVAR\({}_{0}\) and MMEM with model orders \(k=1,2,\ldots,6\) as explained in the second paragraph of this section. For each of the 18 models and HM, we simulate \(20,000\) excursions, calculate structure response variables \(R^{\max}\) and \(R^{\text{sum}}\), and compare distributions of simulated structure response variables with those corresponding to the withheld test data. This is achieved by defining a dissimilarity distance function \(D\) that measures the level of difference in tails of distribution functions. We select 20 equidistant percentiles \(p_{1},\ldots,p_{20}\) ranging from 97% to 99.9% corresponding to moderately extreme to very extreme levels with respect to the (smaller) training sample but not too extreme for the (larger) withheld data. We define the distance \(D\) of distribution functions \(F_{M}\) (of model \(M\)) and \(F_{E}\) (an empirical distribution function) as the mean absolute relative error over these percentiles, i.e.,
\[D(F_{M},F_{E};p_{1},\ldots,p_{20})=\frac{1}{20}\sum_{j=1}^{20}\left|\frac{F_ {E}^{-1}(p_{j})-F_{M}^{-1}(p_{j})}{F_{E}^{-1}(p_{j})}\right|.\]
We remark that in the above definition, we never divide by zero because we only use \(D\) to measure the dissimilarity of distributions of positive random variables.
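The distance \(D\) is straightforward to compute from simulated and withheld response samples; a minimal sketch:

```python
import numpy as np

def tail_distance(model_samples, empirical_samples,
                  probs=np.linspace(0.97, 0.999, 20)):
    """Mean absolute relative error of tail quantiles, the distance D above."""
    qm = np.quantile(model_samples, probs)
    qe = np.quantile(empirical_samples, probs)
    return float(np.mean(np.abs((qe - qm) / qe)))
```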
In Figure 7, we show the results for the 50 random partitions of the original sample by plotting the average distance \(D\) (with 80% confidence intervals) for each model together with HM, for four different structure response variables corresponding to two choices of \(c\) and \(h\) for each of \(R^{\max}\) and \(R^{\text{sum}}\). Similar studies for other values of \(c\) and \(h\) were carried out, and the general findings are consistent with those illustrated in Figure 7. For legibility, we omit confidence bands for EVAR\({}_{0}\) since the difference with EVAR is minimal. Model selection now involves choosing the model that yields the smallest average dissimilarity \(D\) whilst keeping the model order as low as possible.

Figure 5: Excursions of \(H_{S}\) and \(W_{s}\) from the EVAR(4) model (left; black), and data (middle; right) on original margins such that storm peak significant wave height is in \([11.5,12.5]\); (right) summaries of the data (black) and EVAR(4) (red) excursions: median (solid), and the 10% and 90% quantiles (dashed). In the bottom panel, we plot survival probabilities for observed (black) and EVAR(4) (red) excursions relative to the time of the excursion maximum, see equation (10).

Figure 6: Estimates of measures of extremal dependence across time lags 1 and 4, and variables given by \(\chi_{H}\), \(\chi_{HW}\) and \(\chi_{W}\) (left, middle, and right respectively) for each of the models: EVAR (red), MMEM (blue), HM (green), data (grey). For EVAR and MMEM, we plot these estimates for different model orders \(k=1\) and \(k=4\) with line types: one (solid), four (dotted). Moreover, the grey region depicts the confidence bounds for empirical estimates of these extremal dependence measures from the data.
We make a number of observations. For the \(R^{\max}\) response, EVAR and MMEM clearly outperform HM regardless of model order. However, for the \(R^{\text{sum}}\) response, high-order (e.g., \(k=4,5,6\)) EVAR and MMEM are necessary to be competitive with HM. We observe also that performance of EVAR and MMEM does not significantly improve or worsen for \(k>4\). This finding is further supported by an unpublished study with Markov model orders of \(k\leq 10\). We note that illustrations of excursions in the supplementary material demonstrate that MMEM(1) does not explain the variability of the pre-peak and post-peak periods well.
By looking at the average relative errors in \(R^{\max}\) and \(R^{\text{sum}}\) of our proposed selection of methods, we conclude that a third or fourth order MMEM and a fourth order EVAR are competitive models within their class. Since these models have similar performance, we prefer EVAR(4) because of its simpler two-dimensional residual distribution.
## 4 Conclusions and discussion
In this paper, we provide models for extreme excursions of multivariate time-series. Excursions are characterized by a three-stage modelling procedure for the period of the peak, the pre-peak and the post-peak periods. We model the period of the peak using the conditional extremes framework (Heffernan and Tawn, 2004), and for the pre-peak and post-peak periods, we define two classes of time-series models: MMEM, motivated by the Markov extremal model of Winter and Tawn (2017); and EVAR, an extreme-value extension of a vector autoregressive model. We compare these excursion models with a baseline historical matching method, motivated by Feld et al. (2015). We find that the excursion models - for a reasonably informed choice of \(k\), the order of the Markov process - are at least competitive with historical matching and often outperform it in the estimation of the tail of a range of notional structure response variables for a met-ocean application in the northern North Sea.
Figure 7: Average mean relative errors of HM, EVAR, EVAR\({}_{0}\) and MMEM (dashed/dotted) and 80% confidence regions (shaded) for estimating the distribution of structure responses using 25% of data for training and 75% of data for testing. For details, see the text.
Statistical modelling of extreme excursions of multivariate time-series is difficult as it requires the estimation of complex model forms. MMEM requires the estimation of the conditional distribution of high-dimensional residual random variables and EVAR is highly parameterized. Nevertheless, for realistically sized directional samples of significant wave height and wind speed time-series, we found that MMEM(3), MMEM(4) and EVAR(4) perform well. Even when the empirical historical matching procedure is competitive, adoption of an excursion model is advantageous because it allows for rigorous uncertainty quantification. We expect that our excursion models are applicable more generally, e.g., for the modelling of higher-dimensional met-ocean time-series and spatial fields.
We model wind speed and significant wave height marginally conditional on directional covariates. However, we did not investigate the explicit effect of the directional components on the dependence models. Since we remove the marginal effect of direction before modelling the dependence, we do not expect this covariate to have a significant impact on the dependence. Nevertheless, it would be very interesting to adapt our models to be able to investigate this further in future research.
## Appendix A Reparameterization of EVAR
As opposed to inference for vector autoregressive models, we cannot estimate the EVAR parameters by least squares due to the presence of the \(Y_{t,1}^{\mathbf{B}}\) term. Instead, we apply the inference methodology discussed in Section 2.5. Not surprisingly, the parameter estimates \(\hat{\Phi}^{(i)}\) for \(i=1,\ldots,k\) are highly intercorrelated because of the linear dependence between the components of \(\mathbf{Y}_{t-1},\ldots,\mathbf{Y}_{t-k}\). Reparameterization to reduce the correlation between parameter estimators is therefore attractive.
To reparameterize the model, we proceed as follows. First, we assume that the conditional extremes model is applicable to \(Y_{t-i,j}\) conditional on \(Y_{t-k,1}\) for each \(i=0,\ldots,k\) and \(j=1,\ldots,d\) apart from \((i,j)=(k,1)\), i.e., there exist parameters \(\alpha_{i,j}\in[-1,1]\) and \(\beta_{i,j}<1\) such that
\[\lim_{y\to\infty}\mathbb{P}\left(\frac{Y_{t-i,j}-\alpha_{i,j}y}{y^{\beta_{i,j} }}\leq x\ \Big{|}\ Y_{t-k,1}=y\right)=H_{i,j}(x),\]
where \(H_{i,j}\) is a non-degenerate distribution function. Following the EVAR model (6), we now must have
\[\begin{aligned} Y_{t+k,1}&=\Phi^{(1)}_{1,1}Y_{t+k-1,1}+\cdots+\Phi^{(1)}_{d,1}Y_{t+k-1,d}+\cdots+\Phi^{(k)}_{1,1}Y_{t,1}+\cdots+\Phi^{(k)}_{d,1}Y_{t,d}+Y^{B_{1}}_{t,1}\varepsilon_{t,1}\\ &=\left(\Phi^{(1)}_{1,1}\alpha_{k-1,1}+\cdots+\Phi^{(1)}_{d,1}\alpha_{k-1,d}+\cdots+\Phi^{(k)}_{1,1}+\cdots+\Phi^{(k)}_{d,1}\alpha_{0,d}\right)Y_{t,1}+o_{p}(Y_{t,1})\end{aligned}\]
conditional on \(Y_{t,1}>v\) as \(v\) tends to infinity. On the other hand, we have \(Y_{t+k,1}|(Y_{t,1}>v)=\alpha_{0,1}Y_{t,1}+o_{p}(Y_{t,1})\). So,
\[\alpha_{0,1}=\Phi^{(1)}_{1,1}\alpha_{k-1,1}+\cdots+\Phi^{(1)}_{d,1}\alpha_{k- 1,d}+\cdots+\Phi^{(k)}_{1,1}\cdot 1+\cdots+\Phi^{(k)}_{d,1}\alpha_{0,d},\]
which explains the collinearity of the estimators. We now propose the following reparameterization \((\mathbf{B},\tilde{\Phi}^{(1)},\ldots,\tilde{\Phi}^{(k)})\). For each \(1\leq l\leq d\), we obtain \(\tilde{\Phi}^{(k-i)}_{j,l}\), i.e., the \((j,l)\)th element of \(\tilde{\Phi}^{(k-i)}\), inductively for \(0\leq i\leq k-1\) and \(1\leq j\leq d\):
\[\Phi^{(k-i)}_{j,l}=\left\{\begin{array}{ll}\hat{\alpha}_{0,l}+\tilde{\Phi}^{ (k)}_{1,l},&\mbox{ for }i=0,\ j=1,\\ -\tilde{\Phi}^{(k-i)}_{j-1,l}\cdot\hat{\alpha}_{i,j-1}/\hat{\alpha}_{i,j}+ \tilde{\Phi}^{(k-i)}_{j,l},&\mbox{ for }i=0,\ldots,k-1,\ j=2,\ldots,d,\mbox{ conditional on }\tilde{\Phi}^{(k-i)}_{1,l},\\ -\tilde{\Phi}^{(k-i+1)}_{d,l}\cdot\hat{\alpha}_{i-1,d}/\hat{\alpha}_{i,1}+ \tilde{\Phi}^{(k-i)}_{1,l},&\mbox{ for }i=1,\ldots,k-1,\ j=1\mbox{ conditional on }\tilde{\Phi}^{(k-i+1)}_{d,l}.\end{array}\right.\]
where \(\hat{\alpha}_{i,j}\) is the maximum likelihood estimate for \(\alpha_{i,j}\). Under this reparameterization, estimators of \(\tilde{\Phi}^{(i)}_{j,l}\) are less correlated, which we demonstrated in unreported experiments comparing the dependence of the original and the reparameterized parameters using adaptive MCMC methodology (Roberts and Rosenthal, 2009).
2309.16064 | Masked Autoencoders are Scalable Learners of Cellular Morphology | Inferring biological relationships from cellular phenotypes in high-content
microscopy screens provides significant opportunity and challenge in biological
research. Prior results have shown that deep vision models can capture
biological signal better than hand-crafted features. This work explores how
self-supervised deep learning approaches scale when training larger models on
larger microscopy datasets. Our results show that both CNN- and ViT-based
masked autoencoders significantly outperform weakly supervised baselines. At
the high-end of our scale, a ViT-L/8 trained on over 3.5-billion unique crops
sampled from 93-million microscopy images achieves relative improvements as
high as 28% over our best weakly supervised baseline at inferring known
biological relationships curated from public databases. Relevant code and
select models released with this work can be found at:
https://github.com/recursionpharma/maes_microscopy. | Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw | 2023-09-27T23:11:35Z | http://arxiv.org/abs/2309.16064v2 | # Masked Autoencoders are Scalable Learners of Cellular Morphology
###### Abstract
Inferring biological relationships from cellular phenotypes in high-content microscopy screens provides significant opportunity and challenge in biological research. Prior results have shown that deep vision models can capture biological signal better than hand-crafted features. This work explores how self-supervised deep learning approaches scale when training larger models on larger microscopy datasets. Our results show that both CNN- and ViT-based masked autoencoders significantly outperform weakly supervised baselines. At the high-end of our scale, a ViT-L/8 trained on over 3.5-billion unique crops sampled from 93-million microscopy images achieves relative improvements as high as 28% over our best weakly supervised baseline at inferring known biological relationships curated from public databases. Relevant code and select models released with this work can be found at: [https://github.com/recursionpharma/maes_microscopy](https://github.com/recursionpharma/maes_microscopy).
## 1 Introduction
A fundamental challenge in biological research is quantifying complex cellular phenotypes and relating them across genetic and chemical perturbations [41, 53]. Image-based profiling has proven to be a powerful approach for exploring cellular phenotypes induced by genetic and chemical perturbations [3]. These experiments use _high content screening_ (HCS) systems combining automated microscopy with high throughput technologies to assay perturbations on a massive scale. Recent public releases of HCS image sets, like RxRx3 [19] and JUMP-CP [9], consist of millions of cellular images across 100,000s of unique perturbations and demonstrate the scalability of this approach.
HCS image sets are often analyzed with customized cell segmentation, feature extraction, and downstream analysis pipelines [4]. Despite the many discoveries made using this approach [3], developing robust segmentation and feature extraction pipelines using open-source software packages [6, 47] remains challenging [8]. Alternatively, representation learning approaches do not require prior knowledge of cellular morphology and perform significantly better on practical biological research objectives, e.g. inferring relationships between perturbations [7]. In contrast to previous approaches employing weakly supervised pretraining [37], in this work we train masked autoencoders (MAEs) [24] on progressively larger HCS image sets and show that these models are scalable learners of
cellular morphology, outperforming previous state-of-the-art methods at inferring known biological relationships in whole-genome HCS screens.
## 2 Related Work
**Supervised learning on HCS image sets.** Deep learning models have been successfully trained to perform cell segmentation [52; 36; 48] and phenotype classification [31; 32; 39; 18]; however, these supervised learning tasks require the costly creation of segmentation masks and other labels. Inspired by the successful use of embeddings obtained from ImageNet-trained models for other datasets and tasks [42], researchers used models trained on natural images to featurize HCS data with varying results [1; 40]. Others [37; 49; 44] have trained convolutional networks to classify labels obtained from experimental metadata (e.g., perturbation class), a technique called _weakly supervised learning_ (WSL) [57]. Despite obtaining SOTA results when trained on small, highly-curated image sets, we show that the performance of WSL models does not necessarily improve on larger datasets.
**Self-supervised learning.** Vision models pretrained with self-supervised learning (SSL) often outperform supervised models on downstream tasks [24; 5; 10]. Unlike supervised pretraining [30], SSL is readily applied to large datasets where labels are lacking or heavily biased. This is useful for HCS datasets, as they contain a wide range of cellular phenotypes that are difficult for human experts to interpret and annotate. For example, DiNO [5] is an SSL method that has been applied to HCS data [12; 23; 45; 29; 15]; however, it relies on augmentations inspired by natural images, which may not be applicable to HCS image sets. Alternatively, masked autoencoders [24] are trained by reconstructing masked patches of an image from the unmasked patches (Fig. 1). MAEs have been successfully applied to images [24], audio [28], video [20] and multimodal audio-video datasets [27]. However, previous attempts to train MAEs on HCS datasets have had limited success [55; 29], due in part to limitations in compute resources and dataset size. The present work shows that MAE training scales with both model and training set size.
## 3 Methods
**Datasets.** We investigate the scaling properties [56] of cellular image sets by evaluating models trained on the following four microscopy datasets. **RxRx1**[49] is a publicly-available proprietary Cell Painting dataset with 125,510 images of 4 human cell types under 1,108 different siRNA perturbations across 51 experimental batches. **RxRx3**[19] is a publicly-available proprietary Cell Painting dataset with over 2.2 million images of HUVEC cells under 17,063 CRISPR knockouts (over 6 guides) or 1,674 compounds across 180 experimental batches. **RPI-52M** and **RPI-93M** (Recursion Phenomics Imageset) are private datasets with 52 million and 93 million proprietary Cell Painting and Brightfield images, respectively. To our knowledge, these are the largest HCS datasets collected for model training purposes. All evaluations are performed on RxRx3, which is the largest publicly available whole-genome HCS image set.
**Weakly supervised learning.** As a baseline, we employ the 28-million parameter DenseNet-161 backbone implemented in [49], trained to predict cellular perturbations and producing 128-dimensional embeddings, with and without adaptive batch normalization (AdaBN) [34].
Figure 1: Visualizing reconstructions from masked random _validation_ images for different MAEs.
**U-Nets**. We adapt U-Nets [43] for masked autoencoding (MU-Nets) by training to reconstruct masked sections of input images. We train MU-Nets as described in Xun et al. [55] and report results for MU-Net-M and MU-Net-L, which have 52- and 135-million parameters, respectively. MU-Net-M's downsampling scale is set to 32/64/128/256/512. MU-Net-L incorporates an additional scale of 1024. In each case, the decoder mirrors the encoder's scale configuration. After an initial hyperparameter search (see A.1.2), we trained both models with a mask ratio of 25% and kernel size of 5.
**Vision transformers.** We train vision transformers (ViTs) [16; 46; 14; 56] as MAEs following the implementation in He et al. [24]. We report results for ViT-S, ViT-B, and ViT-L encoders [16], containing 22-, 86-, and 304-million parameters, respectively, and producing 384-, 768-, and 1024-dimensional embeddings, respectively. We explore the use of 8x8 and 16x16 patch sizes and 75% and 25% mask ratios (Fig. 1). A 25-million parameter decoder [24] is used for patch reconstructions. Note that 8x8 patches induce a sequence length 4 times greater than 16x16 patches and are thus more computationally expensive.
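To illustrate the masking mechanism at the core of MAE pretraining, the following PyTorch-style sketch reconstructs the random token masking of He et al. [24]; it is a minimal, illustrative re-implementation rather than the code used in this work, and the tensor names are assumptions.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of embedded patch tokens; return visible tokens
    and a binary mask (1 = masked) used to restrict the reconstruction loss."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                    # one random score per token
    ids_shuffle = torch.argsort(noise, dim=1)   # random permutation per image
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n)
    mask.scatter_(1, ids_keep, 0.0)             # 0 = visible, 1 = masked
    return visible, mask
```

The encoder sees only the visible tokens, the decoder reconstructs all patches, and the mean-squared reconstruction error is averaged over masked patches only.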
**Training.** Models were trained on Recursion's HPC cluster, BioHive-1, for up to 100 epochs on as many as 128 80GB-A100 GPUs, depending on the size of the model and dataset. 256 x 256 x 6 image crops were randomly sampled from 2048 x 2048 x 6 images, augmenting with random horizontal and vertical flips. For each dataset, we use a validation set of center-cropped images from full experiments unseen during training.
**Scaling to ViT-L/8+.** We scale training based on the results of smaller models trained on smaller datasets [14; 25; 38; 56], as visualized in Figure 2 (total FLOps is based on Touvron et al. [51]). Our largest model, ViT-L/8+, was trained for over 20,000 GPU hours, learning from over 3.5 billion image crops sampled from RPI-93M. Inspired by [54], we added a term to the loss function to prevent divergence and improve texture reconstruction.
**Inference.** The metrics of Section 4 are calculated on the gene knockout experiments of RxRx3 [19], requiring the embedding of ~140 million image crops for each encoder. See A.2 for details.
## 4 Results
An important use of HCS data is the accurate inference of biological relationships amongst genetic and chemical perturbations. We evaluate each model's ability to capture known relationships using the multivariate metrics described in Celik et al. [7]. Briefly, each model's embeddings are first aligned across experimental batches using TVN (typical variation normalization) [1], fitted to the negative experimental controls across all batches. Following TVN, we correct for possible chromosome arm biases known to exist in CRISPR-Cas9 HCS data [33]. We compute the embedding of each perturbation by taking the spherical mean over its replicate embeddings. We use the cosine similarity of a pair of perturbation representations as a relationship metric, setting the origin of the space to the mean of negative experimental controls. We compare these similarities with the annotated
relationships found in the following public databases: CORUM [22], hu.MAP [17], Reactome [21], and StringDB [50] (with >95% combined score).
Table 1 reports the recall of known relationships amongst the top and bottom 5% of all cosine similarities between CRISPR knockout representations in RxRx3. Note how both recall and image reconstruction (see Fig. 1) improve with larger models, larger training sets, smaller patches, and larger mask ratio. In Figure 2 we see that recall strongly correlates with training FLOps, a function of both model and training set size (see A.3 for similar results trends on other databases). Figure 3 shows similar trends in recall for other similarity percentiles. In contrast, the performance of re-implemented WSL baselines [49] decreases when the dataset is scaled from RxRx1 to RxRx3, which could be due to the chromosome arm bias present in CRISPR-Cas9 systems [33] or other factors such as the increased size of the label set.
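The evaluation pipeline can be sketched as follows; the exact TVN variant follows Ando et al. [1] only in outline, and the array names and shapes (embeddings, negative-control embeddings, a vector of all pairwise cosine similarities, and a boolean mask of annotated pairs) are illustrative assumptions.

```python
import numpy as np

def tvn(embeddings, controls):
    """Centre and whiten embeddings using negative-control statistics (sketch)."""
    mu = controls.mean(axis=0)
    cov = np.cov(controls - mu, rowvar=False)
    w, v = np.linalg.eigh(cov)
    whiten = v @ np.diag(1.0 / np.sqrt(w + 1e-8)) @ v.T
    return (embeddings - mu) @ whiten

def spherical_mean(replicates):
    """Aggregate replicate embeddings of one perturbation on the unit sphere."""
    unit = replicates / np.linalg.norm(replicates, axis=1, keepdims=True)
    m = unit.mean(axis=0)
    return m / np.linalg.norm(m)

def recall_top_bottom(sims, known, frac=0.05):
    """Fraction of annotated pairs whose cosine similarity lies in the top
    or bottom `frac` of all pairwise similarities."""
    lo, hi = np.quantile(sims, [frac, 1 - frac])
    extreme = (sims <= lo) | (sims >= hi)
    return float(np.mean(extreme[known]))
```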
We compare these models with recent results from an alternative HCS platform combining pooled CRISPR screening with Cell Painting [45]. Table 2 reports recall at 5% FPR in StringDB on three gene sets defined in Sivanandan et al. [45]. The ViT-L/8+ MAE trained on RPI-93M yields a minimum 20% relative improvement in gene set performance over CP-DiNO 1640 (a ViT-S/8), which was trained on ~1.5 million single-cell images. We note the significant differences in assay technology, cell lines, and modeling methodology between the two platforms, making their direct comparison impossible using this metric. Nonetheless, we hope this comparison brings the field closer to an accepted set of benchmarks for evaluating models trained on HCS datasets.
## 5 Conclusion
This work demonstrates that scaling properties [56] apply to learning representations of cellular morphology that can accurately infer known biological relationships. Unlike previous approaches that use weakly supervised learning [37; 49] on small, curated datasets, we showed that the performance of MAEs on biologically meaningful benchmarks scales to massive HCS image sets. In future work, we will continue to scale model and training set sizes even further. We will also explore new applications of this technology beyond predicting biological relationships, with the ultimate goal of creating general-purpose foundation models of cellular biology.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model backbone** & RxRx1 [49] & RxRx3 [19] & RPI-52M & RPI-93M \\ \hline DenseNet-161 &.38/.31/.19/.33 &.36/.27/.17/.32 & – & – \\ DenseNet-161 w/ AdaBN &.48/.35/.23/.42 &.46/.30/.19/.38 & – & – \\ \hline MU-Net-M & – &.56/.38/.23/.42 & – & – \\ MU-Net-L & – &.57/.37/.23/.43 &.58/.39/.24/.44 &.58/.39/.25/.44 \\ MAE ViT-S/16 & – &.52/.37/.23/.41 &.51/.36/.22/.40 & – \\ MAE ViT-B/16 & – &.57/.39/.23/.43 &.54/.37/.23/.42 & – \\ MAE ViT-B/8 & – & – &.60/.40/.25/.46 & – \\ MAE ViT-L/16 & – &.56/.37/.23/.43 &.61/.41/.26/.46 & – \\ MAE ViT-L/8+ & – & – &.61/.42/.27/.47 & **.62/.44/.27/.48** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Recall of known relationships in top and bottom 5% of cosine similarities by model backbone and training set, with results for each database (CORUM/hu.MAP/Reactome/StringDB). DenseNet-161 backbones are trained via WSL, all others via SSL. See Fig. 3 for recall at other percentiles.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Training dataset** & **Model backbone** & PoC-124 & MoA-300 & DG-1640 \\ \hline RxRx1 [49] & WSL DenseNet-161 w/ AdaBN &.79 & **.24** &.15 \\ RxRx3 [19] & MAE ViT-S/16 &.74 &.19 &.14 \\ RPI-52M & MU-Net-L &.79 &.20 &.15 \\ RPI-93M & MAE ViT-L/8+ & **.80** &.23 & **.17** \\ \hline CP-1640 [45] & DiNO ViT-S/8 &.53 &.12 &.14 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Recall (at 5% false positive rate) of StringDB relationships for select models on three different gene sets defined in Sivanandan et al. [45].
### Acknowledgements
This work reflects the combined efforts of many current and former Recursion employees. Special thanks to the Recursion lab team for design and execution of the HCS experiments which fueled our datasets. Additional thanks to the Recursion HPC team for their dedicated support in keeping our cluster, BioHive-1, running effectively. We would especially like to thank the following individuals for their contributions toward this work: Dominique Beaini, Jordan Christensen, Joshua Fryer, Brent Gawryluik, Imran Haque, Jason Hartford, Alex Timofeyev, and John Urbanik.
|
2309.16070 | Negative type and bi-lipschitz embeddings into Hilbert space | The usual theory of negative type (and $p$-negative type) is heavily
dependent on an embedding result of Schoenberg, which states that a metric
space isometrically embeds in some Hilbert space if and only if it has
2-negative type. A generalisation of this embedding result to the setting of
bi-lipschitz embeddings was given by Linial, London and Rabinovich. In this
article we use this newer embedding result to define the concept of distorted
p-negative type and extend much of the known theory of p-negative type to the
setting of bi-lipschitz embeddings. In particular we show that a metric space
$(X, d_X)$ has $p$-negative type with distortion $C$ ($0 \le p < \infty$, $1
\le C < \infty$) if and only if $(X, d_X^{p/2})$ admits a bi-lipschitz embedding
into some Hilbert space with distortion at most $C$. Analogues of strict
$p$-negative type and polygonal equalities in this new setting are given and
systematically studied. Finally, we provide explicit examples of these concepts
in the bi-lipschitz setting for the bipartite graphs $K_{m,n}$ and the Hamming
cube $H_n$. | Gavin Robertson | 2023-09-27T23:41:56Z | http://arxiv.org/abs/2309.16070v1 | # Negative type and bi-Lipschitz embeddings into Hilbert space
###### Abstract.
The usual theory of negative type (and \(p\)-negative type) is heavily dependent on an embedding result of Schoenberg, which states that a metric space isometrically embeds in some Hilbert space if and only if it has \(2\)-negative type. A generalisation of this embedding result to the setting of bi-lipschitz embeddings was given by Linial, London and Rabinovich. In this article we use this newer embedding result to define the concept of distorted \(p\)-negative type and extend much of the known theory of \(p\)-negative type to the setting of bi-lipschitz embeddings. In particular we show that a metric space \((X,d_{X})\) has \(p\)-negative type with distortion \(C\) (\(0\leq p<\infty\), \(1\leq C<\infty\)) if and only if \((X,d_{X}^{p/2})\) admits a bi-lipschitz embedding into some Hilbert space with distortion at most \(C\). Analogues of strict \(p\)-negative type and polygonal equalities in this new setting are given and systematically studied. Finally, we provide explicit examples of these concepts in the bi-lipschitz setting for the bipartite graphs \(K_{m,n}\) and the Hamming cube \(H_{n}\).
## 1. Introduction
The theory of embeddings of metric spaces has a long and rich history, with some of the most classical results in this area dating back to Cayley [7] in the \(19^{\text{th}}\) century. Since then embeddings of metric spaces have been the subject of much study with classical results in this area given by Menger [25, 26], Schoenberg [29, 30], Enflo [13, 14] and Gromov [16] (to name a few). One main theme that can be seen throughout their work is that many difficult problems in mathematical analysis and otherwise can be solved quickly and easily by constructing a suitable embedding of a metric space into a more tractable host space. Because of this the theory of metric embeddings has a plethora of applications in areas such as functional analysis, evolutionary biology, combinatorial optimisation and theoretical computer science. Specific examples include the Sparsest Cut Problem in combinatorial optimisation [3, 33] and the study of phylogenetic trees in evolutionary biology [34, 1, 31].
In this paper we will study isometric and bi-lipschitz embeddings of metric spaces through the theory of \(p\)-negative type. The concept of \(p\)-negative type was originally developed by Schoenberg [30] to study isometric embeddings of powers of metric spaces into Hilbert space. Recently \(p\)-negative type has been the topic of much research in areas such as mathematical analysis and theoretical computer science (see [9, 11, 18, 22, 15] for a list of some of the applications of \(p\)-negative type to the isometric theory of metric spaces). The definition of \(p\)-negative type is as follows (note that the definition of strict \(p\)-negative type is due to Li and Weston in [22]).
**Definition 1.1**.: _Let \((X,d_{X})\) be a semi-metric space1 and \(0\leq p<\infty\). Then \(X\) is said to have \(p\)-negative type if_
Footnote 1: Here we use the term semi-metric space to mean a ‘metric’ space except that we drop the requirement that the triangle inequality holds. All results in this paper will hold for semi-metric spaces since they do not require the use of the triangle inequality anywhere in their proofs.
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}\xi_{i}\xi_{j}\leq 0\]
_for all distinct \(x_{1},\ldots,x_{n}\in X\), \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}\) with \(\sum_{i=1}^{n}\xi_{i}=0\) and \(n\geq 2\). Furthermore, \(X\) is said to have strict \(p\)-negative type if equality holds only for \(\xi_{1}=\cdots=\xi_{n}=0\)._
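For a finite space this condition can be checked numerically: writing \(D_{p}=(d_{X}(x_{i},x_{j})^{p})_{i,j}\), the quadratic form is non-positive on the hyperplane \(\{\xi:\sum_{i}\xi_{i}=0\}\) precisely when the matrix \(PD_{p}P\) is negative semidefinite, where \(P=I-\frac{1}{n}J\) is the orthogonal projection onto that hyperplane (with \(J\) the all-ones matrix). A minimal Python sketch of this spectral test (names illustrative):

```python
import numpy as np

def has_p_negative_type(D, p, tol=1e-9):
    """Spectral test of the p-negative type inequality for a finite space."""
    n = D.shape[0]
    P = np.eye(n) - np.ones((n, n)) / n       # projector onto {sum xi = 0}
    M = P @ (D ** p) @ P
    return np.max(np.linalg.eigvalsh(M)) <= tol

# The star K_{1,3} with its path metric fails 2-negative type
# (take xi = (3, -1, -1, -1)), so it has no isometric copy in Hilbert space.
D = np.array([[0, 1, 1, 1],
              [1, 0, 2, 2],
              [1, 2, 0, 2],
              [1, 2, 2, 0]], float)
print(has_p_negative_type(D, 2.0))  # False
print(has_p_negative_type(D, 1.0))  # True: metric trees have 1-negative type
```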
Central to the theory of \(p\)-negative type and its applications are the two following classical results of Schoenberg [29, 30] (note that the first statement below provides an isometric characterisation of subsets of Hilbert space).
**Theorem 1.2**.: _Let \((X,d_{X})\) be a semi-metric space and \(0\leq p<\infty\)._
1. \(X\) _has_ \(p\)_-negative type if and only if_ \((X,d_{X}^{p/2})\) _isometrically embeds in some Hilbert space._
2. _If_ \(0\leq q<p\) _and_ \(X\) _has_ \(p\)_-negative type then_ \(X\) _also has strict_ \(q\)_-negative type._
Note that the second statement in the above theorem implies that all of the values of \(p\)-negative type that a semi-metric space \((X,d_{X})\) possesses are encoded in the largest value \(s\geq 0\) such that \(X\) has \(s\)-negative type. Thus one is led to consider the quantity
\[\wp_{X}=\sup\{p\geq 0:X\text{ has $p$-negative type}\}\]
which is referred to as the supremal \(p\)-negative type of \(X\). Note that since all the sums that appear in the definition of \(p\)-negative type are finite sums a simple limiting argument shows that if \(\wp_{X}<\infty\) then \(X\) also has \(\wp_{X}\)-negative type. Consequently the set of values \(s\geq 0\) such that a semi-metric space \((X,d_{X})\) has \(s\)-negative type is always of the form \([0,\wp_{X}]\) or \([0,\infty)\).
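Since the set of such exponents is an interval, the supremal \(p\)-negative type of a finite semi-metric space can be approximated by bisection on the spectral test above; a sketch, with an illustrative search cap:

```python
import numpy as np

def supremal_negative_type(D, p_max=16.0, iters=60):
    """Bisect for the supremal p-negative type of a finite semi-metric space."""
    def ok(p):
        n = D.shape[0]
        P = np.eye(n) - np.ones((n, n)) / n
        return np.max(np.linalg.eigvalsh(P @ (D ** p) @ P)) <= 1e-9
    if ok(p_max):
        return p_max  # the true supremum may exceed the search cap
    lo, hi = 0.0, p_max
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo
```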
It follows from the work of Schoenberg [29, 30] that if \(0<p\leq 2\) then \(X=L^{p}\) has \(\wp_{X}=p\). In fact, a celebrated theorem of Bretagnolle, Dacunha-Castelle and Krivine [6] provides a partial converse to this result, which characterises linear subspaces of \(L^{p}\) (\(1\leq p\leq 2\)) up to linear isometry. Namely, if \(1\leq p\leq 2\) then a real normed space \(X\) is linearly isometric to a subspace of some \(L^{p}\) space if and only if \(X\) has \(p\)-negative type. The case when \(2<p<\infty\) is radically different. It follows from Koldobsky [20] that if \(n\geq 3\) and \(2<p<\infty\) then \(X=\ell_{n}^{p}\) has \(\wp_{X}=0\), and also that \(Y=\ell_{2}^{p}\) has \(\wp_{Y}=1\). The variation for \(n=2\) can be explained by the well-known result that every \(2\)-dimensional normed space is linearly isometric to a subspace of \(L_{1}\) (see [38]). Results on non-commutative \(L^{p}\) spaces have also been obtained by Dahma and Lennard [8]. Here they showed that if \(X=S_{p}\) is the Schatten \(p\)-trace class then \(\wp_{X}=2\) when \(p=2\) and \(\wp_{X}=0\) if \(0<p<\infty\) and \(p\neq 2\).
More recently, Linial, London and Rabinovich [24] have provided a similar characterisation of those finite metric spaces that admit a bi-lipschitz embedding into some Hilbert space with a given level of distortion. Bi-lipschitz embeddings of metric spaces, and finite metric spaces in particular, have been studied extensively
over the last half-century or so. Originally such embeddings were studied from the perspective of geometric analysis and Banach space theory. However, more recently the theory of bi-lipschitz embeddings has been examined from the perspective of combinatorial optimisation and theoretical computer science. This is in part due to Linial, London and Rabinovich [24] who showed that many problems in optimisation could be solved efficiently by considering bi-lipschitz embeddings of certain metric spaces into larger host spaces, such as Hilbert or Banach spaces.
Let us now recall the definition of a bi-lipschitz embedding.
**Definition 1.3**.: _Let \((X,d_{X})\) and \((Y,d_{Y})\) be semi-metric spaces and \(1\leq C<\infty\)._
1. _A map_ \(f:X\to Y\) _is said to have distortion at most_ \(C\) _if there exists a (scaling constant)_ \(s>0\) _such that_ \[sd_{X}(x,y)\leq d_{Y}(f(x),f(y))\leq sCd_{X}(x,y)\] _for all_ \(x,y\in X\)_. The smallest such_ \(C\) _for which this holds is denoted by_ \(\operatorname{dist}(f)\) _(and if no such_ \(C\) _exists we set_ \(\operatorname{dist}(f)=\infty\)_). If_ \(\operatorname{dist}(f)<\infty\) _then we say that_ \(f\) _is a bi-lipschitz embedding._
2. _We denote by_ \(c_{(Y,d_{Y})}(X,d_{X})\) _(or simply by_ \(c_{Y}(X)\)_) the infimum of all constants_ \(1\leq C\leq\infty\) _such that there exists a map_ \(f:X\to Y\) _with_ \(\operatorname{dist}(f)\leq C\) _(where again we allow the possibility that_ \(c_{(Y,d_{Y})}(X,d_{X})=\infty\)_). When_ \(Y=\ell^{2}\)_, we denote_ \(c_{Y}(X)\) _simply by_ \(c_{2}(X)\)_._
**Remark 1.4**.: For our purposes, it is useful to say that a map \(f:(X,d_{X})\to(Y,d_{Y})\) is an isometry if there exists some \(s>0\) such that \(d_{Y}(f(x),f(y))=sd_{X}(x,y)\) for all \(x,y\in X\). Note that this is slightly more general than the usual definition of isometries in the literature. However, one may note that the definition of (strict) \(p\)-negative type is invariant under this more general definition of isometry too. Moreover, with this definition one has that \(\operatorname{dist}(f)=1\) if and only if \(f\) is an isometry. In this way we may think of bi-lipschitz embeddings as a generalisation of isometries.
One of the main problems that one faces when dealing with bi-lipschitz embeddings of semi-metric spaces is how one may embed the given space into the larger space with as little distortion as possible. The most classical result along these lines is perhaps a result of Bourgain [5], which states that any \(n\) point metric space2 admits a bi-lipschitz embedding into \(\mathbb{R}^{n}\) with distortion at most \(O(\log n)\). In [24] Linial, London and Rabinovich provided an algorithmic proof of this result and were able to show that this bound is attained for constant degree expander graphs (and hence cannot be improved in general).
Footnote 2: The use of the word ‘metric’ is necessary here, since the result fails to hold in general for \(n\) point semi-metric spaces.
To state one of the main results of [24] we need to introduce a certain class of matrices that will frequently appear throughout this paper. We will use \(M_{n}(\mathbb{R})\) to denote the space of all real-valued \(n\times n\) matrices. Using \(A^{T}\) to denote the usual matrix transpose of \(A\) and \(\langle\cdot,\cdot\rangle\) to denote the standard inner product on \(\mathbb{R}^{n}\), we then set
\[M_{n}^{+}(\mathbb{R})=\{A\in M_{n}(\mathbb{R}):A^{T}=A\text{ and }\langle A \xi,\xi\rangle\geq 0,\forall\xi\in\mathbb{R}^{n}\}.\]
That is, \(M_{n}^{+}(\mathbb{R})\) is simply the set of all positive semidefinite matrices in \(M_{n}(\mathbb{R})\). We will also use \(\mathbb{1}\) to denote the vector in \(\mathbb{R}^{m}\) whose entries are all \(1\), where \(m\) may
depend on the context. Finally, we put
\[\mathcal{O}_{n}(\mathbb{R})=\{Q\in M_{n}^{+}(\mathbb{R}):Q\,\mathbb{1}=0\}.\]
The result we shall be using from [24] is the following one.
**Theorem 1.5**.: _Let \((X,d_{X})=(\{x_{1},\dots,x_{n}\},d_{X})\) be a finite semi-metric space and \(C\geq 1\). Then \(X\) admits a bi-lipschitz embedding into \(\mathbb{R}^{n}\) with distortion at most \(C\) if and only if_
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{2}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{ j})^{2}q_{ij}\leq 0\]
_for all \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\)._
It is worth remarking at this time that a simple consequence of the above theorem is the following expression for the Euclidean distortion of a finite semi-metric space.
**Theorem 1.6**.: _Let \((X,d_{X})=(\{x_{1},\dots,x_{n}\},d_{X})\) be a finite semi-metric space. Then_
\[c_{2}(X,d_{X})^{2}=\max\bigg{\{}\frac{\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{2}q_{ ij}}{-\sum_{q_{ij}<0}d_{X}(x_{i},x_{j})^{2}q_{ij}}:Q\in\mathcal{O}_{n}(\mathbb{R}),Q\neq 0\bigg{\}}.\]
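Theorem 1.6 is directly amenable to numerical experimentation. The following sketch (in Python with numpy; these illustrative snippets are our own additions and not part of the original results) samples matrices from \(\mathcal{O}_{n}(\mathbb{R})\) by projecting random Gram matrices onto the orthogonal complement of \(\mathbb{1}\). Every sample yields a lower bound on \(c_{2}(X)^{2}\), while the rank-one witness \(\xi=(1,-1,1,-1)\) recovers the classical value \(c_{2}(C_{4})=\sqrt{2}\) for the \(4\)-cycle.

```python
import numpy as np

def random_O_n(n, rng):
    # Project a random Gram matrix onto the complement of span{1}:
    # the result is positive semidefinite and annihilates the all-ones vector.
    P = np.eye(n) - np.ones((n, n)) / n
    B = rng.standard_normal((n, n))
    return P @ (B @ B.T) @ P

def ratio(D, Q):
    # The quotient appearing in Theorem 1.6 for a fixed Q.
    pos = np.sum(D**2 * Q * (Q > 0))
    neg = np.sum(D**2 * Q * (Q < 0))
    return pos / -neg

# The 4-cycle C_4 with its graph metric.
n = 4
D = np.array([[min(abs(i - j), n - abs(i - j)) for j in range(n)]
              for i in range(n)], dtype=float)

# Random search only ever produces lower bounds on c_2(C_4)^2 ...
rng = np.random.default_rng(0)
print(max(ratio(D, random_O_n(n, rng)) for _ in range(10000)))

# ... while the rank-one matrix Q = xi xi^T attains the maximum, 2.
xi = np.array([1.0, -1.0, 1.0, -1.0])
print(ratio(D, np.outer(xi, xi)))   # 2.0, so c_2(C_4) = sqrt(2)
```

Random search typically undershoots the true maximum, so the first printed value is only a lower estimate; the exact witness in the last two lines confirms that the maximum here is \(2\).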
Our starting point for constructing an analogue of \(p\)-negative type that is compatible with bi-lipschitz embeddings is Theorem 1.5. Indeed, we are now in a position to provide our definition of distorted \(p\)-negative type, which is the main object of study in this article.
**Definition 1.7**.: _Let \((X,d_{X})\) be a semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) is said to have \(p\)-negative type with distortion \(C\) if_
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_ {j})^{p}q_{ij}\leq 0\]
_for all distinct \(x_{1},\dots,x_{n}\in X\), \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) and \(n\geq 2\). Furthermore, \(X\) is said to have strict \(p\)-negative type with distortion \(C\) if equality holds only for \(Q=0\)._
In Section 2 we prove that with the above definition of distorted \(p\)-negative type Theorem 1.5 allows us to mimic the isometric embedding property of the usual \(p\)-negative type in the bi-lipschitz setting. That is, a semi-metric space \((X,d_{X})\) has \(p\)-negative type with distortion \(C\) if and only if \((X,d_{X}^{p/2})\) embeds in a Hilbert space with distortion at most \(C\). We also show that distorted \(p\)-negative type is really a generalisation of the usual \(p\)-negative type. That is, for any \(0\leq p<\infty\), a semi-metric space \((X,d_{X})\) has \(p\)-negative type with distortion \(C=1\) if and only if it has \(p\)-negative type (in the sense of Definition 1.1). We also provide an analogue of the nesting result for the usual \(p\)-negative type (see the second statement in Theorem 1.2 above) for distorted \(p\)-negative type.
In Section 3 we study the concepts of strict distorted \(p\)-negative type through the lens of polygonal equalities. The usual definition of a polygonal equality is due to Li and Weston in [22], where they define a \(p\)-polygonal equality to be an equality of the form
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}\xi_{i}\xi_{j}=0\]
for some \(n\geq 2\), distinct \(x_{1},\ldots,x_{n}\in X\) and \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}\) with \(\sum_{i=1}^{n}\xi_{i}=0\).
It was proved in [22] that it is possible for an infinite semi-metric space \(X\) to have either strict \(\wp_{X}\)-negative type or nonstrict \(\wp_{X}\)-negative type. However, if \(X\) is a finite semi-metric space it was also proved in [22] that \(X\) must have nonstrict \(\wp_{X}\)-negative type (provided that \(\wp_{X}<\infty\)). That is, when \(|X|<\infty\) and \(\wp_{X}<\infty\), \(X\) always admits a non-trivial \(\wp_{X}\)-polygonal equality. This also means that if \(X\) is a finite semi-metric space then \(X\) has strict \(p\)-negative type if and only if \(p<\wp_{X}\).
In light of Definition 1.7, in the distorted setting we must define our polygonal equalities slightly differently. Here we will define a \(p\)-polygonal equality with distortion \(C\) to be an equality of the form
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{ j})^{p}q_{ij}=0\]
for some distinct \(x_{1},\ldots,x_{n}\in X\), \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) and \(n\geq 2\). The connection between our new definition of polygonal equalities and the definition due to Li and Weston is explained below in Section 3. It suffices to say here that in Section 3 we prove that for all \(p\geq\wp_{X}\), a finite semi-metric space admits a non-trivial \(p\)-polygonal equality with distortion \(c_{2}(X,d_{X}^{p/2})\) (see Corollary 3.7). Using this we are able to conclude that a finite semi-metric space \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if \(p<\wp_{X}\) or \(C>c_{2}(X,d_{X}^{p/2})\) (see Corollary 3.8).
Finally, in Section 4 we provide explicit examples of optimal distortion Euclidean embeddings of powers of the bipartite graph \(K_{m,n}\) and the Hamming cube \(H_{n}\). In doing so we are also able to provide explicit examples of distorted polygonal equalities for these spaces as well as determine for which values of \(p\) and \(C\) these spaces have (strict) \(p\)-negative type with distortion \(C\).
## 2. Distorted \(p\)-negative Type
Our first port of call is to show that when \(C=1\), the definition of (strict) \(p\)-negative type with distortion \(C\) coincides with the usual definition of (strict) \(p\)-negative type (see Definition 1.1).
**Proposition 2.1**.: _Let \((X,d_{X})\) be a semi-metric space and \(p\geq 0\). Then the following are true._
1. \(X\) _has_ \(p\)_-negative type if and only if_ \(X\) _has_ \(p\)_-negative type with distortion_ \(1\)_._
2. \(X\) _has strict_ \(p\)_-negative type if and only if_ \(X\) _has strict_ \(p\)_-negative type with distortion_ \(1\)_._
Proof.: We will prove only the second statement, since the proof of the first statement is more or less identical. First suppose that \(X\) has strict \(p\)-negative type with distortion \(1\). By definition, this means that
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}q_{ij}<0\]
for all distinct \(x_{1},\ldots,x_{n}\in X\), all nonzero \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) and \(n\geq 2\). Now, suppose that \(n\geq 2\), \(x_{1},\ldots,x_{n}\in X\) are distinct and that \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}\) (not all zero) with \(\sum_{i=1}^{n}\xi_{i}=0\). Setting \(Q=(\xi_{i}\xi_{j})_{i,j=1}^{n}\) it is a simple matter to check that
\(Q\neq 0\) and \(Q\in\mathcal{O}_{n}(\mathbb{R})\). Hence, applying the above inequality to this particular choice of \(Q\) gives that
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}\xi_{i}\xi_{j}=\sum_{i,j=1}^{n}d_{X}(x_{i}, x_{j})^{p}q_{ij}<0\]
which shows that \(X\) has strict \(p\)-negative type. Conversely, suppose that \(X\) has strict \(p\)-negative type. Take distinct \(x_{1},\dots,x_{n}\in X\) and a nonzero \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) with \(\operatorname{rank}(Q)=r\). Since \(Q\) is positive semi-definite with \(Q\mathbb{1}=0\), we may write it as \(Q=\sum_{k=1}^{r}R_{k}\) where each \(R_{k}\) is positive semi-definite with \(\operatorname{rank}(R_{k})=1\) and \(R_{k}\mathbb{1}=0\). By basic linear algebra, for each \(1\leq k\leq r\) we can find \(\xi_{1}^{(k)},\dots,\xi_{n}^{(k)}\in\mathbb{R}\) (not all zero) with \(\sum_{i=1}^{n}\xi_{i}^{(k)}=0\) such that \(R_{k}=(\xi_{i}^{(k)}\xi_{j}^{(k)})_{i,j=1}^{n}\). Putting this all together, we have that
\[q_{ij}=\sum_{k=1}^{r}\xi_{i}^{(k)}\xi_{j}^{(k)}\]
for all \(1\leq i,j\leq n\). Hence
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}q_{ij}=\sum_{k=1}^{r}\sum_{i,j=1}^{n}d_{ X}(x_{i},x_{j})^{p}\xi_{i}^{(k)}\xi_{j}^{(k)}<0\]
which shows that \(X\) has strict \(p\)-negative type with distortion \(1\).
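The rank-one decomposition used in this proof can be made concrete via the spectral theorem: since \(Q\mathbb{1}=0\), every eigenvector of \(Q\) with a positive eigenvalue is orthogonal to \(\mathbb{1}\), so the scaled eigenvectors are exactly the vectors \(\xi^{(k)}\) above. A small numerical sketch (Python/numpy assumed, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
P = np.eye(n) - np.ones((n, n)) / n
B = rng.standard_normal((n, n))
Q = P @ (B @ B.T) @ P            # PSD with Q @ 1 = 0

lam, V = np.linalg.eigh(Q)
xis = [np.sqrt(lam_k) * V[:, k] for k, lam_k in enumerate(lam) if lam_k > 1e-10]

# Each xi is a valid "simplex vector": its entries sum to zero ...
for xi in xis:
    assert abs(xi.sum()) < 1e-8
# ... and the rank-one pieces xi xi^T reassemble Q.
Q_rebuilt = sum(np.outer(xi, xi) for xi in xis)
print(np.allclose(Q, Q_rebuilt))   # True
```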
Next we remark that the bi-lipschitz embedding theorem of Linial, London and Rabinovich [24] can easily be extended to deal with infinite metric spaces also. This is due to the following classical result which states that Hilbertian distortion is finitely determined. A proof of the following proposition can be found in [35], for example.
**Proposition 2.2**.: _Let \((X,d_{X})\) be a semi-metric space and \(1\leq C<\infty\). Then \((X,d_{X})\) embeds in a Hilbert space with distortion at most \(C\) if and only if every finite subset \(Y\subseteq X\) embeds in \(\ell^{2}\) with distortion at most \(C\)._
Combined with Theorem 1.5, this allows us to say the following.
**Theorem 2.3**.: _Let \((X,d_{X})\) be a semi-metric space, \(0<p<\infty\) and \(1\leq C<\infty\). Then \(X\) has \(p\)-negative type with distortion \(C\) if and only if \((X,d_{X}^{p/2})\) embeds in a Hilbert space with distortion at most \(C\)._
It is worth noting that for finite semi-metric spaces the definition of distorted \(p\)-negative type simplifies somewhat, since there is no need to vary over all distinct \(x_{1},\dots,x_{n}\in X\). This is a direct consequence of Theorem 1.5 and Theorem 2.3.
**Theorem 2.4**.: _Let \((X,d_{X})=(\{x_{1},\dots,x_{n}\},d_{X})\) be a finite semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\). Then the following statements are equivalent._
1. \(X\) _has_ \(p\)_-negative type with distortion_ \(C\)_._
2. \((X,d_{X}^{p/2})\) _embeds in_ \(\mathbb{R}^{n}\) _with distortion at most_ \(C\)_._
3. _The inequality_ \[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{ j})^{p}q_{ij}\leq 0\] _holds for all_ \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\)_._
Again, an extremely useful corollary of the above theorem is the following formula for the Euclidean distortion of a finite semi-metric space.
**Corollary 2.5**.: _Let \((X,d_{X})=(\{x_{1},\ldots,x_{n}\},d_{X})\) be a finite semi-metric space and \(0<p<\infty\). Then_
\[c_{2}(X,d_{X}^{p/2})^{2}=\max\bigg{\{}\frac{\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{ p}q_{ij}}{-\sum_{q_{ij}<0}d_{X}(x_{i},x_{j})^{p}q_{ij}}:Q\in\mathcal{O}_{n}( \mathbb{R}),Q\neq 0\bigg{\}}.\]
We now show that distorted \(p\)-negative type satisfies a simple nesting result.
**Proposition 2.6**.: _Let \((X,d_{X})\) be a semi-metric space, \(0<p<\infty\) and \(1\leq C_{1}<C_{2}<\infty\). If \(X\) has \(p\)-negative type with distortion \(C_{1}\) then \(X\) has strict \(p\)-negative type with distortion \(C_{2}\)._
Proof.: Take \(n\geq 2\), distinct \(x_{1},\ldots,x_{n}\in X\) and \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) with \(Q\neq 0\). Now, since \(Q\in M_{n}^{+}(\mathbb{R})\), note that \(\langle Q\xi,\xi\rangle=0\) if and only if \(Q\xi=0\). One direction is obvious. For the other, use the existence of the square root \(Q^{1/2}\). Then \(\langle Q\xi,\xi\rangle=0\) implies that \(\|Q^{1/2}\xi\|_{2}^{2}=\langle Q\xi,\xi\rangle=0\) and so \(Q^{1/2}\xi=0\), and thus \(Q\xi=Q^{1/2}(Q^{1/2}\xi)=0\). Also, by linearity, since \(Q\neq 0\) there exists some \(1\leq k\leq n\) such that \(Qe_{k}\neq 0\), and so \(\langle Qe_{k},e_{k}\rangle>0\). Since \(Q\mathbb{1}=0\) this means there exists some \(i\neq j\) such that \(q_{ij}<0\) and since \(i\neq j\) we also have that \(q_{ij}d_{X}(x_{i},x_{j})^{p}<0\). Hence
\[C_{2}^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{j})^{p}q_{ij}<C_{1}^{2}\sum_{q_{ij}<0}d _{X}(x_{i},x_{j})^{p}q_{ij}.\]
But then
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C_{2}^{2}\sum_{q_{ij} <0}d_{X}(x_{i},x_{j})^{p}q_{ij}\] \[\qquad<\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C_{1}^{2}\sum_ {q_{ij}<0}d_{X}(x_{i},x_{j})^{p}q_{ij}\] \[\qquad\leq 0\]
which shows that \(X\) has strict \(p\)-negative type with distortion \(C_{2}\).
We may also bootstrap the nesting property from Theorem 1.2 to obtain the following analogous nesting result for distorted \(p\)-negative type.
**Theorem 2.7**.: _Let \((X,d_{X})\) be a semi-metric space, \(0<q<p<\infty\) and \(1\leq C<\infty\). If \(X\) has \(p\)-negative type with distortion \(C\) then \(X\) has strict \(q\)-negative type with distortion \(C^{q/p}\)._
Proof.: Since \(X\) has \(p\)-negative type with distortion \(C\), Theorem 2.3 gives that \((X,d_{X}^{p/2})\) embeds into some Hilbert space \((H,\|\cdot\|_{2})\) with distortion at most \(C\). That is, there exists a map \(\phi:X\to H\) such that
\[d_{X}(x,y)^{p/2}\leq\|\phi(x)-\phi(y)\|_{2}\leq C\,d_{X}(x,y)^{p/2}\]
for all \(x,y\in X\). Take \(n\geq 2\), distinct \(x_{1},\ldots,x_{n}\in X\) and \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\), with \(Q\neq 0\). Since \(0<2q/p<2\), we have that \(H\) has strict \(2q/p\)-negative type (see Theorem 1.2), and so
\[\sum_{i,j=1}^{n}\|\phi(x_{i})-\phi(x_{j})\|_{2}^{2q/p}q_{ij}<0.\]
Putting this all together, we have that
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{q}q_{ij}+C^{2q/p}\sum_{q_{ij}<0}d_{X}(x_{i},x_{ j})^{q}q_{ij}\leq\sum_{i,j=1}^{n}\|\phi(x_{i})-\phi(x_{j})\|_{2}^{2q/p}q_{ij}<0\]
and so we are done.
## 3. Strictness and Polygonal Equalities
A very large part of the recent research effort into \(p\)-negative type has revolved around the notions of strict \(p\)-negative type and polygonal equalities (see for example results in [18, 22, 9, 15, 11, 27]). In this section, we derive some basic results pertaining to these concepts in the distorted setting, including proving the existence of certain non-trivial polygonal equalities.
Let us start by defining what we mean by a polygonal equality here. In the distorted setting, Theorem 2.3 prompts us to define our polygonal equalities slightly differently. When we talk about polygonal equalities from now on, we will use the following (new) definition.
**Definition 3.1**.: _Let \((X,d_{X})\) be a semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\). A p-polygonal equality with distortion \(C\) (or a \(C\)-distorted \(p\)-polygonal equality) in \(X\) is an equality of the form_
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_ {j})^{p}q_{ij}=0\]
_for some distinct \(x_{1},\ldots,x_{n}\in X\), \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) and \(n\geq 2\). Such an equality is said to be non-trivial if \(Q\neq 0\). Also, the rank of such a polygonal equality is defined to be the rank of the matrix \(Q\)._
It follows immediately from the above definition and the definition of strict \(p\)-negative type with distortion (Definition 1.7) that a semi-metric space \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if it has \(p\)-negative type with distortion \(C\) and \(X\) admits no non-trivial \(p\)-polygonal equalities with distortion \(C\).
At this point, we must stop and justify our use of the terminology 'polygonal equality'. Indeed the above definition in its current form looks nothing like the usual definition of polygonal equalities as they have appeared in the literature at this point in time. We now show the connection between our definition and the usual one.
Let \((X,d_{X})\) be a semi-metric space, \(x_{1},\ldots,x_{n}\in X\) be distinct, \(p\geq 0\) and \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\). By our definition a \(p\)-polygonal equality with distortion \(1\) is an equality of the form
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}q_{ij}=0.\]
Now, as in the proof of Proposition 2.1, we have that \(\operatorname{rank}(Q)\leq 1\) if and only if there exist \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}\) with \(\sum_{i=1}^{n}\xi_{i}=0\) such that \(q_{ij}=\xi_{i}\xi_{j}\), for all \(1\leq i,j\leq n\). So, in this case we actually have that
\[\sum_{i,j=1}^{n}d_{X}(x_{i},x_{j})^{p}\xi_{i}\xi_{j}=0.\]
Readers familiar with the theory of \(p\)-negative type will recognise this as one of the more standard definitions of a polygonal equality, from say [18]. To summarise, our definition of rank 1 polygonal equalities with distortion 1 is equivalent to the standard definition of a polygonal equality from the isometric theory of \(p\)-negative type.
An invaluable tool in the study of strict \(p\)-negative type and polygonal equalities in the isometric setting is the \(p\)-negative type gap function. This was originally defined by Doust and Weston in [9] and studied extensively in [9, 11, 22, 36, 37].
Here we introduce an analogue of the \(p\)-negative type gap in the distorted setting. For what follows, if \(n\geq 1\) and \(A=(a_{ij})_{i,j=1}^{n}\in M_{n}(\mathbb{R})\), we will use the notation
\[\operatorname{pos}(A)=\sum_{a_{ij}>0}a_{ij}.\]
**Definition 3.2**.: _Let \((X,d_{X})=(\{x_{1},\ldots,x_{n}\},d_{X})\) be a finite semi-metric space. The distorted type gap function for \(X\) is defined to be the function \(\Delta_{X}:[0,\infty)\times[1,\infty)\to\mathbb{R}\) given by_
\[\Delta_{X}(p,C)=\inf_{\begin{subarray}{c}Q\in\mathcal{O}_{n}(\mathbb{R})\\ \operatorname{pos}(Q)=1\end{subarray}}-C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{j}) ^{p}q_{ij}-\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}\]
_for all \(0\leq p<\infty\) and \(1\leq C<\infty\)._
First we remark that \(\Delta_{X}(p,C)\) is always finite. To see this let \(\mathcal{Q}\) denote the set of all \(Q\in\mathcal{O}_{n}(\mathbb{R})\) with \(\operatorname{pos}(Q)=1\) and topologize \(\mathcal{Q}\) with the pointwise topology (i.e. \(Q_{k}\to Q\) if and only if \(Q_{k}\) converges to \(Q\) entrywise). Note that since \(M_{n}(\mathbb{R})\) is finite-dimensional this coincides with the restriction of the unique norm topology on \(M_{n}(\mathbb{R})\) to \(\mathcal{Q}\). It is a simple matter to check that with this topology \(\mathcal{Q}\) is compact (it is a closed and bounded subset of \(M_{n}(\mathbb{R})\)). Also, define \(f:[0,\infty)\times[1,\infty)\times\mathcal{Q}\to\mathbb{R}\) by
\[f(p,C,Q)=-C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{j})^{p}q_{ij}-\sum_{q_{ij}>0}d_{ X}(x_{i},x_{j})^{p}q_{ij}.\]
Then \(f\) is continuous and also
\[\Delta_{X}(p,C)=\inf_{Q\in\mathcal{Q}}f(p,C,Q).\]
So, since \(f\) is continuous and \(\mathcal{Q}\) is compact it follows that \(\Delta_{X}(p,C)\) is always finite and that the above infimum is always attained for some \(Q\in\mathcal{Q}\).
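Since the infimum defining \(\Delta_{X}(p,C)\) is taken over the compact set \(\mathcal{Q}\), it can at least be estimated by random search. The sketch below (Python/numpy assumed; it yields only an upper estimate of the infimum, since random search need not find the minimiser) also evaluates the classical witness \(\xi=(1,-2,1)\) on three collinear points, for which \(\Delta_{X}(2,1)=0\).

```python
import numpy as np

def f_value(D, p, C, Q):
    # The function f(p, C, Q) from the discussion above.
    Dp = D**p
    pos = np.sum(Dp * Q * (Q > 0))
    neg = np.sum(Dp * Q * (Q < 0))
    return -C**2 * neg - pos

def delta_upper_estimate(D, p, C, trials=20000, seed=0):
    # Random search over Q in O_n(R), normalised so that pos(Q) = 1.
    # This only bounds Delta_X(p, C) from above: the true value is an infimum.
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    P = np.eye(n) - np.ones((n, n)) / n
    best = np.inf
    for _ in range(trials):
        B = rng.standard_normal((n, n))
        Q = P @ (B @ B.T) @ P
        Q /= Q[Q > 0].sum()          # rescale so that pos(Q) = 1
        best = min(best, f_value(D, p, C, Q))
    return best

# Three collinear points 0, 1, 2: a 2-polygonal equality exists, so
# Delta_X(2, 1) = 0, attained at Q = xi xi^T with xi = (1, -2, 1).
pts = np.array([0.0, 1.0, 2.0])
D = np.abs(pts[:, None] - pts[None, :])
xi = np.array([1.0, -2.0, 1.0])
Q0 = np.outer(xi, xi); Q0 /= Q0[Q0 > 0].sum()
print(f_value(D, 2, 1, Q0))              # 0.0 exactly
print(delta_upper_estimate(D, 2, 1))     # small and positive, drifting to 0
```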
The fact that \(\Delta_{X}(p,C)\) is the infimum over a compact set also implies that it must be continuous. Here we will use the following result from elementary analysis. The proof of this result is left as an exercise for the reader.
**Proposition 3.3**.: _Let \((Y,\tau_{Y})\) be a topological space and let \((Z,\tau_{Z})\) be a compact topological space. Suppose that \(f:Y\times Z\to\mathbb{R}\) is continuous (with respect to the product topology on \(Y\times Z\)) and define \(g:Y\to\mathbb{R}\) by_
\[g(y)=\inf_{z\in Z}f(y,z)\]
_for all \(y\in Y\). Then \(g\) is continuous._
**Corollary 3.4**.: _Let \((X,d_{X})=(\{x_{1},\ldots,x_{n}\},d_{X})\) be a finite semi-metric space. Then \(\Delta_{X}:[0,\infty)\times[1,\infty)\to\mathbb{R}\) is continuous (with respect to the product topology on \([0,\infty)\times[1,\infty)\))._
It follows immediately from the above definition (and the fact that one may rescale appropriately) that if \((X,d_{X})=(\{x_{1},\ldots,x_{n}\},d_{X})\) is a finite semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\) then \(X\) has \(p\)-negative type with distortion \(C\) if and only if \(\Delta_{X}(p,C)\geq 0\). More important, however, is the following refinement of this property when dealing with strict distorted \(p\)-negative type. That is, \(\Delta_{X}\) has the following property.
**Theorem 3.5**.: _Let \((X,d_{X})=(\{x_{1},\ldots,x_{n}\},d_{X})\) be a finite semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if \(\Delta_{X}(p,C)>0\)._
Proof.: First suppose that \(\Delta_{X}(p,C)>0\). So, take \(Q=(q_{ij})_{i,j=1}^{n}\in\mathcal{O}_{n}(\mathbb{R})\) with \(Q\neq 0\). Then, since \(\operatorname{pos}(Q)\neq 0\) one may define \(R=(r_{ij})_{i,j=1}^{n}\) by \(R=\operatorname{pos}(Q)^{-1}Q\). Of course, one then has that \(R\in\mathcal{O}_{n}(\mathbb{R})\) with \(\operatorname{pos}(R)=\operatorname{pos}(Q)\operatorname{pos}(Q)^{-1}=1\). Hence
\[-\frac{1}{\operatorname{pos}(Q)}\bigg{(}\sum_{q_{ij}>0}d_{X}(x_{ i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_{j})^{p}q_{ij}\bigg{)}\] \[=-C^{2}\sum_{r_{ij}<0}d_{X}(x_{i},x_{j})^{p}r_{ij}-\sum_{r_{ij}>0 }d_{X}(x_{i},x_{j})^{p}r_{ij}\] \[\geq\Delta_{X}(p,C)\] \[>0.\]
Hence, after dividing both sides by \(-\operatorname{pos}(Q)^{-1}\) one finds that
\[\sum_{q_{ij}>0}d_{X}(x_{i},x_{j})^{p}q_{ij}+C^{2}\sum_{q_{ij}<0}d_{X}(x_{i},x_ {j})^{p}q_{ij}<0\]
which shows that \(X\) has strict \(p\)-negative type with distortion \(C\). Conversely, suppose that \(X\) has strict \(p\)-negative type with distortion \(C\). Let us keep the notation from below Definition 3.2 so that
\[\Delta_{X}(p,C)=\inf_{Q\in\mathcal{Q}}f(p,C,Q).\]
Since \(f\) is continuous and \(\mathcal{Q}\) is compact this infimum must be attained. That is, there exists some \(Q_{0}\in\mathcal{Q}\) (that depends on both \(p\) and \(C\)) such that
\[\Delta_{X}(p,C)=\inf_{Q\in\mathcal{Q}}f(p,C,Q)=f(p,C,Q_{0}).\]
But since \(X\) has strict \(p\)-negative type with distortion \(C\) one has that \(f(p,C,Q_{0})>0\) and hence by the above equation we conclude that \(\Delta_{X}(p,C)>0\) as required.
Combining this with Corollary 3.4 gives the following result pertaining to strict distorted \(p\)-negative type.
**Corollary 3.6**.: _Let \((X,d_{X})\) be a finite semi-metric space, \(0\leq p<\infty\) and \(1\leq C<\infty\). Suppose that \(X\) has strict \(p\)-negative type with distortion \(C\). Then there exists some \(\zeta>0\) (that depends on both \(p\) and \(C\)) such that \(X\) has strict \(q\)-negative type with distortion \(K\) for all \(q\in[p,p+\zeta]\) and \(K\in[\max(C-\zeta,1),C]\)._
Thus we are now able to obtain a generalisation of the fact that all finite semi-metric spaces admit a non-trivial non-distorted \(\wp_{X}\)-polygonal equality (recall that \(\wp_{X}\) is used to denote the supremal \(p\)-negative type of \(X\)).
**Corollary 3.7**.: _Let \((X,d_{X})\) be a finite semi-metric space and \(p\geq\wp_{X}\). Then there exists a non-trivial \(c_{2}(X,d_{X}^{p/2})\)-distorted \(p\)-polygonal equality in \(X\)._
Proof.: For ease of notation, let us denote \(c_{2}(X,d_{X}^{p/2})\) simply by \(c_{2}(X^{p/2})\). By Theorem 2.3 note that \(X\) has \(p\)-negative type with distortion \(c_{2}(X^{p/2})\), for all \(p\geq 0\). In particular this means that \(\Delta_{X}(p,c_{2}(X^{p/2}))\geq 0\), for all \(p\geq 0\). So, take \(p\geq\wp_{X}\) and let us assume for a contradiction that \(\Delta_{X}(p,c_{2}(X^{p/2}))>0\). Then by Theorem 3.5 this means that \(X\) has strict \(p\)-negative type with distortion \(c_{2}(X^{p/2})\) and hence by Corollary 3.6 there exists some \(\zeta>0\) such that \(X\) also has strict \((p+\zeta)\)-negative type with distortion \(c_{2}(X^{p/2})\). By Theorem 2.3 this means that \((X,d_{X}^{(p+\zeta)/2})\) embeds in \(\ell^{2}\) with distortion at most \(c_{2}(X^{p/2})\) and hence \(c_{2}(X^{(p+\zeta)/2})\leq c_{2}(X^{p/2})\). But this is impossible since the function \(r\mapsto c_{2}(X^{r/2})\) is strictly increasing for \(r\geq\wp_{X}\) (see Theorem 2.7). Hence it must be the case that \(\Delta_{X}(p,c_{2}(X^{p/2}))=0\).
Now, arguing as in the proof of Theorem 3.5 (and keeping the notation from that proof) there must exist some \(Q_{0}\in\mathcal{Q}\) such that
\[f(p,c_{2}(X^{p/2}),Q_{0})=\Delta_{X}(p,c_{2}(X^{p/2}))=0.\]
But this is just another way of saying that \(Q_{0}\) is a \(c_{2}(X^{p/2})\)-distorted \(p\)-polygonal equality in \(X\). Also, since \(Q_{0}\in\mathcal{Q}\) we have that \(\operatorname{pos}(Q_{0})=1\) and hence \(Q_{0}\neq 0\). Hence \(Q_{0}\) is a non-trivial \(c_{2}(X^{p/2})\)-distorted \(p\)-polygonal equality in \(X\) and so we are done.
The above corollary enables us to classify those \(p\) and \(C\) for which a finite semi-metric space \(X\) has strict \(p\)-negative type with distortion \(C\).
**Corollary 3.8**.: _Let \((X,d_{X})\) be a finite semi-metric space, \(0\leq p<\infty\) and \(C\geq 1\). Then \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if \(p<\wp_{X}\) or \(C>c_{2}(X,d_{X}^{p/2})\)._
Proof.: First suppose that \(p<\wp_{X}\). By the definition of \(\wp_{X}\) we have that \(X\) has \(\wp_{X}\)-negative type with distortion \(1\) and hence by Theorem 2.7, \(X\) also has strict \(p\)-negative type with distortion \(1\). But then by Proposition 2.6 we also have that \(X\) has strict \(p\)-negative type with distortion \(C\). Now, suppose instead that \(C>c_{2}(X,d_{X}^{p/2})\). Then Theorem 2.3 gives that \(X\) has \(p\)-negative type with distortion \(c_{2}(X,d_{X}^{p/2})\) and so Proposition 2.6 again shows that \(X\) has strict \(p\)-negative type with distortion \(C\).
Conversely, suppose that \(X\) has strict \(p\)-negative type with distortion \(C\) and that \(p\geq\wp_{X}\). Then Theorem 2.3 implies that \(C\geq c_{2}(X,d_{X}^{p/2})\). But by Corollary 3.7 we know that \(X\) has nonstrict \(p\)-negative type with distortion \(c_{2}(X,d_{X}^{p/2})\) and so it must be the case that \(C>c_{2}(X,d_{X}^{p/2})\).
## 4. Examples
In this section we provide two examples of semi-metric spaces and their values of (strict) distorted \(p\)-negative type, as well as some of their non-trivial polygonal equalities. The first example we look at is that of the bipartite graph \(K_{m,n}\), for which the results here are entirely new. For the second example, we make use of results from Linial and Magen [23] to compute the values of distorted \(p\)-negative type of the Hamming cube \(H_{n}\).
Throughout this section we will use the following standard notation when computing the distortion of a given embedding.
**Definition 4.1**.: _Let \((X,d_{X})\) and \((Y,d_{Y})\) be semi-metric spaces, and suppose that \(f:X\to Y\)._
1. _The contraction of_ \(f\) _is defined to be_ \[\operatorname{contraction}(f)=\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{d_{X}(x,y)}{d_{Y}(f(x),f(y))}.\]
2. _The expansion of_ \(f\) _is defined to be_ \[\operatorname{expansion}(f)=\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{d_{Y}(f(x),f(y))}{d_{X}(x,y)}.\]
It is a simple matter to check that using this notation one has that if \((X,d_{X})\) and \((Y,d_{Y})\) are semi-metric spaces and \(f:X\to Y\) is a bi-lipschitz embedding then the distortion of \(f\) is given by
\[\operatorname{dist}(f)=\operatorname{contraction}(f)\times\operatorname{ expansion}(f).\]
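These quantities translate directly into code. The helper below (Python/numpy, illustrative only) computes \(\operatorname{dist}(f)\) for a finite point configuration from a distance matrix and the images \(f(x_{i})\); the numerical checks later in this section inline the same computation.

```python
import numpy as np
from itertools import combinations

def distortion(D, images):
    # D[i, j] = d_X(x_i, x_j); images[i] = f(x_i) as a numpy vector.
    expansion = contraction = 0.0
    for i, j in combinations(range(len(images)), 2):
        dY = np.linalg.norm(images[i] - images[j])
        expansion = max(expansion, dY / D[i, j])
        contraction = max(contraction, D[i, j] / dY)
    return contraction * expansion
```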
### The Bipartite Graph \(K_{m,n}\)
The first example that we will concern ourselves with will be the bipartite graph \(K_{m,n}\) equipped with its graph metric. When we say let \((X,d_{X})\) be the bipartite graph \(K_{m,n}\) we mean that \(X\) is the space \(X=\{u_{1},\dots,u_{m},v_{1},\dots,v_{n}\}\) with metric \(d_{X}\) defined by \(d_{X}(u_{i},u_{j})=d_{X}(v_{k},v_{l})=2\), for all \(1\leq i,j\leq m\) and \(1\leq k,l\leq n\) with \(i\neq j\) and \(k\neq l\), and also \(d_{X}(u_{i},v_{j})=1\) for all \(1\leq i\leq m\) and \(1\leq j\leq n\).
To properly describe the optimal embeddings of powers of \(K_{m,n}\) into Hilbert space we first need to understand how the complete graph \(K_{n}\) can be isometrically embedded into Hilbert space. For what follows when we say let \((X,d_{X})\) be the complete graph \(K_{n}\) we mean that \(X=\{u_{1},\dots,u_{n}\}\) with metric \(d_{X}\) such that \(d_{X}(u_{i},u_{j})=1\), for all \(1\leq i,j\leq n\) with \(i\neq j\).
It is a simple matter to construct an isometric embedding of \(K_{n}\) into \(\mathbb{R}^{n}\). Indeed, one may simply take \(u_{i}\mapsto e_{i}/\sqrt{2}\) for all \(1\leq i\leq n\) where \(e_{1},\dots,e_{n}\) are the standard basis vectors. However, since \(K_{n}\) is an \(n\) point metric space it must therefore be possible to isometrically embed \(K_{n}\) into3\(\mathbb{R}^{n-1}\). While the problem of writing an explicit formula for an isometric embedding of \(K_{n}\) into \(\mathbb{R}^{n-1}\) is not a conceptually challenging one, it is rather tedious. Let
Footnote 3: It can also be shown that \(K_{n}\) cannot be isometrically embedded into \(\mathbb{R}^{r}\) for any \(r<n-1\). Indeed, since \(K_{n}\) is an ultrametric space it has strict 2-negative type and hence it must embed isometrically into \(\mathbb{R}^{n-1}\) as an affinely independent set. For such results see [15].
\[c_{n}=\frac{\sqrt{2}(1+\sqrt{n})}{2(n-1)}\]
and
\[C_{n}=\frac{1}{n}\bigg{(}c_{n}+\frac{1}{\sqrt{2}}\bigg{)}\mathbb{1}\]
where here \(\mathbb{1}\) denotes the vector in \(\mathbb{R}^{n-1}\) all of whose entries are \(1\). Then define \(\phi:K_{n}\to\mathbb{R}^{n-1}\) by \(\phi(u_{i})=e_{i}/\sqrt{2}-C_{n}\) for all \(1\leq i\leq n-1\) and \(\phi(u_{n})=c_{n}\mathbb{1}-C_{n}\). It is a simple yet tedious task to check that \(\phi\) is an isometric embedding of \(K_{n}\) into \(\mathbb{R}^{n-1}\). It is also simple to check that \(\|\phi(u_{i})\|_{2}=(1-1/n)^{1/2}/2^{1/2}\) for all \(1\leq i\leq n\).
We will refer to this particular embedding as the standard embedding of \(K_{n}\) into \(\mathbb{R}^{n-1}\).
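A quick sanity check of the formulas above (Python/numpy, illustrative): the constructed points are pairwise at distance \(1\) and share the common norm \(((1-1/n)/2)^{1/2}\).

```python
import numpy as np
from itertools import combinations

def standard_Kn_embedding(n):
    # The standard embedding of K_n into R^{n-1} described above.
    c = np.sqrt(2) * (1 + np.sqrt(n)) / (2 * (n - 1))
    Cn = (c + 1 / np.sqrt(2)) / n * np.ones(n - 1)
    pts = [np.eye(n - 1)[i] / np.sqrt(2) - Cn for i in range(n - 1)]
    pts.append(c * np.ones(n - 1) - Cn)
    return pts

pts = standard_Kn_embedding(5)
print(all(np.isclose(np.linalg.norm(u - v), 1.0)
          for u, v in combinations(pts, 2)))                 # isometric K_5
print(all(np.isclose(np.linalg.norm(u), np.sqrt((1 - 1/5) / 2))
          for u in pts))                                     # common norm
```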
We now move on to the problem of describing optimal embeddings of powers of \(K_{m,n}\). In proving the optimality of our embeddings we will require a matrix \(Q\) that will serve as a distorted polygonal equality for \(K_{m,n}\).
**Lemma 4.2**.: _Define \(Q=(q_{ij})_{i,j=1}^{m+n}\in M_{m+n}(\mathbb{R})\) by_
\[q_{ij}=\begin{cases}\frac{1}{m^{2}},&\text{ if }1\leq i,j\leq m,\\ \frac{1}{n^{2}},&\text{ if }m+1\leq i,j\leq m+n,\\ -\frac{1}{mn},&\text{ if }1\leq i\leq m,m+1\leq j\leq m+n,\\ -\frac{1}{mn},&\text{ if }m+1\leq i\leq m+n,1\leq j\leq m.\end{cases}\]
_Then \(Q\in\mathcal{O}_{m+n}(\mathbb{R})\)._
Proof.: Define \(\xi_{1},\dots,\xi_{m+n}\in\mathbb{R}\) by
\[\xi_{i}=\begin{cases}\frac{1}{m},&\text{ if }1\leq i\leq m,\\ -\frac{1}{n},&\text{ if }m+1\leq i\leq m+n.\end{cases}\]
Then \(\sum_{i=1}^{m+n}\xi_{i}=0\) and \(Q=(\xi_{i}\xi_{j})_{i,j=1}^{m+n}\). As in the proof of Proposition 2.1, it now follows that \(Q\in\mathcal{O}_{m+n}(\mathbb{R})\).
In what follows if \(r\geq 1\) then we shall use \(0_{r}\) to denote the zero vector in \(\mathbb{R}^{r}\) (i.e. the vector in \(\mathbb{R}^{r}\) all of whose coordinates are zero). Also, for ease of notation we shall set
\[\wp_{m,n}=\log_{2}\bigg{(}\frac{2mn}{2mn-m-n}\bigg{)}.\]
**Theorem 4.3**.: _Let \((X,d_{X})\) be the bipartite graph \(K_{m,n}\), where \(m,n\geq 1\) (not both \(1\)) and let \(\wp=\wp_{m,n}\). Also, let \(x_{1},\dots,x_{m}\in\mathbb{R}^{m-1}\) be the image of the standard embedding of \(K_{m}\) into \(\mathbb{R}^{m-1}\) and let \(y_{1},\dots,y_{n}\in\mathbb{R}^{n-1}\) be the image of the standard embedding of \(K_{n}\) into \(\mathbb{R}^{n-1}\) (see the comments above Lemma 4.2). Then for \(p\geq\wp\) the map \(\phi:(X,d_{X}^{p/2})\to\mathbb{R}^{m-1}\oplus\mathbb{R}^{n-1}=\mathbb{R}^{m+n-2}\) defined by_
\[\phi(u_{i}) =(x_{i},0_{n-1}),\,\forall 1\leq i\leq m,\] \[\phi(v_{j}) =(0_{m-1},y_{j}),\,\forall 1\leq j\leq n,\]
_has \(\operatorname{dist}(\phi)=c_{2}(X,d_{X}^{p/2})\). Consequently \(\wp_{X}=\wp\) and_
\[c_{2}(X,d_{X}^{p/2})=\begin{cases}1,&\text{ if }0\leq p\leq\wp,\\ 2^{p/2}\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)}\bigg{)}^{ 1/2},&\text{ if }\wp\leq p<\infty.\end{cases}\]
Proof.: It follows immediately from the definition of \(\phi\) that
\[\|\phi(u_{i})-\phi(u_{j})\|_{2}=1=\|\phi(v_{k})-\phi(v_{l})\|_{2}\]
for all \(1\leq i\neq j\leq m\), \(1\leq k\neq l\leq n\). Now take \(1\leq i\leq m\) and \(1\leq j\leq n\). By what was said above about the standard embeddings of \(K_{r}\) into \(\mathbb{R}^{r-1}\) we have that
\(\|\phi(u_{i})\|_{2}=(1-1/m)^{1/2}/2^{1/2}\) and \(\|\phi(v_{j})\|_{2}=(1-1/n)^{1/2}/2^{1/2}\). Also, since \(\phi(u_{i})\) and \(\phi(v_{j})\) are clearly orthogonal we have that
\[\|\phi(u_{i})-\phi(v_{j})\|_{2}^{2} =\|\phi(u_{i})\|_{2}^{2}+\|\phi(v_{j})\|_{2}^{2}\] \[=\frac{1}{2}\bigg{(}1-\frac{1}{m}\bigg{)}+\frac{1}{2}\bigg{(}1- \frac{1}{n}\bigg{)}\] \[=1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)}.\]
Hence, since we are thinking of \(\phi\) as a map \(\phi:(X,d_{X}^{p/2})\to\mathbb{R}^{m+n-2}\), one has that
\[\text{expansion}(\phi) =\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{\|\phi(x)-\phi(y)\|_{2}}{d_{X}(x,y)^{p/2}}\] \[=\max\bigg{(}\frac{1}{2^{p/2}},\bigg{(}1-\frac{1}{2}\bigg{(} \frac{1}{m}+\frac{1}{n}\bigg{)}\bigg{)}^{1/2}\bigg{)}.\]
Since \(p\geq\wp=\wp_{m,n}\) it is readily checked that
\[\text{expansion}(\phi)=\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n} \bigg{)}\bigg{)}^{1/2}.\]
Similarly, since \(p\geq\wp=\wp_{m,n}\) the contraction of \(\phi\) is given by
\[\text{contraction}(\phi) =\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{d_{X}(x,y)^{p/2}}{\|\phi(x)-\phi(y)\|_{2}}\] \[=\max\bigg{(}2^{p/2},\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+ \frac{1}{n}\bigg{)}\bigg{)}^{-1/2}\bigg{)}\] \[=2^{p/2}.\]
Hence the distortion of \(\phi:(X,d_{X}^{p/2})\to\mathbb{R}^{m+n-2}\) is
\[\text{dist}(\phi)=\text{contraction}(\phi)\times\text{expansion}(\phi)=2^{p/2} \bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)}\bigg{)}^{1/2}.\]
For the lower bound, let \(Q=(q_{ij})_{i,j=1}^{m+n}\) be the matrix defined in Lemma 4.2, which we know is in \(\mathcal{O}_{m+n}(\mathbb{R})\). Then taking \(z_{1}=u_{1},\ldots,z_{m}=u_{m},z_{m+1}=v_{1},\ldots,z_{m+n}=v_{n}\), Corollary 2.5 gives that
\[c_{2}(X,d_{X}^{p/2})^{2} \geq-\frac{\sum_{q_{ij}>0}d_{X}(z_{i},z_{j})^{p}q_{ij}}{\sum_{q_{ ij}<0}d_{X}(z_{i},z_{j})^{p}q_{ij}}\] \[=2^{p}\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)} \bigg{)}.\]
Thus for \(p\geq\wp\) one has that
\[c_{2}(X,d_{X}^{p/2})=2^{p/2}\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{ n}\bigg{)}\bigg{)}^{1/2}.\]
Note in particular that \(c_{2}(X,d_{X}^{\wp/2})=1\). The fact that \(c_{2}(X,d_{X}^{p/2})=1\) for all \(0\leq p<\wp\) now follows from Theorem 2.7.
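Both halves of the proof can be verified numerically. The sketch below (Python/numpy; the values of m, n and p are chosen purely for illustration) uses centred simplices, which agree with the standard embeddings up to an isometry, computes the distortion of \(\phi\) directly, and evaluates the lower-bound quotient at the matrix \(Q\) of Lemma 4.2; both agree with the closed formula.

```python
import numpy as np

m, n, p = 3, 4, 2.0                      # illustrative; note p >= wp_{3,4}
N = m + n

# Distance matrix of K_{m,n}: 2 within each part, 1 across parts.
D = np.full((N, N), 2.0)
D[:m, m:] = 1.0
D[m:, :m] = 1.0
np.fill_diagonal(D, 0.0)

def centered_simplex(r):
    # r points, pairwise distance 1, common norm sqrt((1 - 1/r)/2):
    # an isometric copy of the standard embedding of K_r.
    return (np.eye(r) - np.ones((r, r)) / r) / np.sqrt(2)

U, V = centered_simplex(m), centered_simplex(n)
pts = [np.concatenate([U[i], np.zeros(n)]) for i in range(m)] \
    + [np.concatenate([np.zeros(m), V[j]]) for j in range(n)]

pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
gap = lambda i, j: np.linalg.norm(pts[i] - pts[j])
dist = max(gap(i, j) / D[i, j]**(p/2) for i, j in pairs) * \
       max(D[i, j]**(p/2) / gap(i, j) for i, j in pairs)

xi = np.concatenate([np.full(m, 1/m), np.full(n, -1/n)])   # Lemma 4.2
Q = np.outer(xi, xi)
lower = np.sqrt(np.sum(D**p * Q * (Q > 0)) / -np.sum(D**p * Q * (Q < 0)))

target = 2**(p/2) * np.sqrt(1 - (1/m + 1/n) / 2)
print(np.isclose(dist, target), np.isclose(lower, target))   # True True
```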
**Corollary 4.4**.: _Let \((X,d_{X})\) be the bipartite graph \(K_{m,n}\), where \(m,n\geq 1\) (not both \(1\)), \(\wp=\wp_{m,n}\), \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) has \(p\)-negative type with distortion \(C\) if and only if \(0\leq p\leq\wp\), or \(p>\wp\) and_
\[C\geq 2^{p/2}\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)} \bigg{)}^{1/2}.\]
Proof.: This is a direct consequence of the above theorem and Theorem 2.3.
**Corollary 4.5**.: _Let \((X,d_{X})\) be the bipartite graph \(K_{m,n}\), where \(m,n\geq 1\) (not both \(1\)), \(\wp=\wp_{m,n}\), \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if \(0\leq p<\wp\), or \(p\geq\wp\) and_
\[C>2^{p/2}\bigg{(}1-\frac{1}{2}\bigg{(}\frac{1}{m}+\frac{1}{n}\bigg{)}\bigg{)} ^{1/2}.\]
Proof.: This is a direct consequence of Theorem 4.3 and Corollary 3.8.
### The Hamming Cube \(H_{n}\)
For our second example, we study the Hamming cube \(H_{n}\). For what follows, when we say let \((X,d_{X})\) be the Hamming cube \(H_{n}\) we mean that \(X=\{0,1\}^{n}\subseteq\mathbb{R}^{n}\) with metric \(d_{X}\) defined by \(d_{X}(x,y)=\sum_{i=1}^{n}|x_{i}-y_{i}|\), for all \(x=(x_{1},\dots,x_{n})^{T},y=(y_{1},\dots,y_{n})^{T}\in H_{n}\).
Once again, we start by detailing our candidate for a polygonal equality for \(H_{n}\). Here it will be easier to omit mention of any explicit ordering of the elements of \(X=H_{n}\) and instead use the notation \(Q=(q_{x,y})_{x,y\in X}\in M_{2^{n}}(\mathbb{R})\).
A proof of the following lemma can be found in [23].
**Lemma 4.6**.: _Let \((X,d_{X})\) be the Hamming cube \(H_{n}\), where \(n\geq 2\), and define \(Q=(q_{x,y})_{x,y\in X}\) by_
\[q_{x,y}=\begin{cases}n-1,&\text{ if }x=y,\\ -1,&\text{ if }d_{X}(x,y)=1,\\ 1,&\text{ if }d_{X}(x,y)=n,\\ 0,&\text{ otherwise.}\end{cases}\]
_Then \(Q\in\mathcal{O}_{2^{n}}(\mathbb{R})\)._
**Theorem 4.7**.: _Let \((X,d_{X})\) be the Hamming cube \(H_{n}\), where \(n\geq 2\). For \(p\geq 1\) define the map \(\phi:(X,d_{X}^{p/2})\to\mathbb{R}^{n}\) by \(\phi(x)=x\). Then \(\phi\) is an optimal distortion embedding. Consequently, one has that_
\[c_{2}(X,d_{X}^{p/2})=\begin{cases}1,&\text{ if }0\leq p\leq 1,\\ n^{(p-1)/2},&\text{ if }1\leq p<\infty.\end{cases}\]
Proof.: Note that if \(x_{k},y_{k}\in\{0,1\}\) then \(x_{k}-y_{k}\in\{0,\pm 1\}\) and so \((x_{k}-y_{k})^{2}=|x_{k}-y_{k}|\). Hence if \(x=(x_{1},\dots,x_{n})^{T},y=(y_{1},\dots,y_{n})^{T}\in X\) then
\[\|\phi(x)-\phi(y)\|_{2}^{2}=\sum_{k=1}^{n}(x_{k}-y_{k})^{2}=\sum_{k=1}^{n}|x_ {k}-y_{k}|=d_{X}(x,y).\]
Hence if \(d_{X}(x,y)=k\) then \(\|\phi(x)-\phi(y)\|_{2}=k^{1/2}\). Since \(p\geq 1\) the expansion and contraction of \(\phi\) are then given by
\[\text{expansion}(\phi)=\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{\|\phi(x)-\phi(y)\|_{2}}{d_{X}(x,y)^{p/2}}=\max_{1 \leq k\leq n}\frac{k^{1/2}}{k^{p/2}}=\max_{1\leq k\leq n}k^{(1-p)/2}=1\]
\[\text{contraction}(\phi)=\sup_{\begin{subarray}{c}x,y\in X\\ x\neq y\end{subarray}}\frac{d_{X}(x,y)^{p/2}}{\|\phi(x)-\phi(y)\|_{2}}=\max_{1 \leq k\leq n}\frac{k^{p/2}}{k^{1/2}}=\max_{1\leq k\leq n}k^{(p-1)/2}=n^{(p-1)/2}.\]
Hence the distortion of \(\phi\) is given by
\[\text{dist}(\phi)=\text{contraction}(\phi)\times\text{expansion}(\phi)=n^{(p-1)/ 2}.\]
Now all that we need to do is show that this cannot be improved. Define \(Q=(q_{x,y})_{x,y\in X}\) as in Lemma 4.6. Then by Corollary 2.5 we have that
\[c_{2}(X,d_{X}^{p/2})^{2} \geq-\frac{\sum_{q_{x,y}>0}d_{X}(x,y)^{p}q_{x,y}}{\sum_{q_{x,y}< 0}d_{X}(x,y)^{p}q_{x,y}}\] \[=-\frac{\sum_{d_{X}(x,y)=n}d_{X}(x,y)^{p}q_{x,y}}{\sum_{d_{X}(x,y )=1}d_{X}(x,y)^{p}q_{x,y}}\] \[=-\frac{2^{n}\times n^{p}\times 1}{n2^{n}\times 1^{p}\times-1}\] \[=n^{p-1}.\]
Hence if \(p\geq 1\) then
\[c_{2}(X,d_{X}^{p/2})=n^{(p-1)/2}.\]
Note in particular that \(c_{2}(X,d_{X}^{1/2})=1\). The fact that \(c_{2}(X,d_{X}^{p/2})=1\) for all \(0\leq p<1\) now follows from Theorem 2.7.
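As with \(K_{m,n}\), Theorem 4.7 admits a direct numerical check (Python/numpy; the parameters are illustrative): build \(H_{n}\), confirm that the matrix \(Q\) of Lemma 4.6 lies in \(\mathcal{O}_{2^{n}}(\mathbb{R})\), and compare the identity embedding's distortion with the lower-bound quotient.

```python
import numpy as np
from itertools import product

n, p = 3, 2.0
V = np.array(list(product([0, 1], repeat=n)), dtype=float)   # 2^n vertices
D = np.abs(V[:, None, :] - V[None, :, :]).sum(axis=2)        # Hamming distances

# The matrix Q of Lemma 4.6.
Q = np.where(D == 0, n - 1.0, np.where(D == 1, -1.0, np.where(D == n, 1.0, 0.0)))
print(np.allclose(Q @ np.ones(2**n), 0),                     # Q 1 = 0
      np.all(np.linalg.eigvalsh(Q) > -1e-9))                 # positive semidefinite

# Lower bound of Corollary 2.5 versus the identity embedding's distortion.
ratio = np.sum(D**p * Q * (Q > 0)) / -np.sum(D**p * Q * (Q < 0))
gaps = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
off = D > 0
dist = (gaps[off] / D[off]**(p/2)).max() * (D[off]**(p/2) / gaps[off]).max()
print(np.isclose(np.sqrt(ratio), n**((p-1)/2)), np.isclose(dist, n**((p-1)/2)))
```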
**Corollary 4.8**.: _Let \((X,d_{X})\) be the Hamming cube \(H_{n}\), where \(n\geq 2\), \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) has \(p\)-negative type with distortion \(C\) if and only if \(0\leq p\leq 1\), or \(p>1\) and_
\[C\geq n^{(p-1)/2}.\]
Proof.: This is a direct consequence of the above theorem and Theorem 2.3.
**Corollary 4.9**.: _Let \((X,d_{X})\) be the Hamming cube \(H_{n}\), where \(n\geq 2\), \(0\leq p<\infty\) and \(1\leq C<\infty\). Then \(X\) has strict \(p\)-negative type with distortion \(C\) if and only if \(0\leq p<1\), or \(p\geq 1\) and_
\[C>n^{(p-1)/2}.\]
Proof.: This is a direct consequence of Theorem 4.7 and Corollary 3.8.
## Acknowledgments
The work of the author was supported by the Research Training Program of the Department of Education and Training of the Australian Government. The author also wishes to thank Ian Doust for his help in reading over many drafts of this paper. |
2309.14101 | Coulomb potential screening via charged carriers and charge-neutral
dipoles/excitons in two-dimensional case | With the shrinking of dimensionality, Coulomb interactions play a distinct
role in two-dimensional (2D) semiconductors owing to the reduced dielectric
screening in the out-of-plane direction. Apart from dielectric screening, free
charge carriers and/or dipoles can also make a non-negligible contribution to
Coulomb interaction. While the Thomas-Fermi model is effective in describing
charge carrier screening in three dimensions, the extent of screening to two
dimensions resulting from charge carriers and charge-neutral dipoles remains
quantitatively unclear. Herein, we present an analytical solution based on
linear response theory, offering a comprehensive depiction of the Coulomb
screened potential in both 2D and 3D systems, where screening effects from both
charge carriers and charge-neutral dipoles are addressed. Our work provides a
useful and handy tool for directly analysing and evaluating Coulomb interaction
strength in atomically thin materials, particularly in the context of
electronic and optoelectronic engineering. As a demonstration, we utilized the
derived modified Coulomb potential for the exciton system in 2D semiconductors
to estimate the exciton binding energy variation arising from the exciton
density fluctuation and temperature-dependent exciton polarizability, yielding
excellent agreement with the computational and experimental findings. | Ke Xiao, Chi-Ming Kan, Stuart. S. P. Parkin, Xiaodong Cui | 2023-09-25T12:50:49Z | http://arxiv.org/abs/2309.14101v2 | **Coulomb potential screening via charged carriers and charge-neutral dipoles/excitons in two-dimensional case**
## Abstract:
With the shrink of dimensionality, Coulomb interaction displays a distinct role owing to the reduced dielectric screening in out-of-plane direction. Apart from the dielectric screening, the free charge carriers and/or dipoles can also make nonnegligible contribution to Coulomb interaction. While the Thomas Fermi model is effective in describing charge carrier screening in three dimensions, the extent of screening to two-dimension resulting from charge-neutral dipoles and carriers remains quantitatively unclear. To address this gap, we present a simple analytical solution based on linear response theory, offering a comprehensive depiction of the Coulomb screened potential in both 2D and 3D systems, where screening effects from both charge carriers and charge-neutral dipoles are addressed. Our work provides a handy tool for directly analysing and evaluating Coulomb interaction strength in atomically thin materials and particularly in the context of electronic and optoelectronic engineering. As a demonstration, we utilize the derived modified Coulomb potential for the exciton system to estimate the exciton binding energy variation arising from exciton density fluctuation and the temperature dependent exciton polarizability, yielding excellent agreement with the experimental and computational findings.
## Introduction:
The rise of atomically thin two-dimensional (2D) materials provides an ultimate 2D platform for physics research and great promise of applications from their fascinating properties. With the shrink of dimensionality, Coulomb interaction is greatly enhanced owing to reduced dielectric screening and spatial confinement [1, 2]. This enhanced Coulomb interaction plays a more significant role in electronic properties of the 2D materials than their three-dimension (3D) counterparts, usually determining characteristic optical and electric properties of 2D materials. Renowned evidences include the giant exciton binding energy [3, 4, 5], the significant renormalization of electronic bandgap [6, 7], moire excitons in 2D heterostructures [8, 9, 10], enhanced superconductivity [11, 12] etc. It is one of the concurrently focused topics to elaborate Coulomb interactions in depicting exotic phenomena in 2D materials [13, 14, 15].
In contrast to 3D cases where the macroscopic Coulomb screening is well described by a |
2310.00017 | Influence of Orbital Angular Momentum of light on Random Spin-split
modes in Disordered Anisotropic Optical media | Spin orbit interaction of light in a disordered anisotropic medium is known
to yield spin split modes in the momentum domain because of the random spatial
gradient of the geometric phase of light. Here, we have studied the statistics
of such spin split modes for beams carrying intrinsic orbital angular momentum
through the quantification of momentum domain entropy and investigated its
dependence on various beam parameters. The influence of the spatial structure
of the beam and the phase vortex on the statistics of the spin split modes were
separately investigated using input Laguerre-Gaussian and Perfect Vortex beams
passing through disordered anisotropic medium with controlled input disorder
parameter, which was realized by modulating the pixels of a liquid
crystal-based spatial light modulator. The results of systematic investigations
on the impact of beam waist, spot size and topological charge of the vortex
beam shows that the influence of the spot size on the emergence of the random
spin split modes is much more significant as compared to the other beam
parameters. | Anwesha Panda, Sneha Dey, Yogishree Arabinda Panda, Aditya Anurag Dash, Aloke Jana, Nirmalya Ghosh | 2023-09-26T09:08:05Z | http://arxiv.org/abs/2310.00017v1 | Influence of Orbital Angular Momentum of light on Random Spin-split modes in Disordered Anisotropic Optical media
###### Abstract
Spin orbit interaction of light in a disordered anisotropic medium is known to yield spin split modes in the momentum domain because of the random spatial gradient of the geometric phase of light. Here, we have studied the statistics of such spin split modes for beams carrying intrinsic orbital angular momentum through the quantification of momentum domain entropy and investigated its dependence on various beam parameters. The influence of the spatial structure of the beam and the phase vortex on the statistics of the spin split modes were separately investigated using input Laguerre-Gaussian and Perfect Vortex beams passing through disordered anisotropic medium with controlled input disorder parameter, which was realized by modulating the pixels of a liquid crystal-based spatial light modulator. The results of systematic investigations on the impact of beam waist, spot size and topological charge of the vortex beam shows that the influence of the spot size on the emergence of the random spin split modes is much more significant as compared to the other beam parameters.
**Key words:** Spin-orbit interaction of light, Vortex beam, Spatial phase gradient.
## 1 Introduction
Spin-orbit coupling, also known as spin-orbit interaction (SOI), is a universal concept in physics that involves the coupling of spin and orbital degrees of freedom in both particles with mass (e.g., electrons) and massless particles (e.g., photons) due to relativistic effects. It occurs in various systems, ranging from atomic and condensed-matter systems to optical technologies, giving rise to interesting phenomena and potential applications. However, it is intriguing to note that this phenomenon, although typically discussed in the context of quantum particles, can also be manifested in classical light. This is because classical light beams can carry both spin angular momentum (SAM) related to circular polarization and orbital angular momentum (OAM) associated with helical wavefronts of light, both of which interconvert into each other to create rich physics associated with SOI.
The SOI of light gives rise to two closely intertwined phenomena. The first phenomenon involves the influence of the light's trajectory on its state of polarization, resulting in the emergence of spin-dependent optical vortices, which is commonly observed in systems with cylindrical or spherical symmetry. The second phenomenon is the reciprocal effect where polarization affects the trajectory of light. This effect is termed the Spin Hall effect (SHE) of light and is typically observed when symmetry is broken[1, 2, 3, 4, 5, 6, 7, 8, 9]. SOI of light is basically of two types: one is geometric-phase mediated and the other is transverse-angular-momentum mediated. Geometric phases and their gradients, along with the preservation of the total angular momentum of light, are closely linked to optical Spin-Orbit Interaction (SOI) phenomena. Within this context, two distinct forms of geometric phase play a role: the spin redirection Berry phase and the Pancharatnam-Berry (PB) geometric phase. The Spin Hall effect (SHE) originating from the geometric phase gradient leads to either a spatial domain or a momentum domain shift. The other type of SHE, which is completely independent of the geometrical phase of light and originates from the transverse spin angular momentum of light observed for surface waves, evanescent waves, and waveguide modes, is not investigated in this paper [10, 11, 12, 13, 2, 3].
These advancements have resulted in a variety of fundamental effects in the realm of photonic SOI across diverse light-matter interactions. Remarkable phenomena include the spin-to-vortex transformation, the orbital Hall effect, the optical Rashba effect, the plasmonic Aharonov-Bohm effect, spin-dependent transverse momentum, transverse SAM, spin-momentum locking, and spin-controlled directional waveguiding. These breakthroughs have paved the way for novel insights into universal SOI principles, offering new possibilities for designing spin-orbit photonic devices [13, 14, 15, 16, 17, 18, 19].
Most of the scenarios described previously in the context of spin-orbit interactions, specifically those dealing with geometrical phases, are for ordered inhomogeneous anisotropic media. One such realization is the metasurface, which is fabricated by spatially structuring anisotropic media at the nanometer length scale. However, a recent discovery showcased the possibility of obtaining spin-orbit-coupled random scattering modes across the entire momentum range in a completely disordered, inhomogeneous, and anisotropic optical system. This phenomenon, known as the random optical Rashba effect[20, 21], is characterized by the presence of disordered spin-orbit coupling throughout the beam profile or a disordered spatial distribution of the geometric phase and its gradient [8]. Notably, the impact of intrinsic orbital angular momentum beams on spin-split modes remains unexplored, which is the focus of this investigation.
This paper aims to explore the influence of topological charge, spot size, and spatial structure of the beam on the resulting spin-split modes in disordered anisotropic media. For this purpose, we have separately investigated input Laguerre-Gaussian (LG) and Perfect Vortex (PV) beams. The main purpose of using PV beams is to investigate the roles of spot size and topological charge separately, as the size of vortices is independent of topological charge for PV beams. Here, we have quantified the statistics of spin-split modes in the momentum domain by the standard Shannon entropy. The study shows that the momentum domain entropy of the spin-split modes is affected by both the topological charge and the spatial structure.
The structure of this paper is as follows: Section 2 presents the theoretical framework of LG beams and perfect vortices, along with the formation of spin-split modes. Section 3 outlines the experimental procedure, while Section 4 discusses the simulations and the momentum-space entropy results for the scattered modes. Finally, Section 5 provides concluding remarks and a summary of the findings.
## 2 Theory
Let \(|E_{i}\rangle\) and \(|E_{o}\rangle\) be the input and output electric fields, respectively. When the beam (containing RCP or LCP polarized light) passes through an anisotropic medium that is inhomogeneous in the transverse plane (with transverse coordinate \(\xi\to\) x/y and z being the propagation direction of light), we get
\[|E_{o}\rangle=e^{i[\phi_{d}(\xi)\pm\phi_{g}(\xi)]}\left|E_{i}\right\rangle \tag{1}\]
where \(\phi_{d}(\xi)\) is the dynamic phase and \(\phi_{g}(\xi)\) is the PB geometric phase. When the two phases have equal gradients, \(\frac{d\phi_{d}(\xi)}{d\xi}=\frac{d\phi_{g}(\xi)}{d\xi}=\Omega_{\xi}\), then the momentum domain shifts for the RCP and LCP polarization states become
\[<k_{\xi}>_{RCP}=2\Omega_{\xi}\quad and\quad<k_{\xi}>_{LCP}=0 \tag{2}\]
where
\[\langle k_{\xi}\rangle=\frac{\langle E_{o}|\,i\frac{\partial}{\partial\xi}\,|E_{o}\rangle}{\langle E_{o}|E_{o}\rangle},\qquad\xi=x/y,\]
i.e., for RCP polarization the momentum domain shift becomes twice the spatial gradient of the phase (geometric or dynamic), while for LCP polarization no momentum domain shift occurs [22, 23].
**Dynamical phase and Pancharatnam-Berry (PB) geometric phase in twisted nematic liquid crystal layers:** Polarized light gains PB geometric phase as well as dynamical phase while propagating in an anisotropic material. The dynamical phase for a linear birefringent medium is determined by the extraordinary and ordinary refractive indices (\(n_{e}\) and \(n_{o}\)) and consequently it also depends upon the magnitude of linear retardance \(\delta\) (defined as \(\delta=\frac{2\pi}{\lambda}(n_{e}-n_{o})d\), where \(d\) is the path length and \(\lambda\) is the wavelength). The PB geometric phase in such a birefringent medium, on the other hand, is determined by the orientation angle of the anisotropy axis. Thus, in principle, one can produce equal spatial gradients of the dynamical phase and PB geometric phase in an inhomogeneous birefringent medium by controllably and simultaneously changing the magnitude of linear retardance \(\delta\) and the orientation angle of the anisotropy axis in the transverse plane.
The above can be realized with one of the readily available systems, a twisted nematic liquid crystal-based spatial light modulator (SLM). The evolution of polarization in an SLM can alternatively be modelled using the effective Jones matrix (\(J_{eff}\)) as a sequential product of the matrices of an equivalent linear retarder (\(J_{reta}\), with effective linear retardance \(\delta_{eff}\) and orientation angle \(\theta_{eff}\)) and an effective optical rotator (with optical rotation \(\psi_{eff}\)). The rotation does not truly have a dynamical origin; it is actually related to the twist angle (\(\psi\)) [22, 24, 25, 26, 27, 28].
\[J_{eff}=R(\psi_{eff})J_{reta}(\delta_{eff},\theta_{eff}) \tag{3}\]
\[\text{where }\psi_{eff}=-\psi+2\theta_{eff}\]
The total dynamical phase is primarily determined by the total linear retardance \(\delta\), while the PB geometric phase is determined by the effective optical rotation (\(\psi_{eff}\)). It was shown previously [22] that, for a certain range of grey values (n=30-170), the two gradients \(\frac{d\delta(n)}{dn}\) and \(\frac{d\psi_{eff}(n)}{dn}\) are equal. This leads to an equal gradient of the geometric and dynamic phases, which results in equation 2. The variation of the \(\delta\), \(\psi_{eff}\) parameters depends on the grey-level values (n). By changing the grey level values (n) from 30 to 170, both geometric and dynamical phases can be simultaneously tailored in an SLM. In our experiment we have modulated the pixels of the SLM by generating random grey values (n) using a delta-correlated, uniformly distributed random function: \(f^{\epsilon}(\phi_{g})=1/(2\pi\epsilon)\) for \(-\epsilon\pi\leq\phi_{g}(x)<\epsilon\pi\) and \(f^{\epsilon}(\phi_{g})=0\) otherwise, with \(0\leq\epsilon\leq 1\), where \(\epsilon\) controls the amount of randomness at the coordinate \(x\) (shown in Fig. 1).
The variation of phase in the transverse plane manifests as a distribution of intensities in the momentum domain, which can be attributed to either the dynamical phase, or the geometrical phase, or a combination of both. In particular, when this distribution is generated through the influence of only the geometrical phase (\(\phi=\pm\phi_{g}\)), the SHE becomes observable for right circularly polarized (RCP) and left circularly polarized (LCP) light. Here, the SOI of light results from the light beam's spatial inversion symmetry being broken by an inhomogeneous distribution of a combined geometric and dynamic phase [1, 2, 3, 4, 5, 6, 7, 8, 29]. The strength of such an effect depends on the phase inhomogeneity acquired by the light beam. The corresponding intensity will be distributed throughout the momentum space as
\[I(k_{x},k_{y})=|\iint_{-\infty}^{+\infty}e^{-i(k_{x}x+k_{y}y)}E_{o}(x,y)dxdy|^ {2}. \tag{4}\]
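The momentum-domain distribution of equation 4 can be simulated directly. The following sketch (Python/numpy; the grid size, beam waist, topological charge, number of azimuthal sectors and \(\epsilon\) are all illustrative choices, not the experimental values) idealises each azimuthal sector's grey value of Fig. 1(b) as a uniformly random geometric phase on an \(l=5\) LG-like beam; following equation 2, the RCP component acquires twice this phase while the LCP component acquires none.

```python
import numpy as np

# Schematic simulation of Eq. (4): far-field intensity of an l = 5 LG-like
# vortex after acquiring a random azimuthal geometric phase (Fig. 1b style).
Npix, w, l, eps, nseg = 512, 0.25, 5, 1.0, 60     # illustrative parameters
x = np.linspace(-1, 1, Npix)
X, Y = np.meshgrid(x, x)
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)

E_in = (np.sqrt(2) * rho / w)**abs(l) * np.exp(-rho**2 / w**2 + 1j * l * phi)

# Random geometric phase, constant on each of nseg azimuthal sectors and
# uniformly distributed on [-eps*pi, eps*pi) (the disorder model in the text).
rng = np.random.default_rng(0)
seg_phase = rng.uniform(-eps * np.pi, eps * np.pi, nseg)
phi_g = seg_phase[((phi + np.pi) / (2 * np.pi) * nseg).astype(int) % nseg]

E_out = np.exp(2j * phi_g) * E_in     # RCP picks up twice the phase; LCP none
I_k = np.abs(np.fft.fftshift(np.fft.fft2(E_out)))**2   # Eq. (4) on a grid
# I_k now plays the role of I(k_x, k_y); setting the phase factor to 1
# would reproduce the unperturbed (LCP-like) far field instead.
```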
It is also important to note that, as long as we can describe the system (the anisotropic medium) by a local phase gradient (\(\Omega_{\xi}\)), we will observe the momentum domain spin-Hall effect of light as per equation 2. But as the randomness increases, beyond a certain level of disorder where the system can no longer be described by a particular local gradient, one would observe random spin-split scattered modes in the momentum domain. To quantify the statistics of the momentum domain intensity distribution of the random spin-split scattered modes, we have defined the well-known Shannon entropy function in the following way:
\[H=-\sum_{i}p_{i}[I(k_{x},k_{y})]log(p_{i}[I(k_{x},k_{y})]) \tag{5}\]
where \(p_{i}[I(k_{x},k_{y})]\) is the probability density function (PDF) of intensity distribution of the scattered modes in the momentum domain[8].
For this purpose, in addition to the normal Gaussian beam we have used Laguerre-Gaussian (LG) and Perfect Vortex (PV) beams with different \(l\) values (topological charges). Perfect vortex beams have a constant spatial structure regardless of their topological charge. This characteristic makes them a valuable tool for decoupling the effects of the spatial beam structure and the phase vortex.
Laguerre-Gaussian beam at the source plane z=0 has the form -
\[E_{pl}(x,y,0)=(\frac{\sqrt{2}\rho}{\omega})^{l}L_{p}^{l}(\frac{2\rho^{2}}{ \omega^{2}})exp(-\frac{\rho^{2}}{\omega^{2}}+il\phi) \tag{6}\]
where \(\rho=(x^{2}+y^{2})^{\frac{1}{2}}\), \(\phi=\tan^{-1}(y/x)\), and \(\omega\) is the beam waist. \(L_{p}^{l}\) is the associated Laguerre polynomial, where \(p\) and \(l\) are the radial and angular mode indices (topological charges), respectively.
An approximate model of the PV beam [30] to enable experiments is as follows:
\[E_{PV}(\rho,\phi_{0})=exp[-\frac{(\rho-\rho_{0})^{2}}{\Delta\rho^{2}}]exp(il \phi_{0}) \tag{7}\]
where \((\rho,\phi_{0})\) are the polar coordinates in beam cross section, \(l\) is the topological charge \(\rho_{0}\) is the radius of annular bright intensity, \(\Delta\rho\) is a small width. The Fourier transformation of an ideal Bessel beam function [31] may be used to calculate the approximate model of PV beams, and it can be written as
\[E_{BG}(\rho,z)=J_{l}(k_{r}\rho)exp(il\phi_{0}+ik_{z}z) \tag{8}\]
where \(J_{l}\) is the first kind of \(l^{th}\) order Bessel function, \(k=\sqrt{k_{r}^{2}+k_{z}^{2}}=2\pi/\lambda\), \(r=(\rho,\phi_{0})\) and \((k_{r},k_{z})\) are the radial and longitudinal wave vectors respectively.
Figure 1: The geometric phase distribution projected in the spatial light modulator (SLM). a) the grid was divided into pixels in x and y directions, where each pixel has a single random grey value (generated using delta correlated function as mentioned above). b) grid is divided azimuthally into n number of uniform divisions, with each azimuthal division having a single random grey scale value (n).
## Experimental Methods
A schematic of the experimental setup for observing spin-split scattering modes for the LG beam is shown in Figure 2(a). A fundamental Gaussian mode of the 632.8 nm line of a He-Ne laser (HNL050L, Thorlabs, 5 mW power) is used in this set-up. The beam is transmitted through a spatial light modulator (SLM1, LC2012). A computer-generated fork hologram for different topological charges (\(\pm\)l) is projected onto SLM1 to produce an orbital-angular-momentum-carrying (LG) beam. After passing through SLM1, the central (zeroth-order) beam remains Gaussian, and its adjacent (first-order) beams are LG with \(\pm\)l topological charges. An aperture is used after the SLM to select and pass only one LG beam (\(+l\) or \(-l\)) as needed. After being reflected by two mirrors (M1 and M2), the beam passes through the PSG (polarization-state generator) unit, comprising a Glan-Thompson linear polarizer (P1, GTH10M, Thorlabs, USA). The PSA (polarization-state analyzer) unit consists of a linear polarizer and a quarter-wave plate, positioned in the reverse order, for selecting LCP or RCP polarization. Between the PSG and PSA, we have a second spatial light modulator (SLM2, same model as SLM1) realized as the random inhomogeneous anisotropic media (shown in Figures 1a and 1b). Finally, a lens is used to obtain the Fourier image in the momentum domain. A CCD (1024\(\times\)768 square pixels, pixel dimension 4.65 \(\mu\)m, Thorlabs, USA) is used at the end to collect the light at the Fourier plane. Spin-selective random scattering modes are observed for RCP polarization only; LCP polarization shows no such effect. Figures 2(b) and 2(c) show the momentum-domain intensity distribution for the input LG beam, while Figures 2(d) and 2(e) are for the input PV beam.
In the usual case of the spin-Hall effect of light, the input beam selected by the PSG is linearly polarized. In SLM2, we synchronously modulate the geometric and dynamic phases as shown in [22]. For one circular polarization (RCP), the geometric and dynamic phases add up; for the other, they cancel out, giving no effect from the disorder. These are called spin-selective scattering modes. To observe these modes, the inhomogeneous anisotropic media were realized by modulating the pixels of SLM2 with randomly distributed grey values (n=30-170). The spin-selective scattering modes have been observed for all the beams: Gaussian, LG, and perfect vortex. Figures 2(b),(c) show this effect for the LG beam with topological charge \(l\)=5, and Figures 2(d),(e) for the PV beam with topological charge \(l\)=5.
To build a setup for the perfect vortex (PV) beam, we need an additional lens and an axicon in the above setup. The axicon converts the LG beam to a Bessel-Gauss beam; lens L1 is then used as a Fourier lens to transform the Bessel-Gauss beam into a perfect vortex. The positions of lens L1 and the axicon are commutative. The PV beam is obtained exactly at the focal plane of lens L1, where SLM2 [projecting the disordered anisotropic media (shown in Figure 1)] has to be placed accurately; a slight shift from this plane gives a Bessel-Gauss beam instead. To obtain the Fourier transform, lens L2 also has to be placed in the Fourier plane of L1. It was an experimental challenge to adjust L2 and SLM2 in the same plane, keeping the distance between them to less than 1 millimeter. We observed spin-selective scattering modes for the perfect vortex beam as well, similar to the LG and Gaussian beams.
In order to calculate the momentum-space (Shannon) entropy in the Fourier plane of the LG or PV beam, we compute the normalized probability density function (PDF) of the intensity distribution collected by the CCD camera of 1024\(\times\)768 pixels. This normalized PDF is used in the expression for the Shannon entropy, Eqn 5 [8], to obtain the momentum-space entropy.
Figure 2: **(a) Schematic of the experimental arrangement for observing spin-selective random scattering modes using Gaussian and Laguerre-Gaussian (LG) beams.** He-Ne laser: light source; SLM1, SLM2: spatial light modulators (SLM1 was used to generate the LG beam with different topological charges, SLM2 was used to realize the disordered anisotropic media); A: aperture; M1, M2: mirrors; P1, P2: polarizers; QWP: quarter-wave plate; L: lens; CCD: camera (placed in the Fourier plane to record the momentum domain). The polarization-state analyzer (PSA) unit comprises a linear polarizer and a quarter-wave plate; it selects the desired polarization state of light (LCP or RCP in this case). Spin-selective scattering modes of the LG beam transmitted through disordered anisotropic media of \(\epsilon\)=1 are shown for topological charge \(l\)=5 for RCP **(b)** and LCP **(c)**, obtained by projecting 50 azimuthal divisions on SLM2 (each division has a different grey-scale value (n), assigned randomly). Similar results are shown for an input PV beam of topological charge \(l=5\) passing through the same grey-scale pattern projected on SLM2; **(d)** LCP and **(e)** RCP. The spin-selective property of the random inhomogeneous media [23] holds not only for the Gaussian beam but also for the LG and PV beams. **(f) Schematic of the experimental setup to observe spin-selective scattering modes of input perfect vortex beams.** Most components are similar to the setup shown in (a). Additional components used to produce the PV beam are an axicon and lens L1. L1, L2: lenses used as Fourier lenses; the axicon converts the LG beam to a Bessel-Gauss beam.
## Results and Discussions
First, we consider the effect of an input LG beam with varying \(l\) values and varying input disorder parameter \(\epsilon\) of the disordered anisotropic optical media. Figure 3 displays the experimentally observed momentum-space intensity distributions of Gaussian and LG beams with \(l\)=0, 3, 5, and with varying input disorder parameters, \(\epsilon\) = 0, 0.5, 1. It is observed that, as the input disorder parameter \(\epsilon\) increases, the number of random scattered spin-split modes increases gradually, which indicates an increase in momentum-space entropy. This variation has been quantified using equation 5 and is presented in Fig 4.
Figure 4 depicts the variation of the momentum-space entropy with the input \(\epsilon\) parameter and with varying topological charge \(l\) for the input LG beam. We observe a sudden surge in momentum-space entropy at a critical value of the disorder parameter \(\epsilon\), i.e., there exists a threshold value of \(\epsilon\). This provides strong evidence for the existence of a phase transition within the system under study. This occurs because, as long as a local spatial phase gradient (both geometrical and dynamical) can be clearly defined within the system, we anticipate the spin-Hall shift. However, as the heterogeneity of the optical media intensifies, a critical point is reached where the local spatial phase gradient can no longer be distinctly defined, i.e., the media becomes completely random, and we observe the emergence of random spin-split modes. In Fig. 4 (a) and (b) we see a clear threshold value of the disorder parameter \(\epsilon\) around 0.25. Beyond this, as the input disorder parameter \(\epsilon\) increases, the momentum-space entropy increases rapidly. Also, we see that greater \(l\) values imply greater total momentum-space entropy for a given input disorder parameter \(\epsilon\). It is vital to note that as one varies the \(l\) value, one varies not only the topological charge but also the effective vortex size, or input spot size, of the beam in the spatial domain. This correspondingly changes the amount of inhomogeneity probed by the input beam in the spatial domain and, consequently, its momentum-domain span. It may be that \(l\) influences the spin-split modes, but in addition the effective spot size will also affect them, as a larger spot size probes a greater area of inhomogeneity. This is why we anticipated that the spin-split modes would be affected not only by \(l\) but also by the spot size of the input beam. Thus, it is necessary to investigate the role of beam spot size separately.
We have investigated the role of the spot size of the input beam by keeping the \(l\) value the same and varying the effective spot size of the input beam. The momentum-space entropy variation shown in Fig 5 follows the same trend as in Fig 4. A clear threshold is observed here for the disorder parameter around \(\epsilon\) = 0.25, similar to the previous case. However, the variation of momentum-space entropy with the input disorder parameter \(\epsilon\) is much steeper when
Figure 4: Momentum space entropy (\(H\)) variation of LG beams with different topological charges, \(l\) = 0, 3, 5, 7, 10, passing through disordered anisotropic media while the disorder parameter \(\epsilon\) varies from 0 to 1. The results are for different effective beam waists in the spatial domain, (a) 2.5mm and (b) 1.88mm. The LG beam is passed through SLM2, which projects the Figure 1(a) random media. Results are shown only for RCP polarization, selected using the PSA.
Figure 3: Variation of the momentum-domain intensity distribution of Gaussian and Laguerre-Gaussian beams for different topological charges (\(l\)=0, 3, 5) transmitted through disordered anisotropic media with input disorder parameter \(\epsilon\) = 0, 0.5, 1. As the \(\epsilon\) value increases, the beam becomes more and more scattered. SLM2 was projected with 50 azimuthal divisions of random grey-scale values (n). Results are shown only for RCP polarization, selected using the PSA.
we have a larger spot size of the input beam. The important point here is that if we take the same heterogeneous system and use a smaller beam spot size, the beam probes less of the inhomogeneity but the corresponding momentum span is large. On the other hand, for a larger spot size, the scale of inhomogeneity probed by the input beam is greater, for a correspondingly smaller momentum-domain span. So, one would expect a higher momentum-space entropy in the latter case. Thus, the effects of spot size and topological charge \(l\) on the spin-split modes are still intertwined.
To decouple their effects, we have used perfect vortex (PV) beams. As mentioned in the theory section, the spatial structure of the PV beam remains constant for all \(l\) values. The experimentally obtained PV beams are shown in Fig. 6 (a). The figure shows that for topological charges \(l=3\), \(5\), \(7\) the spatial structure of the beam remains constant in the spatial domain. The corresponding momentum-space intensity distributions for \(l=3\), \(5\), \(7\) and input disorder parameters \(\epsilon=0\), \(0.5\), \(1\) are shown in Fig. 6 (b). Similar to the previous case, with increasing input disorder parameter \(\epsilon\), the beam gets more scattered. For the maximum input disorder parameter, \(\epsilon=1\), the maximum intensity of the central beam decreases, as the number of random spin-split modes is highest in this case. Since the spatial structure of PV beams is independent of \(l\), the inhomogeneity probed by the input PV beam remains fixed for all values of \(l\), unlike the case of the LG beam, where the inhomogeneity probed by the input beam changes with \(l\) due to the change in the spatial structure of the beam.
The corresponding momentum-space entropies are quantified in Figure 7. Even though in the space domain the amount of heterogeneity probed by the input PV beam remains the same for all \(l\) values, its momentum-space (Fourier-domain) span may change, as the PV beam transforms into a Bessel-Gauss beam in the Fourier domain. Despite this, what we observe is that with variation in \(l\) and in the disorder parameter \(\epsilon\), the change in momentum-space entropy is rather minimal. This indicates that the influence of the topological charge \(l\) alone on the statistics of the momentum-domain spin-split modes, and on their corresponding momentum-domain entropy, is rather weak. For the LG beam, the momentum-domain entropy of the spin-split modes is therefore primarily dominated by the change in the spatial structure of the beam, i.e. the effective spot size, rather than its topological charge. This is further confirmed by the simulation in Fig 8.
Figure 5: Momentum space entropy (\(H\)) variation of LG beams for different effective beam waists in the spatial domain, \(\omega_{0}\)=2.5mm, 1.88mm, 1.5mm, 1.25mm, 1.07mm; for (a) \(l\)=5 and (b) \(l\)=7, with the disorder parameter \(\epsilon\) varying from 0 to 1. The LG beam is passed through SLM2, which projects the Figure 1(a) random media. Results are shown only for RCP polarization, selected using the PSA.
Figure 6: (a) Input perfect vortex beam in the spatial domain, where the spatial structure of the beam remains the same for different topological charges \(l\)=0, 5, 10. (b) The corresponding momentum-space intensity distribution after the input perfect vortex beam with \(l=3\), 5, 7 passes through the inhomogeneous anisotropic media (projected on SLM2) with disorder parameter \(\epsilon\) values = 0, 0.5, 1.
Fig. 8 shows the variation of the momentum-space entropy with the input disorder parameter for LG beams of different input spot sizes. Fig. 8 (a) corresponds to an input beam waist of 10 mm and Fig. 8 (b) to an input beam waist of 2.5 mm. The momentum-space entropy is much higher for the smaller input spot size, 2.5 mm (Fig. 8 (b)), than for the larger input spot size, 10 mm (Fig. 8 (a)). It is known that the effective spot size of an LG beam decreases as the \(l\) value decreases. Here, we clearly see a rise in the threshold value as the overall input spot size increases. As discussed before, a smaller input spot size probes a smaller area of inhomogeneity but corresponds to a larger momentum-domain span. Thus, we conclude from Fig. 8 that the role of the input spatial structure of the beam is more dominant than that of its topological charge \(l\).
|
2305.19928 | Supplementary Features of BiLSTM for Enhanced Sequence Labeling | Sequence labeling tasks require the computation of sentence representations
for each word within a given sentence. A prevalent method incorporates a
Bi-directional Long Short-Term Memory (BiLSTM) layer to enhance the sequence
structure information. However, empirical evidence Li (2020) suggests that the
capacity of BiLSTM to produce sentence representations for sequence labeling
tasks is inherently limited. This limitation primarily results from the
integration of fragments from past and future sentence representations to
formulate a complete sentence representation. In this study, we observed that
the entire sentence representation, found in both the first and last cells of
BiLSTM, can supplement the individual sentence representation of each
cell. Accordingly, we devised a global context mechanism to integrate entire
future and past sentence representations into each cell's sentence
representation within the BiLSTM framework. By incorporating the BERT model
within BiLSTM as a demonstration, and conducting exhaustive experiments on nine
datasets for sequence labeling tasks, including named entity recognition (NER),
part of speech (POS) tagging, and End-to-End Aspect-Based sentiment analysis
(E2E-ABSA), we noted significant improvements in F1 scores and accuracy across
all examined datasets. | Conglei Xu, Kun Shen, Hongguang Sun | 2023-05-31T15:05:25Z | http://arxiv.org/abs/2305.19928v4 | # Supplementary Features of BiLSTM for Enhanced Sequence Labeling
###### Abstract
Sequence labeling tasks require the computation of sentence representations for each word within a given sentence. A prevalent method incorporates a Bi-directional Long Short-Term Memory (BiLSTM) layer to enhance the sequence structure information. However, empirical evidence from Li (2020) suggests that BiLSTM's capacity to produce sentence representations for sequence labeling tasks is inherently limited [1]. This limitation primarily results from the integration of fragments of past and future sentence representations to formulate a complete sentence representation. In this study, we observed that the entire sentence representation, found in both the first and last cells of BiLSTM, can supplement each cell's individual sentence representation. Accordingly, we devised a global context mechanism to integrate the entire future and past sentence representations into each cell's sentence representation within the BiLSTM framework. By incorporating the BERT model with BiLSTM as a demonstration, and conducting exhaustive experiments on nine datasets for sequence labeling tasks--including named entity recognition (NER), part of speech (POS) tagging, and End-to-End Aspect-Based sentiment analysis (E2E-ABSA)--we noted significant improvements in F1 scores and accuracy across all examined datasets.
BiLSTM, BERT, global context, sequence labeling.
## I Introduction
BiLSTM [2] has emerged as a widely embraced neural network for modeling structural information in inputs for sequence labeling tasks, such as NER [3, 4] and POS tagging [5], since its initial application in speech recognition tasks. Recently, with the development of pretrained language models, BiLSTM has been employed as an auxiliary layer to augment sentence representations for sequence labeling [6, 7]. Furthermore, due to its ability to capture sequential information in inputs, BiLSTM is also a popular architecture in other domains. Li (2019) demonstrates the efficacy of combining BERT with BiLSTM [8], which involves the concurrent identification of aspect terms/categories and their corresponding sentiments within a sequence tagging framework. Other studies have shown that BiLSTM can effectively improve the accuracy of heart rate prediction [9] and achieve competitive prediction results on long-term traffic flow forecasting [10].
The issue of shallow connections between consecutive hidden states of BiLSTM has been recognized for some time. Pascanu (2013) proposed the deep transition RNN for language modeling [11], which increases the transition depth between consecutive hidden states for richer representations. Meng and Zhang (2019) capitalized on a deep transition architecture for machine translation [12], while Liu (2019) achieved state-of-the-art results on sequence labeling with a similar architecture [13]. More recently, Li (2020) examined the lack of global sentence information in the inner cells of BiLSTM in the context of NER [1], concluding that this specific limitation cannot be resolved by merely increasing the transition depth between consecutive hidden states. However, previous methods have primarily focused on enhancing the transition depth of BiLSTM and modifying its inner structure, which can impede inference speed and makes them difficult to use in real-world applications.
In this paper, we propose a straightforward global context mechanism designed to integrate global sentence information into the sentence representation of each cell. This mechanism can be conveniently incorporated into the BiLSTM framework. Specifically, we discovered that the entire past (forward) sentence representation, located in the last cell, can serve as a supplementary feature for each future (backward) sentence representation. Conversely, the entire future sentence representation, situated in the first cell, provides the same supplementary feature in reverse. Furthermore, this global mechanism can also be employed independently for certain tasks, separate from BiLSTM.
We assessed this global mechanism on nine datasets from sequence labeling tasks, including E2E-ABSA, POS tagging, and NER. Using BERT with BiLSTM as an example, the mechanism bolsters the model's performance on these tasks without significantly impacting the speed of inference and training. Further experiments were conducted to probe the global mechanism's ability to decipher the relationships between tags. By adding it directly after BERT, improvements in F1 and accuracy scores were noted.
## II Related Work
Pretrained language models: pretrained language models such as BERT [14] and XLNet [15] have delivered state-of-the-art results across a myriad of tasks. An increasing body of literature has introduced BiLSTM as an additional layer for pretrained language models to enrich sentence representations in sequence labeling tasks. For instance, Jie and Lu (2019) and Sarzynska-Wamer (2021) combined BiLSTM-CRF
with ELMo [16, 17], achieving superior performance in NER tasks. Similarly, Xu (2021) applied an advanced BiLSTM with BERT [7], setting a new standard for NER tasks. In the realm of POS tagging, Labrak and Dufour (2022) reached an unprecedented level of success using flair embeddings [18] and BiLSTM [19]. Additionally, X. Li (2019) employed BERT with BiLSTM to substantially augment E2E-ABSA [8].
Deficiency of BiLSTM: Regarding the shallow representations of BiLSTM, Liu (2019) enhanced sentence representations for sequence labeling tasks with a deep RNN transition architecture [13], while Meng and Zhang (2019) leveraged a linear-transformation-enhanced GRU model to significantly improve the BLEU score in machine translation [12]. Further, Li (2020) utilized a simple self-attentive mechanism on BiLSTM outputs to address the lack of complete sentence information in the inner cells [1].
Gate mechanism: Concerning the gate mechanism, a concept well established in LSTM [20], several researchers have utilized it to fuse past contextual and current information. Specifically, H. Chen (2019) and Zeng (2019) improved the sentence representations of CNNs on natural language tasks with a gate mechanism [21, 22]. Moreover, Yuan (2020) and X. Zeng (2016) used it to extract features from different support regions in object detection tasks [23, 24].
## III Model
Our global context mechanism experiments build upon a model that comprises a BERT embedding layer with a downstream BiLSTM layer. We implement the global context mechanism after the BiLSTM layer and compare it with a baseline: the self-attention network [25]. An overview of the model is illustrated in Figure 1.
### _BERT Embedding Layer_
BERT embeddings offer dynamic token representations based on the input sentence, a noteworthy advancement over traditional word embedding methods such as GloVe [26] and Word2Vec [27], which generate a static matrix. Specifically, given an input sequence \(S=\{w_{1},w_{2},\ldots,w_{n}\}\), where n denotes the length of the input sentence, BERT provides its contextualized representation \(Z=\{z_{1},z_{2},\ldots,z_{n}\}\) as follows:
\[Z=BERT(S) \tag{1}\]
### _BiLSTM Layer_
BiLSTM has been a very powerful structure for sequence labeling tasks, owing to its ability to model sentence structure and retain dependencies in long sentences. In this research, we utilize BiLSTM to enhance the sentence representation of each word. Using time step t as an example, BiLSTM generates the sentence representation \(H_{t}\) based on \(Z=\{z_{1},z_{2},\ldots,z_{n}\}\):
\[\overrightarrow{H_{t}}=\overrightarrow{LSTM_{t}}(\overrightarrow{H_{t-1}}, z_{t}) \tag{2}\]
\[\overleftarrow{H_{t}}=\overleftarrow{LSTM_{t}}(\overleftarrow{H_{t+1}}, z_{t}) \tag{3}\]
\[H_{t}=\overrightarrow{H_{t}}\parallel\overleftarrow{H_{t}} \tag{4}\]
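A minimal PyTorch sketch of Eqs. (1)-(4) is shown below; it assumes the HuggingFace transformers API, and the BiLSTM hidden size and checkpoint name are placeholder choices rather than the settings used in the experiments.

```python
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTM(nn.Module):
    """BERT token embeddings Z (Eq. 1) fed through one BiLSTM layer
    (Eqs. 2-4); the output H[t] concatenates the forward and backward
    hidden states."""
    def __init__(self, hidden=256, bert_name="bert-base-cased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              bidirectional=True, batch_first=True)

    def forward(self, input_ids, attention_mask):
        Z = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        H, _ = self.bilstm(Z)   # shape: (batch, n, 2 * hidden)
        return H
```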
Fig. 1: Overview of the model architecture.
### _Global Context Mechanism_
In light of the fact that the entire sentence information is confined to the first and last cells, we amalgamate each cell's sentence representation with the entire sentence representation \(G=\overleftarrow{H_{1}}\parallel\overrightarrow{H_{n}}\) using weights \(i_{H}\) and \(i_{G}\). Figure 2 depicts the structure of the global context mechanism. Given the BiLSTM outputs \(H=\{H_{1},H_{2},\ldots,H_{n}\}\), \(H\in R^{n\times d}\), for the \(t\)-th step we derive \(O_{t}=G\parallel H_{t}\), which the gate mechanism uses to generate \(i_{H}^{t}\) and \(i_{G}^{t}\).
In the gate mechanism, a linear map is first employed to select pertinent features from \(O_{t}\).
\[R_{H}=W_{H}O_{t}+b_{H} \tag{5}\]
\[R_{G}=W_{G}O_{t}+b_{G} \tag{6}\]
where \(W_{H}\) and \(W_{G}\in R^{2d\times d}\); \(R_{H}^{t}\) and \(R_{G}^{t}\) are for the current sentence representation \(H_{t}\) and the global information \(G\), respectively. The weights \(i_{H}^{t}\) and \(i_{G}^{t}\) are then given by a sigmoid function.
\[i_{H}^{t}=sigmoid(R_{H}^{t}) \tag{7}\]
\[i_{G}^{t}=sigmoid(R_{G}^{t}) \tag{8}\]
Finally, \(G\) and \(H_{t}\) are fused using \(i_{H}^{t}\) and \(i_{G}^{t}\).
\[\hat{O_{t}}=i_{H}^{t}\odot H_{t}\parallel i_{G}^{t}\odot G \tag{9}\]
where \(\odot\) denotes the element-wise product.
| Models | Conll2003 F1 | Speed | Wnut2017 F1 | Speed | Weibo F1 | Speed |
| --- | --- | --- | --- | --- | --- | --- |
| BERT | 91.51 | 16.30 | 43.59 | 15.53 | 68.09 | 11.46 |
| BERT-BiLSTM | 91.85 | 15.12 | 46.95 | 14.59 | 68.86 | 10.44 |
| BERT-BiLSTM-context | **91.91** | 14.80 | **48.02** | 14.03 | **69.84** | 10.15 |
| BERT-BiLSTM-attention | 91.19 | 14.66 | 46.39 | 13.77 | 67.83 | 10.18 |

TABLE II: Result on NER.
| Models | Laptop14 F1 | Speed | Rest14 F1 | Speed | Rest15 F1 | Speed | Rest16 F1 | Speed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | 58.49 | 12.87 | 69.75 | 13.61 | 57.07 | 15.37 | 65.95 | 15.81 |
| BERT-BiLSTM | 61.12 | 12.46 | 73.47 | 13.17 | 61.14 | 14.79 | 71.05 | 14.66 |
| BERT-BiLSTM-context | **62.92** | 11.65 | **73.84** | 12.97 | **63.24** | 14.51 | **71.51** | 13.74 |
| BERT-BiLSTM-attention | 59.48 | 11.98 | 72.73 | 12.68 | 60.34 | 13.99 | 69.05 | 13.44 |

TABLE I: Result on E2E-ABSA. Unit of speed is the number of iterations per second.
Fig. 2: Structure of the global context mechanism.
The prediction results are given by:
\[\tilde{O_{t}}=softmax(W_{c}\hat{O_{t}}+b_{c}) \tag{10}\]
where \(W_{c}\in R^{2d\times u}\) and \(u\) is the number of classes.
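A compact sketch of Eqs. (5)-(10) as a PyTorch module follows; this is our illustrative reading, assuming PyTorch's layout for bidirectional LSTM outputs (forward half first, backward half second), with d denoting the full BiLSTM output size.

```python
import torch
import torch.nn as nn

class GlobalContext(nn.Module):
    """Global context mechanism (Eqs. 5-10) over BiLSTM outputs H."""
    def __init__(self, d, num_classes):
        super().__init__()
        self.w_h = nn.Linear(2 * d, d)             # Eq. (5)
        self.w_g = nn.Linear(2 * d, d)             # Eq. (6)
        self.cls = nn.Linear(2 * d, num_classes)   # Eq. (10)

    def forward(self, H):                          # H: (batch, n, d)
        half = H.size(-1) // 2
        # G: backward state of the first cell || forward state of the last cell
        G = torch.cat([H[:, 0, half:], H[:, -1, :half]], dim=-1)
        G = G.unsqueeze(1).expand_as(H)            # broadcast G over all steps
        O = torch.cat([G, H], dim=-1)              # O_t = G || H_t
        i_h = torch.sigmoid(self.w_h(O))           # Eq. (7)
        i_g = torch.sigmoid(self.w_g(O))           # Eq. (8)
        O_hat = torch.cat([i_h * H, i_g * G], dim=-1)  # Eq. (9)
        return self.cls(O_hat).softmax(dim=-1)     # Eq. (10)
```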
### _Self-Attention Network_
Another method for capturing the interaction between past and future contexts at each time step of BiLSTM is a token-level self-attentive mechanism, proposed by Li (2020) [1]. Given the BiLSTM outputs H of a sentence, the model maps each \(H_{i}\in H\) to different subspaces, depending on whether it is being used as a query vector to consult other hidden states. The final representation is crafted by fusing the value vectors according to weights computed from the incoming queries between the key and query vectors. In this study, we employ a multi-head self-attention network [25].
Formally, assuming the number of heads is m, for head i the attention weight matrix \(\alpha^{i}\) and context matrix \(C^{i}\) are computed as follows:
\[\alpha^{i}=softmax(\frac{HW^{qi}(HW^{ki})^{T}}{\sqrt{d_{C}}}) \tag{11}\]
with the context matrix given by \(C^{i}=\alpha^{i}(HW^{vi})\), where \(W^{qi}\), \(W^{vi}\), \(W^{ki}\in R^{d_{h}\times d_{c}}\) are trainable projection matrices.
Taking time step \(t\) as an example, the context matrix \(C_{t}=C_{t}^{1}\parallel C_{t}^{2}\parallel\cdots\parallel C_{t}^{m}\) and the BiLSTM output \(H_{t}\) are considered together for classification.
\[\hat{O_{t}}=H_{t}+C_{t} \tag{12}\]
\[\tilde{O_{t}}=softmax(W_{c}\hat{O_{t}}) \tag{13}\]
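For comparison, the baseline of Eqs. (11)-(13) can be approximated with PyTorch's built-in multi-head attention, as in the sketch below; this is an illustrative stand-in, not the exact implementation used in the experiments.

```python
import torch.nn as nn

class SelfAttentionBaseline(nn.Module):
    """Token-level multi-head self-attention over BiLSTM outputs H,
    with a residual connection and softmax classifier (Eqs. 11-13)."""
    def __init__(self, d, num_heads, num_classes):
        super().__init__()
        # d must be divisible by num_heads
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.cls = nn.Linear(d, num_classes)

    def forward(self, H):              # H: (batch, n, d)
        C, _ = self.attn(H, H, H)      # per-token context matrix C_t
        return self.cls(H + C).softmax(dim=-1)   # Eqs. (12)-(13)
```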
## IV Experiments
This section presents the results of the global context mechanism applied to nine datasets from the E2E-ABSA, NER, and POS tasks. We selected the best models based on the results on the development sets, using early stopping. The BERT-base model, as implemented in the HuggingFace package, was used in this study. Detailed information about the learning rates can be found in the appendix. All experiments were conducted on a server equipped with an Nvidia A10 GPU.
### _End-to-End Aspect-Based sentiment analysis_
E2E-ABSA aims to detect aspect terms and their corresponding sentiments jointly. The possible tag values include \(B-\{POS,NEG,NEU\}\), \(I-\{POS,NEG,NEU\}\), \(E-\{POS,NEG,NEU\}\), \(S-\{POS,NEG,NEU\}\), and \(O\). These tags denote the beginning of an aspect, the inside of an aspect, the end of an aspect, and a single-word aspect, with positive, negative, or neutral sentiment respectively, as well as tokens outside any aspect.
Experiments are conducted on two review datasets originating from SemEval [28, 29, 30], re-prepared by Li and Bing (2019) [31] as a sequence labeling task. For Laptop14 and Restaurant15 (Rest15), a batch size of 16 is employed; for Restaurant16 and Restaurant14, a batch size of 32 is utilized. AdamW is used for gradient updates on all four datasets.
As shown in Table I, this mechanism attains \(0.37\%\), \(2.1\%\), \(0.46\%\), and \(1.8\%\) absolute F1 improvements on Restaurant14, Restaurant15, Restaurant16, and Laptop14 respectively, while requiring minimal computing resources. The results also suggest that fusing information with a self-attention network does not benefit BiLSTM after BERT on E2E-ABSA.
| Models | Rest14 | Rest15 | Rest16 | Laptop14 | Conll2003 (NER) | Wnut2017 | Weibo | Conll2003 (POS) | UD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-BiLSTM | 73.47 | 61.44 | 71.06 | 61.12 | 91.85 | 46.95 | 68.86 | 95.66 | 96.90 |
| BERT-BiLSTM-context | 73.84 | 63.24 | 71.51 | 62.92 | 91.91 | 48.02 | 69.84 | 95.62 | 97.01 |
| BERT-BiLSTM-\(con\vec{text}\) | 72.07 | 61.90 | 67.88 | 60.97 | 91.21 | 48.08 | 68.47 | 95.50 | 96.76 |

TABLE III: Comparison between the directions for fusing sentence representations.
| Models | Conll2003 | UD |
| --- | --- | --- |
| BERT | 95.56 | 96.85 |
| BERT-BiLSTM | 95.66 | 95.90 |
| BERT-BiLSTM-context | 95.62 | 97.01 |
| BERT-BiLSTM-attention | 95.30 | **97.09** |
| BERT-context | **95.67** | 96.90 |

TABLE VI: Result on POS tagging.
| Models | Rest14 | Rest15 | Rest16 | Laptop14 | Conll2003 (NER) | Wnut2017 | Weibo | Conll2003 (POS) | UD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-BiLSTM | 73.47 | 61.44 | 71.06 | 61.12 | 91.85 | 46.95 | 68.86 | 95.66 | 96.90 |
| BERT-BiLSTM-context | 73.84 | 63.24 | 71.51 | 62.92 | 91.91 | 48.02 | 69.84 | 95.62 | 97.01 |
| BERT-BiLSTM-\(con\hat{text}\) | 72.52 | 59.20 | 69.43 | 59.20 | 91.24 | 46.92 | 69.46 | 95.53 | 96.92 |

TABLE IV: Comparison between fusing sentence representations with weights and adding them directly.
### _Named Entity Recognition_
NER aims to predict the entity type of each token, which can include Person, Organization, Location, etc. In this section, we utilize two English datasets, Conll2003 [32] and Wnut2017 [33], and a Chinese dataset, Weibo [34]. All three datasets employ a batch size of 16 and the AdamW optimizer.
As shown in Table II, the global context mechanism increases F1 scores across all three datasets, whereas the self-attention method does not provide an F1 improvement.
### _Part-of-speech Tagging_
Part-of-speech tagging involves marking each word with its part of speech, which can include noun, verb, adjective, adverb, etc., in English.
We design experiments for POS tagging on Universal Dependencies (UD) v2.11 [35] and Conll2003, using a batch size of 16 and the AdamW optimizer.
Table VI indicates that the global context mechanism also increases accuracy in POS tagging. When we add the global context mechanism directly after BERT, the results reveal that it achieves the same accuracy as BiLSTM, with higher speed, on Conll2003 POS tagging.
To investigate this further, we compare against the conditional random field (CRF) for POS tagging on pure BiLSTM. Table V shows that the global context mechanism leads to accuracy improvements and is much faster than the CRF. This suggests that the global context mechanism can serve as a substitute for the CRF when a trade-off between accuracy and speed is required.
## V Ablation Study
An ablation study was performed to assess the effects of the weights used for fusing sentence representations and of the direction of the sentence representations.
### _The directions of sentence representations_
We denote the variant in which the forward and backward sentence representations are combined with their respective (same-direction) global sentence representations as \(con\vec{text}\). As demonstrated in Table III, for these three types of sequence labeling tasks we observe that fusing sentence representations in the same direction only proves effective for Wnut2017 and Restaurant15. From this, we infer that the positional information of the two directions may override each other when the sentence representations are fused.
### _Weights_
To evaluate the effectiveness of fusing sentence representations using weights, we introduce a comparison in which the sentence representations are added directly, denoted as \(con\hat{text}\).
Table IV shows that the F1 scores obtained by adding sentence representations directly are inferior to those of BERT-BiLSTM. This suggests that the weight mechanism plays a crucial role in fusing sentence representations.
### _Without BiLSTM_
As mentioned in the POS tagging experiments section, the global context mechanism can capture the relationships between tags for POS tagging. We conduct further experiments on this for the other tasks by adding the context mechanism directly after BERT. The results, presented in Table VII, indicate that the global context mechanism improves F1 and accuracy for most tasks while being significantly faster than BERT with BiLSTM.
### _Case studies_
The Chinese language is characterized by a significant degree of polysemy, where the interpretation of each character depends heavily on the context. We analyzed the predictive outputs of the model both with and without the context mechanism. The results suggest that the global context mechanism significantly assists the model in understanding the polysemy inherent in Chinese characters. A representative example from the test dataset is provided in Table VIII, with errors highlighted in yellow. The Chinese characters '\(\mathcal{\negmedown}\)' and '\(\mathcal{\negmedown}\)' can either refer to an individual or signify a title, depending on the context. Without the global context mechanism, the model struggles to discern whether these characters refer to a person or denote a title. However, with the aid of the global context mechanism, the model correctly assigns the relevant types for '\(\mathcal{\negmedown}\)'.
### _Visualization_
The weights \(i_{H}\) and \(i_{G}\), corresponding to the BiLSTM outputs and the global context information respectively, are segregated into six divisions at intervals of 100, followed by the construction of scatter graphs. Upon examination, we find that a significant portion of positions in \(i_{H}\) possess weights approaching one, whereas \(i_{G}\) contains a higher number of positions with smaller weights. Additionally, about a quarter of the positions in \(i_{H}\) and \(i_{G}\) showcase similar values. We have chosen the Chinese character '\(\mathcal{\negmedown}\)' from Table VIII for visualization, as depicted in Figure 3.
## VI Conclusion
In this research, we discovered that the entire future and past sentence representations can supplement the past and future sentence representations of each cell, respectively, in BiLSTM. Based on this, we introduced a straightforward global context mechanism designed to compensate for the absence of complete sentence information in the intermediate
| Models | Rest14 | Rest15 | Rest16 | Laptop14 | Conll2003 (NER) | Wnut2017 | Weibo | Conll2003 (POS) | UD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | 69.75 | 57.07 | 65.95 | 58.49 | 91.51 | 43.59 | 68.09 | 95.56 | 96.85 |
| BERT-context | 69.99 | 57.17 | 68.17 | 60.42 | 91.72 | 45.16 | 67.31 | 95.67 | 96.90 |

TABLE VII: Experiments on adding the global context mechanism directly.
cells of BiLSTM. This methodology can be conveniently incorporated with BiLSTM in practical applications. Empirical evaluations conducted on three distinct tasks, namely named entity recognition (NER), part-of-speech (POS) tagging, and End-to-End Aspect-Based sentiment analysis (E2E-ABSA), illustrate marked enhancements in both F1 score and accuracy across all these tasks, while maintaining the speed of training and inference. Moreover, the experimental results suggest that the global context mechanism effectively captures the relational information between tags.
|
2302.14273 | QP Chaser: Polynomial Trajectory Generation for Autonomous Aerial
Tracking | Maintaining the visibility of the targets is one of the major objectives of
aerial tracking applications. This paper proposes QP Chaser, a trajectory
planning pipeline that can enhance the visibility of single- and dual-target in
both static and dynamic environments. As the name suggests, the proposed
planner generates a target-visible trajectory via quadratic programming
problems. First, the predictor forecasts the reachable sets of moving objects
with a sample-and-check strategy considering obstacles. Subsequently, the
trajectory planner reinforces the visibility of targets with consideration of
1) path topology and 2) reachable sets of targets and obstacles. We define a
target-visible region (TVR) with topology analysis of not only static obstacles
but also dynamic obstacles, and it reflects reachable sets of moving targets
and obstacles to maintain the whole body of the target within the camera image
robustly and ceaselessly. The online performance of the proposed planner is
validated in multiple scenarios, including high-fidelity simulations and
real-world experiments. | Yunwoo Lee, Jungwon Park, Seungwoo Jung, Boseong Jeon, Dahyun Oh, H. Jin Kim | 2023-02-28T03:12:51Z | http://arxiv.org/abs/2302.14273v1 | # QP Chaser: Polynomial Trajectory Generation for Autonomous Aerial Tracking
###### Abstract
Maintaining the visibility of the targets is one of the major objectives of aerial tracking applications. This paper proposes QP Chaser, a trajectory planning pipeline that can enhance the visibility of single- and dual-target in both static and dynamic environments. As the name suggests, the proposed planner generates a target-visible trajectory via quadratic programming problems. First, the predictor forecasts the reachable sets of moving objects with a sample-and-check strategy considering obstacles. Subsequently, the trajectory planner reinforces the visibility of targets with consideration of 1) path topology and 2) reachable sets of targets and obstacles. We define a target-visible region (TVR) with topology analysis of not only static obstacles but also dynamic obstacles, and it reflects reachable sets of moving targets and obstacles to maintain the whole body of the target within the camera image robustly and ceaselessly. The online performance of the proposed planner is validated in multiple scenarios, including high-fidelity simulations and real-world experiments.
_Note to Practitioners--_This paper proposes an aerial target tracking framework that can be adopted in single- and dual-target scenarios. Existing approaches for maintaining visibility in tracking missions rarely account for dynamic objects other than a single target. This paper suggests predicting the reachable areas of moving objects and generating a target-visible trajectory, both computed in real-time. Since the proposed planner considers the areas that moving objects can reach, the generated trajectory of the drone is robust to prediction inaccuracy in terms of target visibility. Also, the planning scheme can be extended to multiple-target scenarios.
Aerial tracking, visual servoing, mobile robot path-planning, vision-based multi-rotor
## I Introduction
Multi-rotors aided by vision sensors are widely employed in both academia [1, 2, 3, 4] and industry [5, 6, 7], due to the high maneuverability and compactness of the platform. The main applications of vision-aided multi-rotors are surveillance [8] and cinematography [9], and autonomous target chasing is essential in such tasks. In target-chasing missions, various situations exist in which a single drone has to handle both single- and multi-target scenarios without occlusion. For example, in film shooting, there are scenes in which one or several actors are shot in one take, without being visually obstructed by structures on the shooting set. Moreover, the occlusion of main actors by background actors is generally prohibited. Therefore, a tracking strategy that can handle both single and multiple targets among static and dynamic obstacles can benefit various chasing scenarios.
Despite great attention to aerial chasing during the recent decade, aerial target tracking remains a challenging task. First, it is difficult to forecast accurate future paths of dynamic objects, due to perceptual errors from sensors and unreliable estimation of the intentions of multiple moving objects in obstacle environments. Also, the motion generator in the chasing system ought to address the visibility of targets, collision avoidance against obstacles, and the dynamic limits of the drone simultaneously, and it should be executed in real-time.
To solve this problem, this paper proposes a target-chasing strategy that enhances the visibility of single and dual targets in both static and dynamic environments. The proposed method consists of two parts: a _prediction problem_ and a _chasing problem_. In the former, to cope with the uncertain future trajectories of the targets, we propose computing the reachable sets of the moving objects while considering the obstacle configuration. In the latter, we propose the generation of a robustly target-visible trajectory that also accounts for safety from obstacles and dynamical feasibility. The key idea of the proposed trajectory planning is the target-visible region (TVR), a time-dependent spatial set that maintains the visibility of targets. Two-stage visibility consideration in
Fig. 1: Target tracking mission in a realistic situation. **(a)**: A target (red) moves in an indoor arena, and a dynamic obstacle (green) interrupts the visibility of the target. **(b)**: Two targets (red and green) move among stacked bins (grey). A chaser drone (blue) generates a trajectory that keeps the targets within the camera view consistently.
2309.09636 | A new test of gravity -- II: Application of marked correlation functions
to luminous red galaxy samples | We apply the marked correlation function test proposed by Armijo et al.
(Paper I) to samples of luminous red galaxies (LRGs) from the final data
release of the Sloan Digital Sky Survey (SDSS) III. The test assigns a
density-dependent mark to galaxies in the estimation of the projected marked
correlation function. Two gravity models are compared: general relativity (GR)
and $f(R)$ gravity. We build mock catalogues which, by construction, reproduce
the measured galaxy number density and two-point correlation function of the
LRG samples, using the halo occupation distribution model (HOD). A range of HOD
models give acceptable fits to the observational constraints, and this
uncertainty is fed through to the error in the predicted marked correlation
functions. The uncertainty from the HOD modelling is comparable to the sample
variance for the SDSS-III LRG samples. Our analysis shows that current galaxy
catalogues are too small for the test to distinguish a popular $f(R)$ model
from GR. However, upcoming surveys with a better measured galaxy number density
and smaller errors on the two-point correlation function, or a better
understanding of galaxy formation, may allow our method to distinguish between
viable gravity models. | Joaquin Armijo, Carlton M. Baugh, Peder Norberg, Nelson D. Padilla | 2023-09-18T10:13:00Z | http://arxiv.org/abs/2309.09636v2 | # A new test of gravity - II: Application to luminous red galaxy samples
###### Abstract
We apply the marked correlation function test proposed by Armijo et al. (Paper I) to samples of luminous red galaxies (LRGs) from the final data release of the Sloan Digital Sky Survey (SDSS) III. The test assigns a density dependent mark to galaxies in the estimation of the projected marked correlation function. Two gravity models are compared: general relativity (GR) and \(f(R)\) gravity. We build mock catalogues which, by construction, reproduce the measured galaxy number density and two point correlation function of the LRG samples, using the halo occupation distribution model (HOD). A range of HOD models give acceptable fits to the observational constraints and this uncertainty is fed through to the error on the predicted marked correlation functions. The uncertainty from the HOD modelling is comparable to the sample variance for the SDSS-III LRG samples. Our analysis shows that current galaxy catalogues are too small for the test to distinguish a popular \(f(R)\) model from GR. However, upcoming surveys with a better measured galaxy number density and smaller errors on the two point correlation function, or a better understanding of galaxy formation, may allow our method to distinguish between viable gravity models.
keywords: cosmology: observations - large-scale structure.
## 1 Introduction
After the discovery of the accelerating cosmic expansion, \(\Lambda\)CDM became the standard cosmological model (Riess et al., 1998; Perlmutter et al., 1999). Nevertheless, the cosmological constant in this model remains unappealing from a theoretical perspective, which has motivated efforts to look at gravity models beyond general relativity (GR) to explain the accelerated cosmic expansion (Joyce et al., 2016). Recently, theories that modify gravity by adding variations of a scalar field to the metric Lagrangian have been studied intensively (Clifton et al., 2012). However, some of these modified gravity (MG) models have been ruled out by the detection of gravitational waves and their optical counterparts, which were found to propagate at the same speed (Creminelli and Vernizzi, 2017; Ezquiaga and Zumalacarregui, 2017; Baker et al., 2017). Such tight constraints illustrate that only a restricted range of modified gravity models remain viable, and demonstrate the need to devise new probes of gravity (Heymans and Zhao, 2018; Baker et al., 2021; Arai et al., 2023).
A model that is a simple extension of GR is the \(f(R)\) model of gravity (De Felice and Tsujikawa, 2010), in which the Ricci scalar, \(R\), is perturbed in the Einstein-Hilbert action by the addition of a function \(f(R)\). This modification acts to enhance gravity by producing an effective 'fifth force' that reshapes the distribution of matter on certain scales. However, the \(f(R)\) model includes a screening mechanism that hides this new physics on scales where GR works well (Khoury and Weltman, 2004), allowing the model to satisfy solar system constraints. This elusive fifth force has to be searched for on cosmological scales, where gravity is the dominant force shaping the formation of large-scale structure. Currently, constraints on the amplitude of the fifth force are obtained from observations of the abundance of massive clusters of galaxies (Cataneo et al., 2015) and weak lensing peak statistics (Liu et al., 2016); modelling forecasts of these probes for next-generation surveys have helped to place further constraints on MG models (Liu et al., 2021; Harnois-Deraps et al., 2022).
This paper is the second in a series about a new test of gravity which uses the marked correlation function. The original idea was proposed by White (2016), who suggested using a mark based on the local density of a galaxy to compute the marked correlation function, with the aim of using this to distinguish between gravity models. This idea was applied in simulations of different gravity models by Armijo et al. (2018) and Hernandez-Aguayo et al. (2018). In Paper I, we introduced a pipeline to apply the marked correlation function as a diagnostic of gravity, in which a halo occupation distribution (HOD) model was used to populate \(N\)-body simulations of different gravity models with galaxies. A key step in our analysis was the construction of mock catalogues which match the available observational constraints, namely the unweighted clustering of galaxies and their abundance, in all of the gravity models to be tested. This step adds an important contribution to the error budget on the predicted marked correlation function, which, as we show later, can be comparable to the sample variance that results from the volume probed. In Paper II we describe the application of our method to current large-scale galaxy catalogues, discussing the properties of the samples studied in more detail than in Paper I.
Other studies have investigated using the marked correlation function as a probe of gravity. Satpathy et al. (2019) estimated the marked correlation function for SDSS-III BOSS galaxies using the LOWZ sample. These authors found that the LOWZ measurements agreed with simulations of GR-\(\Lambda\)CDM in redshift space on scales between \(6<s/(\text{Mpc}~{}h^{-1})<69\). Their analysis is restricted to these scales due to the challenge of modelling redshift space distortions (though see Cuesta-Lazaro et al. 2020 and Ruan et al. 2022 for recent improvements that extend the modelling down to smaller scales). Armijo et al. (2018) showed that the differences between GR and \(f(R)\) gravity are stronger on smaller scales, \(r<2\) Mpc \(h^{-1}\), in real space, which still need to be tested.
The structure of this paper is as follows. We describe the data, the luminous red galaxy (LRG) samples from SDSS-III BOSS DR12, in Section 2. Section 3 outlines the estimation of the marked correlation function. In Section 4 we present the measured marked correlation functions for the LOWZ and CMASS samples, and discuss how well these results agree with the mock catalogues made from the GR and \(f(R)\) simulations, considering the various sources of error. In Section 5 we consider the implications of these results and speculate on how future observations and improvements in modelling could make the constraints on gravity models from this test more competitive. Note that the \(f(R)\) gravity model was outlined in Section 2 of Paper I, and the simulations used here, along with the construction of the mock catalogues, were described in Section 3 of the same paper.
## 2 Data
We use the LRG samples from the Baryon Oscillation Spectroscopic Survey (BOSS) (Eisenstein et al., 2011; Dawson et al., 2013), which is part of the SDSS-III program twelfth data release (DR12) (Alam et al., 2015). The LRGs are divided into two samples with different photometric selections that yield galaxies separated in redshift: LOWZ, which contains LRGs over the redshift range \(0.10<z<0.43\), and CMASS, which predominantly targets galaxies in the redshift interval \(0.43<z<0.70\). We decided to use only the NGC region of both the LOWZ and CMASS samples, instead of the full NGC+SGC areas, for practical convenience: as these patches correspond to different areas on the sky, we would need to consider them as different surveys, with different photometric properties and potentially different systematic errors. Furthermore, the NGC region covers twice the solid angle of the SGC, and so dominates the pair counts in clustering estimates. For further simplicity of analysis we decided to use two subsamples extracted from LOWZ and CMASS which are defined in narrow redshift ranges. For LOWZ we limited the selection to redshifts \(0.240<z<0.360\) and for CMASS to redshifts \(0.474<z<0.528\). This allows us to perform our analysis with two samples of similar volume, one of which has a larger number density. Also, by restricting the redshift range in this way, the variation in the number density of galaxies across each sample is greatly reduced. The catalogues are fully described in Reid et al. (2016), where further details of the galaxy selection and the use of the resulting LRG samples for LSS studies are presented.
### Galaxy number density
As mentioned above, we select subsamples of the LOWZ and CMASS catalogues in narrower redshift ranges, to obtain samples for which the number density, \(n(z)\), varies little with redshift compared with the full samples. This allows us to treat each data sample as having a constant number density, which simplifies the clustering analysis. Fig. 1 shows the dependence of the LRG number density, \(n(z)\), on redshift, \(z\), after applying the photometric selection in the original LOWZ and CMASS samples. The local variation in \(n(z)\) is due to large-scale structure. If we did not restrict the redshift interval studied in this way, we would introduce new dependencies into the properties that depend on the number density (e.g. the weight assigned to each galaxy) when we compute the marked correlation function. To avoid this problem, we define the number density of the survey to be the number of galaxies divided by the total volume, \(n_{\text{obs}}=N_{\text{gal}}/V_{s}\). Using a more restricted volume for both samples means that there is less variation in number density, which in turn reduces the error when computing the clustering and marked clustering. The dashed lines in Fig. 1 show the redshift limits of these new subsamples. These additional redshift selections result in samples with roughly uniform number densities over the redshift range being considered. We can also compare these new samples with simulations of roughly the same volume when we create the mock catalogues. With these additional redshift selections and the definition of number density given above, the galaxy number density of the LOWZ subsample is \(n_{\text{g}}=3.097\times 10^{-4}~{}h^{3}\,\text{Mpc}^{-3}\), whereas for CMASS the value is 21 per cent higher, \(n_{\text{g}}=3.761\times 10^{-4}~{}h^{3}\,\text{Mpc}^{-3}\). This allows us to evaluate the marked correlation function analysis for samples with different number densities.
### Galaxy-galaxy two-point correlation function
Once we have selected the new restricted redshift ranges of the subsamples, the next step is to estimate the clustering of galaxies on different scales. The two-point correlation function can be computed as the excess probability of finding a pair of galaxies at a given separation, compared with the number of pairs expected for a random distribution of points. Throughout this study, we measure the clustering
Figure 1: The galaxy number density \(n(z)\) as a function of redshift \(z\) for the BOSS DR12 NGC data. The LOWZ (black) and CMASS (gray) samples have different selection functions, which lead to different curves for \(n(z)\). Over the redshift range shown, the number density varies strongly for each sample. We also plot the scaled number density of the random galaxy catalogue (red) from Reid et al. (2016), used for clustering analyses, and the subsample redshift selections used in this study: LOWZ \(0.240<z<0.360\) (blue dashed lines) and CMASS \(0.474<z<0.528\) (light blue dashed lines).
using the projected correlation function \(w_{\rm p}\), which is an integral over the two-point correlation function \(\xi(r_{\rm p},\pi)\), binned in the projected perpendicular separation, \(r_{\rm p}\), and in the separation parallel to the line of sight, \(\pi\). The integral of \(\xi(r_{\rm p},\pi)\) is taken over the line-of-sight separation \(\pi\). Clustering measurements as a function of the perpendicular distance \(r_{\rm p}\) can be considered as being in real space (i.e. free from redshift space distortions) in the distant-observer approximation (Davis & Peebles, 1983). We take this approach instead of using the redshift space two-point correlation function \(\xi(s)\) to avoid the influence of small-scale redshift space distortions, which can complicate the prediction of the marked correlation function on such scales. These issues were highlighted by Satpathy et al. (2019), in which the marked correlation function of LOWZ is presented in redshift space for pair separations in the range \(0.5<s/({\rm Mpc}~{}h^{-1})<69\). These authors concluded that their results are restricted to these scales by the limited accuracy with which the clustering in redshift space can be modelled on small scales (though for recent improvements in this modelling see Cuesta-Lazaro et al. 2020 and Ruan et al. 2022). To calculate the projected correlation function and obtain the clustering signal in real space we integrate \(\xi(r_{\rm p},\pi)\) in the \(\pi\)-direction:
\[\frac{w_{\rm p}}{r_{\rm p}}=\frac{2}{r_{\rm p}}\int_{0}^{\infty}\xi(r_{\rm p},\pi)\mathrm{d}\pi. \tag{1}\]
As we do not solve this integral analytically, we sum \(\xi(r_{p},\pi)\) in bins up to \(\pi_{\rm max}\), which is chosen so that the integral converges to a stable value. Using the correlation function on scales larger than \(\pi_{\rm max}\) tends to add noise to the estimate, depending on the details of the galaxy sample. Considering the range of scales we are interested in, we choose \(\pi_{\rm max}=80h^{-1}\) Mpc, as adopted in Parejko et al. (2013) for the LOWZ data sample. In Fig. 2 we plot the results for the projected correlation function as a function of the separation perpendicular to the line of sight, \(r_{\rm p}\), on scales between \(0.5<r_{\rm p}/(h^{-1}{\rm Mpc})<50\) for both the LOWZ and CMASS subsamples. The correlation functions show similar features, with a small offset due to the different number densities of the subsamples and because the samples probe galaxies with different bias factors at different redshifts. We note that the curves cross one another at \(r_{\rm p}=7\,h^{-1}\,{\rm Mpc}\), which can be attributed to the different slopes found for the correlation functions of the LOWZ and CMASS galaxies over the range \(2<r_{\rm p}/(h^{-1}{\rm Mpc})<10\). This could be a reflection of intrinsic differences between LOWZ and CMASS galaxies, with CMASS galaxies having a broader colour selection (Tojeiro et al., 2012). We use the jackknife re-sampling method to compute the uncertainties on the measurements of \(w_{\rm p}\) (e.g. Norberg et al., 2009). These calculations can be compared in Fig. 2 with independent estimates, such as the measurements from Singh et al. (2021), in which \(w_{\rm p}\) is estimated for the LOWZ and CMASS samples as part of these authors' study of intrinsic alignments. In Singh et al. (2021) \(w_{\rm p}\) is calculated using the full redshift ranges of the LOWZ and CMASS samples, with \(\pi_{\rm max}=100h^{-1}\) Mpc (see their Fig. 4). The different set-up used in this study, compared to that used by Singh et al. (2021), can explain the small differences between our results. The broader redshift ranges used by Singh et al. mean a larger surveyed volume, in particular for CMASS (a factor of 6 in volume), which has an impact on the estimation of the uncertainties in \(w_{\rm p}\), these being approximately 40 per cent smaller in their study.
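For concreteness, the binned sum approximating equation (1) can be sketched as below; the function name and array layout are our own choices, and a simple Riemann sum over the \(\pi\) bins is assumed.

```python
import numpy as np

def projected_wp(xi, pi_edges, pi_max=80.0):
    """Approximate w_p(r_p) (equation 1) by summing binned xi(r_p, pi)
    along the line of sight out to pi_max (in Mpc/h).
    xi: shape (n_rp, n_pi); pi_edges: bin edges of length n_pi + 1."""
    dpi = np.diff(pi_edges)
    keep = pi_edges[1:] <= pi_max   # keep only bins fully below pi_max
    return 2.0 * np.sum(xi[:, keep] * dpi[keep], axis=1)
```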
## 3 Marked correlation function
We calculate the marked correlation function of the LOWZ and CMASS samples using marks derived from estimates of the local density. We use the method developed in Armijo et al. (2023), in which the marked correlation function is estimated in projection (see Section 5 of Paper I). To compute the marked correlation function we use the tworcg code to calculate \(w_{\rm p}(r_{\rm p})\) for the data and mock catalogues; this code supports estimators that use weighted pair counts. The code can also efficiently calculate jackknife errors in a _single_ loop over the galaxy pairs. To compute the marks based on the galaxy local density we calculate 2D Voronoi tessellations after dividing each sample into several redshift slices. In the case of the LOWZ subsample, defined between \(0.24<z<0.36\), we create 8 redshift slices with a mean thickness of \(\Delta\bar{Z}=38.42\,h^{-1}\,{\rm Mpc}\), whereas for CMASS, 4 slices are defined with a mean thickness of \(\Delta\bar{Z}=30.72\,h^{-1}\) Mpc. The projection over \(\Delta\bar{Z}\) is the only smoothing applied to the sample, besides the Voronoi tessellation itself. The slightly smaller slice thickness adopted for CMASS was chosen so that \(\bar{V}\), the mean volume of a Voronoi cell, is exactly the same as in the simulations, given the higher galaxy number density of the CMASS sample compared to LOWZ. To construct tessellations over the irregular boundary of the survey angular mask, we embed a random sample within a rectangular region covering the survey edges. This results in any holes left by the mask being flagged as very low-density regions during the tessellation step. The only requirement for this random sample wrapping around the survey is that it should oversample the observed \(n(z)\) by a large factor. We select this factor to be at least 10 times larger than the \(n(z)\) of the
Figure 2: The projected two-point correlation function \(w_{\rm p}\) as a function of the projected perpendicular pair separation \(r_{\rm p}\) for BOSS DR12 NGC. The correlation function is measured from the selected subsamples of LOWZ (black dots) and CMASS (gray dots). Error bars are estimated using jackknife resampling over 100 jackknife regions. Calculations of \(w_{\rm p}\) for GR mock catalogues at \(z=0.3\) (black line) and \(z=0.5\) (gray line) are also shown. We compare our results with those from Singh et al. (2015), where \(w_{\rm p}\) is also calculated for the LOWZ (light blue circles) and CMASS (light red circles) samples over a much wider range of redshifts in each case.
galaxies, to make sure that the resulting marked correlation function converges to a stable value. The mark scheme is equivalent to the one presented in Satpathy et al. (2019), where the marks based on the local density definition are combined with the observational weights when computing the correlation function. We extend the analysis of Satpathy et al. by making measurements for the CMASS sample as well as for LOWZ.
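To make the mark construction concrete, the sketch below assigns density-based marks from a 2D Voronoi tessellation of one redshift slice (density proportional to the inverse cell area) and forms the marked correlation function as the ratio of mark-weighted to unweighted pair counts, \(\mathcal{M}(r)=WW(r)/(\bar{m}^{2}DD(r))\). The raw density ratio is used as the mark purely for illustration; the actual mark transform of Paper I is not reproduced here, and unbounded boundary cells are simply assigned the mean mark.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull, distance

def voronoi_density_marks(xy):
    """Density marks from a 2D Voronoi tessellation of one redshift slice:
    local density ~ 1 / cell area. Unbounded boundary cells, which would
    normally be handled by the wrapping random sample, get the mean mark."""
    vor = Voronoi(xy)
    area = np.full(len(xy), np.nan)
    for i, ireg in enumerate(vor.point_region):
        verts = vor.regions[ireg]
        if -1 not in verts and len(verts) > 2:          # bounded cells only
            area[i] = ConvexHull(vor.vertices[verts]).volume  # in 2D: area
    rho = 1.0 / area                                    # density estimate
    rho /= np.nanmean(rho)                              # rho / rho_bar
    return np.where(np.isfinite(rho), rho, 1.0)

def marked_cf(xy, marks, bins):
    """M(r) = WW(r) / (mbar^2 DD(r)): ratio of mark-weighted to unweighted
    pair counts; empty bins produce NaN/inf and should be masked."""
    d = distance.pdist(xy)                 # pair separations
    i, j = np.triu_indices(len(xy), k=1)   # same pair ordering as pdist
    dd, _ = np.histogram(d, bins=bins)
    ww, _ = np.histogram(d, bins=bins, weights=marks[i] * marks[j])
    return ww / (dd * np.mean(marks) ** 2)

# Example on a toy slice of 500 points in a 100x100 (Mpc/h)^2 box
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, size=(500, 2))
M = marked_cf(xy, voronoi_density_marks(xy), bins=np.logspace(-0.3, 1.5, 12))
```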
## 4 Results
We plot the measurements of the marked correlation function, \(\mathcal{M}(r_{\rm p})\), for the LOWZ and CMASS subsamples in Fig. 3. We compare these measurements with the predictions for the marked correlation function made using the GR and F5 mock catalogues presented in Armijo et al. (2023). The marked correlation function of the LOWZ sample appears to be in agreement with the predictions from both the GR and F5 models over the range of scales tested. Within the uncertainties introduced by the model, both the GR and F5 results overlap on scales \(r_{\rm p}>3\,h^{-1}\,{\rm Mpc}\). On smaller scales, the models show a modest difference, but not one that is statistically significant given the LOWZ errors. For the CMASS sample the results are similar but show somewhat different features: the observational measurements at large projected separations, \(r_{\rm p}>10\,h^{-1}\,{\rm Mpc}\), are again reproduced by both the GR and F5 models. However, in the CMASS case there is also a clear mismatch between models and data on scales \(2<r_{\rm p}/(\,h^{-1}\,{\rm Mpc})<10\). For smaller scales, \(r_{\rm p}<2\,h^{-1}\,{\rm Mpc}\), the data fit the GR model better than F5. Nevertheless, as the model predictions overlap given the errors, the difference remains marginal.
The LOWZ data seem to be a slightly better fit to the GR model, with \(\chi^{2}_{\nu,\,{\rm GR}}=1.13\), in comparison to the F5 model, which has \(\chi^{2}_{\nu,\,{\rm F5}}=1.48\); these reduced \(\chi^{2}\) values are calculated considering the mean of all the valid models shown in Fig. 3.
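A minimal sketch of the goodness-of-fit computation referred to above is given below. It assumes a diagonal covariance built from the measurement errors; the published analysis may instead use the full jackknife covariance matrix.

```python
import numpy as np

def reduced_chi2(M_data, M_model, sigma, n_params=0):
    """Reduced chi^2 of a model marked correlation function against the
    measurement, assuming independent bins (diagonal covariance)."""
    resid = (np.asarray(M_data) - np.asarray(M_model)) / np.asarray(sigma)
    dof = len(M_data) - n_params          # degrees of freedom
    return np.sum(resid ** 2) / dof
```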
### Marked correlation function error analysis
We now compare the size of different contributions to the uncertainty in the calculation of the marked correlation function. For the data, we resample the catalogues to estimate the sample variance using jackknife errors. To quantify the significance of the mark, we also shuffle the weights used in the marked correlation function calculation. In the case of the mocks, in addition to the sources of error listed above, an important contribution to the error estimate comes from the uncertainty in the model used to create the galaxy catalogues, the halo occupation distribution (HOD) model. In Fig. 4, we compare these sources of uncertainty in units of the marked correlation function in each case. The first contribution comes from the sample or cosmic variance, caused by measuring the clustering statistic in a random realization of the underlying cosmology (Gil-Marin et al., 2010). We use jackknife resampling (Shao, 1986), which is a widely used method to estimate the effect of sample variance in clustering studies (e.g. Norberg et al., 2009). The estimated jackknife error bar (red line in Fig. 4) shows a higher fractional uncertainty at small \(r_{\rm p}\) than at large separations, which is expected from previ
Figure 3: The marked correlation function \(\mathcal{M}(r_{\rm p})\) as a function of the projected distance \(r_{\rm p}\) for the BOSS galaxy samples and the results from the respective HOD mock galaxy catalogues from the GR (red) and F5 (blue) simulations. Left panel: \(\mathcal{M}(r_{\rm p})\) measured from LOWZ (black dots) at \(0.24<z<0.36\) compared with the HOD mock catalogues within the \(1\)-\(\sigma\) confidence interval from the MCMC fitting of the two-point clustering and number density. Right: same as left panel, but for the CMASS subsample (grey dots) at \(0.474<z<0.528\). The shaded areas for the models come from selecting the central 68 per cent of all the family of HOD catalogues of each model, GR, F5 at redshift \(z=0.3\) (dark red and dark blue) and \(z=0.5\) (light red and light blue). The error bars on the data are estimated by applying jackknife resampling to 100 subvolumes of the data. In the bottom panels we show the relative residuals using the data measurements as a reference, i.e. we display the ratio \(\mathcal{M}^{\rm mod}/\mathcal{M}^{\rm dat}\), with \(\mathcal{M}^{\rm mod}\) the marked correlation function for each HOD set and \(\mathcal{M}^{\rm dat}\) the marked correlation function of LOWZ and CMASS in the left and right panels respectively.
ous formulations of the marked correlation function (Armijo et al., 2018). Another source of error comes from the estimation of the weights of individual galaxies, which set the significance of the individual marks when the clustering is computed. This can be estimated by shuffling the galaxy marks, assigning a random weight to each galaxy, and recomputing the marked correlation function. The random weights erase any correlation between the marks and the clustering, which results in \(\mathcal{M}=1\) on all scales. We show the dispersion of 100 shuffling realizations for the mock in Fig. 4 (blue line). Finally, we also compare with the uncertainty introduced by the HOD modelling when creating the mock data, which is explained in Armijo et al. (2023). This contribution to the error dominates over the others on small scales, which explains the difference in the size of the error bars on the results from the data and the mocks in Fig. 3. These are the scales on which the marked correlation function has the largest amplitude and hence for which there is the greatest potential to distinguish between different gravity models. Unfortunately, for the LOWZ and CMASS samples we have considered, the error from the range of acceptable HOD models is too large for these datasets to be able to distinguish the F5 gravity model from GR.
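The two data-side error estimates discussed here can be sketched as follows; `estimate_M` stands in for the full marked correlation function measurement restricted to a galaxy selection, and is not part of any specific library.

```python
import numpy as np

def jackknife_errors(region_labels, estimate_M, n_regions=100):
    """Delete-one jackknife over survey regions. `estimate_M(mask)` is a
    stand-in for the full marked-CF measurement restricted to the galaxies
    selected by the boolean `mask`; it returns M(r_p) as an array."""
    samples = np.array([estimate_M(region_labels != r)
                        for r in range(n_regions)])
    mean = samples.mean(axis=0)
    # standard delete-one jackknife variance: (N-1)/N * sum_i (M_i - mean)^2
    var = (n_regions - 1) / n_regions * ((samples - mean) ** 2).sum(axis=0)
    return np.sqrt(var)

def shuffle_errors(marks, estimate_M_with_marks, n_shuffle=100, seed=0):
    """Dispersion of M(r_p) over realizations with randomly permuted marks.
    Permuting erases any mark-clustering correlation, so <M> -> 1 on all
    scales; the scatter quantifies the significance of the measured mark."""
    rng = np.random.default_rng(seed)
    samples = np.array([estimate_M_with_marks(rng.permutation(marks))
                        for _ in range(n_shuffle)])
    return samples.std(axis=0)
```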
## 5 Conclusions and Discussion
We have applied the marked correlation test of gravity introduced in Armijo et al. (2023; Paper I) to currently available large-scale structure samples extracted from the LOWZ and CMASS LRG catalogues. We compared these results with predictions made from simulations of the GR and F5 \(f(R)\) gravity models, including the uncertainties introduced by the HOD modelling used to populate the simulations with galaxies.
The measurements of the marked correlation function for the LOWZ and CMASS samples show a slight tendency to agree with the GR model better than F5. However, this conclusion is not statistically significant once all sources of error are taken into account.
In particular, the HOD modelling used to populate \(N\)-body simulations with galaxies introduces an error that is typically ignored in the assessment of the forecast for a clustering measurement. This error arises because a range of HOD models give acceptable fits to the clustering and galaxy abundance measurements used to constrain the HOD model parameters (see Paper I). In Armijo et al. (2023) we argued that it is essential to fold this HOD model uncertainty through the mock pipeline. Here, we have demonstrated that for the LOWZ and CMASS samples studied, this contribution to the error budget for the marked correlation function dominates on small scales, compared to sample variance and the error from shuffling the marks.
When compared to the LOWZ data (left panel of Fig. 3), the marked correlation function is in agreement with both the GR and F5 simulations within the error bars estimated from the HOD modelling. The same analysis is more complex in the case of the CMASS data (right panel of Fig. 3), as there is a disagreement between the proposed models and the data. This disagreement comes from a limitation of the model in replicating the CMASS data, which comprises slightly 'bluer' galaxies than the ones in the LOWZ sample (Maraston et al., 2013), due to the broader range in both magnitude and colour accepted compared with other LRG samples (Tojeiro et al., 2012; Guo et al., 2013); this selection is intended to increase the number density of galaxies at higher redshift. Such a selection can be harder to capture with the simple HOD model used here, which could lead to discrepancies between the model and the data. Furthermore, the comparison between the error bars of the model and the data in Fig. 4 indicates that the HOD model introduces more uncertainty (around a factor of 2) on the scales where the disagreement is found.
We find no sign of any departure from GR for the LOWZ data, which confirms the conclusions reached by Satpathy et al. (2019), who measured the marked correlation function in redshift space for separations in the range \(6<s/(h^{-1}\,{\rm Mpc})<69\). Our results are presented in projected space, extending the calculation down to small scales with \(r_{\text{p}}\sim 0.5\,h^{-1}\) Mpc. We can calculate the goodness of fit for the LOWZ data, obtaining \(\chi^{2}_{\nu,\text{GR}}=0.76\) and \(\chi^{2}_{\nu,\text{F5}}=1.64\), which indicates that LOWZ fits the GR model better. However, the value of \(\chi^{2}_{\nu,\text{F5}}\) is not enough to rule out the F5 model with this data alone. For CMASS we note that the higher number density of the sample reduces the uncertainties, including the sample variance, which could help to constrain the models further (Seljak et al., 2009). Nevertheless, systematic effects make the data disagree with both models on scales between \(2<r_{\text{p}}/(h^{-1}\ \text{Mpc})<7\), which limits the conclusions we can reach from this dataset. We attribute such differences to the selection function of the CMASS sample, which retains a broader selection of magnitudes and colours than the LRG LOWZ sample. This can also be seen in Fig. 2, where the projected correlation function of the CMASS sample (grey squares) also behaves differently to the one from LOWZ (black dots). In conclusion, the LOWZ data is consistent with both the GR and F5 simulations. The same conclusion cannot be applied to CMASS, as the marked correlation function is more sensitive to its selection function.
This leads naturally to speculation about what would need to improve for the test proposed by Armijo et al. (2023) to be in a position to distinguish between currently viable gravity models. The dominant source of error on small scales, on which the marked correlation function is largest, is the allowed range of HOD models. Using a more sophisticated HOD model might improve the performance of the mock at reproducing the clustering measured for the CMASS sample. However, this would come at the expense of greater freedom in a larger HOD parameter space and presumably even greater uncertainty in the marked correlation function on small scales. Alternatively, the HOD model could be replaced by a calculation with less uncertainty, or equivalently, fewer parameters. For example, with a
Figure 4: Comparison of the uncertainty estimates for the marked correlation function, \(\mathcal{M}\), as a function of the scale \(r_{\text{p}}\), from considering the HOD modelling (green), the jackknife resampling (red) and the effect of shuffling the marks (blue). We use the GR HOD mock catalogues from Armijo et al. (2023) to calculate \(\mathcal{M}(r_{\text{p}})\).
higher resolution \(N\)-body simulation to hand, a sub-halo abundance matching approach could be used instead, assigning model LRGs to resolved subhalos.
The other way to reduce the uncertainty in the galaxy formation modelling is to improve the measurement of the number density of galaxies, for example by targeting fainter and therefore more abundant galaxies, or by obtaining a better measurement of the two-point correlation function. The latter improvement would be driven by sampling a larger survey volume. This will also have the side effect of potentially reducing the sample variance errors in the marked correlation function, though this is hard to judge without a calculation as the marked clustering is derived from the ratio of correlation functions taken from the same volume. Both of these objectives will be met by upcoming wide field surveys, such as the DESI survey of LRGs (Zhou et al., 2020, 2021).
## Acknowledgements
This work was supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. JA acknowledges support from CONICYT PFCHA/DOCTORADO BECAS CHILE/2018 - 72190634. PN and CMB are supported by the UK Science and Technology Funding Council (STFC) through ST/T000244/1. NDP acknowledges support from RAICES, RAICES-Federal and PICT-2021-I-A-00700 grants from the Ministerio de Ciencia, Tecnologia e Innovacion, Argentina. We acknowledge financial support from the European Union's Horizon 2020 Research and Innovation programme under the Marie Sklodowska-Curie grant agreement number 734374 - Project acronym: LACEGAL. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
|
2309.11079 | Application of Efron-Petrosian method to radio pulsar fluxes | We apply the Efron-Petrosian technique to radio fluxes of pulsars detected in
the Parkes multi-beam survey to test the independence of luminosity and
distance. For this dataset, we find that for four different distance exponents
(ranging from 0.5 to 2), the flux thresholds at which the luminosity and
distances are uncorrelated, correspond to very low $p$-values for the
Kolmogorov-Smirnov test between the truncated and untruncated datasets. This is
due to the fact that the Parkes multi-beam survey is not sufficiently
homogeneous to lend itself to a treatment by the Efron-Petrosian method. We
then repeat the analysis after rendering the dataset more homogeneous by
excluding the distant pulsars from this sample. We find that for this culled
dataset, the flux is consistent with distance exponents of 1.5 and 2.0. | Pragna Mamidipaka, Shantanu Desai | 2023-09-20T06:02:59Z | http://arxiv.org/abs/2309.11079v2 | # Application of Efron-Petrosian method to radio pulsar fluxes
###### Abstract
We apply the Efron-Petrosian technique to radio fluxes of pulsars detected in the Parkes multi-beam survey to test the independence of luminosity and distance. For this dataset, we find that for four different distance exponents (ranging from 0.5 to 2), the flux thresholds at which the luminosity and distances are uncorrelated, correspond to very low \(p\)-values for the Kolmogorov-Smirnov test between the truncated and untruncated datasets. This is due to the fact that the Parkes multi-beam survey is not sufficiently homogeneous to lend itself to a treatment by the Efron-Petrosian method. We then repeat the analysis after rendering the dataset more homogeneous by excluding the distant pulsars from this sample. We find that for this culled dataset, the flux is consistent with distance exponents of 1.5 and 2.0.
## I Introduction
Pulsars are rotating neutron stars which emit pulsed radio emission, with periods ranging from milliseconds to a few seconds and magnetic fields ranging from \(10^{8}\) to \(10^{14}\) G [1; 2]. They are known to be wonderful laboratories for a diverse suite of areas in physics and astronomy [3; 4], ranging from stellar evolution [5], dark matter [6] and tests of the equivalence principle [7] to Lorentz invariance violation [8].
In a series of recent works, Ardavan has applied the Efron-Petrosian (EP) technique [9] to probe the scaling dependence of the gamma-ray fluxes of pulsars [10] (A22, hereafter) as well as the X-ray emission of magnetars (neutron stars with magnetic fields \(>10^{14}\) G) [11] as a function of distance. Their analysis of three years of Fermi-LAT data [12] showed that a scaling of the flux density (\(F\)) with the distance (\(D\)) according to \(F\propto D^{-3/2}\) is favored at higher levels of significance compared to the inverse-square law scaling of \(F\propto D^{-2}\). This was subsequently confirmed using the 12 year Fermi-LAT catalog [13] in [14]. A similar conclusion was also obtained [11] based on the analysis of X-ray fluxes of magnetars compiled in the McGill magnetar catalog [15]. These results also agree with observational predictions of the theoretical model for the pulsar emission mechanism outlined in [16]. If confirmed, this result is very exciting and would allow us to gain insights into relativistic plasma physics in very strong magnetic fields and shed light on the pulsar emission mechanism, which is still not completely understood [17].
Previously, there have been claims that the radio pulsar fluxes do not follow an inverse square law in models involving superluminal polarization currents [18]. Such a model predicts an inverse linear scaling of the flux with distance. A claim was also made in the literature that the radio pulsar fluxes _do_ scale inversely with the first power of distance, according to \(F\propto D^{-1}\) [19; 20], based on the application of the Stepwise Maximum Likelihood Method [21], in accord with the theoretical model proposed in [18]. However, this result could not be confirmed using an independent analysis [22].
Nevertheless, the original theoretical model proposed in [18] has been superseded by the model in [16], which predicts that the fluxes of radio pulsars obey the inverse square law. Despite this, it behooves us to apply the same methodology as in A22 to radio pulsar fluxes. The aim of this work is to apply the EP method in the same way as A22 to radio pulsars to confirm any putative violation of the inverse-square law, which has been argued for gamma-ray fluxes.
The outline of this manuscript is as follows. We recap the EP method and its myriad applications in literature in Sect. II. The dataset and analysis for the observed radio pulsar population from the Parkes multibeam survey can be found in Sect. III. We conclude in Sect. IV. All logarithms in this manuscript are to the base 10.
## II Efron-Petrosian method
The EP method is a widely used technique in Astrophysics and Cosmology to account for selection biases or evolution while dealing with flux-limited or truncated samples [9]. The EP method has been applied to a diverse suite of astronomical objects such as quasars, gamma-ray bursts, magnetars, asteroids, solar flares, blazars etc [9; 23; 24; 25; 26; 27; 28; 29; 30]. This method has been used for a variety of science goals such as a probe of luminosity evolution, checking for intrinsic correlations between two astrophysical variables, precision tests of cosmological models, search for cosmological time dilation, tests of standard candles, etc. More details on the myriad applications of the EP technique can be found in the aforementioned references. We provide a brief description of the EP method. More details about this technique can be found in A22 or the above references.
Consider a flux limited catalog consisting of fluxes (\(F\)) obtained using a dedicated survey. If we assume that the pulsar flux scales with the distance (\(D\)) according to \(F\propto D^{-\alpha}\), the isotropic luminosity (\(L\)) of the pulsar is given in terms of \(F\) and \(D\) according to [14]:
\[L=4\pi l^{2}\left(\frac{D}{l}\right)^{\alpha}F. \tag{1}\]
As discussed in A22, \(l\) is a constant with dimensions of distance and is mainly used as a normalization constant 1. For the inverse-square law we get the familiar relation \(L=4\pi D^{2}F\).
Footnote 1: We note that, instead of \(L\) as defined in Eq. 1, the pulsar literature usually defines a pseudo-luminosity which does not contain the \(4\pi\) factor [31].
If we now assume that a given pulsar survey has a flux threshold \(S_{th}\), the corresponding truncation for the luminosity \(L_{th}\) scales with \(D\) according to
\[\log L_{th}=\log[4\pi l^{2-\alpha}S_{th}]+\alpha\log D \tag{2}\]
For any distance-luminosity pair given by (\(\log D_{i}\), \(\log L_{i}\)) one can find a suitable set of luminosity-distance points given by:
\[\log D \leq \log D_{i}\ \text{for i}=1...\text{n}. \tag{3}\] \[\log L \geq \log[4\pi l^{2-\alpha}S_{th}]+\alpha\log D, \tag{4}\]
where \(n\) is the number of pulsars not excluded by the flux threshold. All \((D,L)\) pairs which satisfy the above conditions are referred to as the "associated" [29] or "comparable" [10] set of \((D_{i},L_{i})\). The total number of such associated pairs corresponding to \((D_{i},L_{i})\) is equal to \(N_{i}\). We then determine the rank \(\mathcal{R}_{i}\) of \(L_{i}\) within its associated set of points, when the luminosities are sorted in ascending order.
The EP technique then computes the following normalized statistic (which is related to the Kendall-\(\tau\) statistic [27]) for all data points (\(n\)) greater than the flux threshold:
\[\tau=\frac{\sum\limits_{i=1}^{n}\left(\mathcal{R}_{i}-\mathcal{E}_{i}\right)} {\sqrt{\sum\limits_{i=1}^{n}\mathcal{V}_{i}}}, \tag{5}\]
where \(\mathcal{E}_{i}=\frac{1}{2}(N_{i}+1)\) and \(\mathcal{V}_{i}=\frac{1}{12}(N_{i}^{2}-1)\). The hypothesis of independence between \(L\) and \(D\) is tested using the absolute value of \(\tau\). If \(L\) and \(D\) are independent, then the value of \(\tau\) should be close to 0. If they are correlated, the values of \(\tau\) are large and the hypothesis of independence can be rejected at a high significance level. One can quantify this rejection of the hypothesis of independence of distance and luminosity by calculating a \(p\)-value, given by [9]:
\[p=\left(\frac{2}{\pi}\right)^{1/2}\int_{|\tau|}^{\infty}\exp(-x^{2}/2)dx \tag{6}\]
In terms of \(Z\)-score, one can reject the hypothesis that \(L\) and \(D\) are independent at significance equal to \(n\sigma\) if \(|\tau|>n\). In the literature, the EP method has been used in a couple of different ways. One way is to scale the luminosity with some power law of distance (or redshift) and find the distance exponent for which \(\tau=0\) (or \(|\tau|<1\)[27]),
which corresponds to the independence of the corrected luminosity and distance [30]. Alternatively, one can compare the relative significance of the independence hypothesis for different distance exponents. This is the approach adopted in A22 (and other recent works by Ardavan). We also note that the original EP method does not make any specific recommendation about the value of the flux threshold to use. The original EP method recommended using the instrumental limiting sensitivity for the flux threshold [24]. However, since this may sometimes be too low, an elevated value for the flux threshold has often been used [27]. Whether any one of these thresholds is relevant is dictated partly by the physics that underlies the problem, partly by the value of the probability of detection that these thresholds imply [32], and partly by the requirement that the cut dataset and the uncut dataset should both be drawn from the same parent distribution (the unknown distribution that is complete over all values of the flux). It has been pointed out that one obtains physically meaningful results from the EP method when the detection thresholds are chosen near the peak of the dataset histogram [32]. To determine the range of values of the detection threshold for which this latter requirement is satisfied, one can apply the Kolmogorov-Smirnov (KS) test to the dataset under consideration [10; 11; 32; 33; 14]. Bryant et al. [32] have shown many examples from the literature where misleading results were obtained because the flux thresholds were not chosen correctly. In most cases, one needs to evaluate \(\tau\) as a function of the flux threshold over a wide range of thresholds and check the corresponding KS test \(p\)-value at these flux thresholds in addition to the value of \(\tau\). This is the approach followed in the recent works by Ardavan.
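The construction of Eqs. (1)-(6) translates directly into code. The sketch below follows the standard definition of the comparable set (truncation line evaluated at \(D_{i}\)) and uses the identity \(p={\rm erfc}(|\tau|/\sqrt{2})\) for Eq. (6); variable names are illustrative and this is not the authors' published implementation.

```python
import numpy as np
from math import erfc

def ep_tau(flux, dist, alpha, s_th, ell=1.0):
    """Efron-Petrosian statistic (Eq. 5) and p-value (Eq. 6) for
    luminosities L = 4*pi*l^2 (D/l)^alpha * F truncated at flux s_th."""
    keep = flux > s_th
    logD = np.log10(dist[keep])
    logL = (np.log10(4.0 * np.pi * ell**2)
            + alpha * (logD - np.log10(ell)) + np.log10(flux[keep]))
    const = np.log10(4.0 * np.pi * ell**(2.0 - alpha) * s_th)
    num, den = 0.0, 0.0
    for i in range(logD.size):
        # comparable set of point i: D <= D_i and L above the truncation
        # line evaluated at D_i (Eqs. 3-4)
        assoc = (logD <= logD[i]) & (logL >= const + alpha * logD[i])
        N = assoc.sum()
        if N <= 1:
            continue                        # such points carry no information
        R = np.sum(logL[assoc] <= logL[i])  # ascending rank of L_i in the set
        num += R - 0.5 * (N + 1)            # R_i - E_i
        den += (N**2 - 1) / 12.0            # V_i
    tau = num / np.sqrt(den)
    return tau, erfc(abs(tau) / np.sqrt(2.0))  # p-value from Eq. (6)
```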
## III Dataset and analysis for observed radio pulsar fluxes
At the time of writing, there are a total of 3389 pulsars in the v1.70 ATNF pulsar catalog [34]. Some of these pulsars (for example, the first discovered radio pulsar) have been discovered serendipitously, and there is considerable heterogeneity in the surveys used to detect the pulsars. Therefore, to avoid any systematics related to different flux thresholds, which could lead to biased results [32], instead of applying the EP method to all pulsars, we decided to apply it to pulsars from only one specific survey, viz. the Parkes multi-beam survey [35]. The Parkes multi-beam survey covered the galactic plane with \(|b|<5^{\circ}\) and \(l=260^{\circ}\) to \(l=50^{\circ}\). The observations were made using a 13-beam receiver on the 64 m Parkes radio telescope, with two polarizations per beam at a frequency of 1374 MHz covering a bandwidth of 288 MHz. It has a limiting flux sensitivity of 0.2 mJy.
We first downloaded all the pulsars tagged as pksmb (detected in the Parkes multi-beam survey) in the ATNF catalog. In this way, we obtain a total of 1124 pulsars. Considering only the pulsars with period \(>0.01\) s (to remove the millisecond pulsars), we get 1095 pulsars. Further, after removing pulsars without valid flux and distance estimates, we get 1071 pulsars. For each pulsar, we downloaded the pulsar distances and the flux measured at 1400 MHz (\(S_{1400}\)), which was then converted from mJy to erg cm\({}^{-2}\) s\({}^{-1}\). In the ATNF catalog, the distances have been obtained from the dispersion measure using the YMW16 electron density model [36].
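As an illustration of how such a sample can be assembled, the sketch below queries the ATNF catalogue with the third-party psrqpy package and applies the stated cuts. The conversion factor assumes \(1\,{\rm mJy}=10^{-26}\,{\rm erg\,cm^{-2}\,s^{-1}\,Hz^{-1}}\), which is consistent with the flux scale of Fig. 1 (e.g. \(\log S\approx-26.4\) corresponds to \(\approx 0.4\) mJy); the authors' own scripts are linked in Sect. IV, and this sketch is not theirs.

```python
import numpy as np
from psrqpy import QueryATNF  # third-party package for querying the ATNF catalogue

# Query flux density at 1400 MHz, DM distance, period and survey tags
query = QueryATNF(params=['JNAME', 'P0', 'S1400', 'DIST', 'SURVEY'])
df = query.table.to_pandas()

# Keep Parkes multi-beam detections, drop millisecond pulsars and rows
# without valid flux/distance estimates
df = df[df['SURVEY'].astype(str).str.contains('pksmb', na=False)]
df = df[df['P0'] > 0.01].dropna(subset=['S1400', 'DIST'])

flux = df['S1400'].to_numpy() * 1e-26   # mJy -> erg cm^-2 s^-1 Hz^-1
dist = df['DIST'].to_numpy() * 1e3      # ATNF distances are in kpc -> pc
```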
The distribution of the logarithm of \(S_{1400}\) can be found in Fig. 1. For each of a range of flux thresholds, we use the two-sample KS test [37] between the truncated dataset, consisting of pulsars with flux greater than the given threshold, and the original untruncated dataset. The probability that both datasets are drawn from the same
Figure 1: (a) Histogram of the logarithm of radio fluxes of pulsars discovered in the Parkes multi-beam survey at 1400 MHz. (b) \(p\)-value from the KS test (\(p_{KS}\)) as a function of flux threshold, for testing whether the truncated versions of the dataset (after discarding pulsars according to a flux threshold) are drawn from the same distribution.
distribution can be characterized by the \(p\)-value from the KS test2, which we refer to as \(p_{KS}\). This plot of \(p_{KS}\) as a function of the flux threshold can be found in the right panel of Fig. 1. We find that the KS \(p\)-value decreases with increasing threshold, but turns around for flux thresholds above \(\log(S_{th})>-26\) and asymptotes to values of around 0.1 at the maximum observed fluxes. We note that the peak of the flux histogram (\(\log(S)\sim-26.4\)) corresponds to a very low value of \(p_{KS}=4.75\times 10^{-71}\). The distribution of the logarithm of flux as a function of the distance can be found in Fig. 2. The distribution of the luminosity (assuming an inverse square law) can be found in Fig. 3. The solid red line in Fig. 3 shows the flux threshold marked for a flux value of \(2.04\times 10^{-27}\) erg cm\({}^{-2}\) s\({}^{-1}\). The points below the red line are excluded from the EP analysis. The rectangular area indicated by the dashed lines shows the set comparable to one fiducial value of the distance and luminosity.
Footnote 2: We computed the two-sample KS test \(p\)-value using scipy in Python.
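The threshold scan behind the right panel of Fig. 1 amounts to the following minimal sketch, using scipy's two-sample KS test:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_vs_threshold(log_flux, thresholds):
    """Two-sample KS p-value between the full flux sample and the sample
    truncated at each threshold (cf. the right panel of Fig. 1)."""
    return np.array([ks_2samp(log_flux, log_flux[log_flux > t]).pvalue
                     for t in thresholds])

# Example scan: thresholds spanning the observed flux range, stopping
# short of the maximum so that the truncated sample is never empty.
# p_ks = ks_vs_threshold(log_flux,
#                        np.linspace(log_flux.min(), log_flux.max() - 0.5, 50))
```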
The distribution of the \(\tau\) value for different values of \(\alpha\) as a function of the flux threshold can be found in Fig. 4. The values of the flux threshold for which \(\tau\) becomes zero for each of the exponents are as follows. For \(\alpha=2\)
Figure 3: The corresponding luminosity-distance dataset for the flux-distance dataset in Fig 2. The solid red line represents the threshold given by \(\log(S_{th})=-26.690\). Those elements of this dataset that lie within (and on the boundary of) the rectangular area bounded by the vertical axis and the vertical and horizontal dashed lines (in green) comprise the set comparable to the data point (2.82, -19.46) on the vertical dashed line.
Figure 2: Scatter plot of the logarithm of radio pulsar flux as a function of the logarithm of distance.
\(\tau=0\) at \(\log(S_{th})=-26.56\). The \(p\)-value of the KS test at this flux threshold is equal to \(1.06\times 10^{-39}\). For \(\alpha=1.5\), \(\tau=0\) at \(\log(S_{th})=-26.72\). The corresponding \(p\)-value is equal to \(1.68\times 10^{-11}\). For \(\alpha=1\) and \(\alpha=0.5\), \(\tau=0\) at \(\log(S_{th})=-26.83\) (\(\alpha=1\)) and \(\log(S_{th})=-27.082\) (\(\alpha=0.5\)), which correspond to \(p_{KS}\) values of \(2.11\times 10^{-4}\) and \(9.86\times 10^{-1}\), respectively. Hence, the KS \(p\)-values are very small for the values of the flux at which \(\tau\) is equal to zero for all distance exponents, except \(\alpha=0.5\).
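Locating the thresholds quoted above amounts to scanning \(\tau\) over a grid of thresholds and interpolating the zero crossing, e.g. reusing the `ep_tau` sketch from Sect. II:

```python
import numpy as np

def tau_zero_crossing(flux, dist, alpha, thresholds):
    """Scan tau over flux thresholds and linearly interpolate the first
    zero crossing, reusing the ep_tau sketch from Sect. II."""
    taus = np.array([ep_tau(flux, dist, alpha, s)[0] for s in thresholds])
    cross = np.where(np.diff(np.sign(taus)) != 0)[0]
    if cross.size == 0:
        return None                      # tau does not change sign
    k = cross[0]
    t0, t1, y0, y1 = thresholds[k], thresholds[k + 1], taus[k], taus[k + 1]
    return t0 - y0 * (t1 - t0) / (y1 - y0)
```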
Therefore, prima facie, we cannot draw any conclusions about the correct distance exponent, since we obtain very small \(p\)-values from the KS test for the fluxes at which the luminosity and distances are uncorrelated for the higher distance exponents. We obtain a reasonable value of \(p_{KS}\) only for an unacceptably low value of \(\alpha\), i.e., for \(\alpha=0.5\). However, this does not point to a failing or deficiency of the method. From Fig. 2 we find that there is a clustering of pulsars at \(\log(D(pc))>3.2\). Therefore, the pulsars in the Parkes multi-beam survey data are not distributed uniformly with distance, because of which the dataset to which we have applied the EP method is not sufficiently homogeneous.
### Analysis with a culled dataset with \(\log(D)<3.2\)
In order to apply the EP analysis, we created a subsample of the aforementioned Parkes multi-beam survey data with \(\log(D)<3.2\), where \(D\) is expressed in pc, so that the dataset is nearly homogeneous as a function of distance. The
Figure 4: The Efron–Petrosian statistic \(\tau\) versus the logarithm of the flux threshold, for different values of \(\alpha\) for Parkes multibeam survey pulsars. The dashed lines correspond to values of \(\tau=\pm 1\). For \(\alpha=2\), the flux threshold touches zero at \(-26.56\), which lies outside the range of this plot.
Figure 5: (a) Histogram of the logarithm of radio fluxes of pulsars, for the reduced dataset with \(\log(D)<3.2\). (b) \(p\)-value from the KS test (\(p_{KS}\)) as a function of flux threshold, for testing whether the truncated versions of the dataset (after discarding pulsars according to a flux threshold) are drawn from the same distribution.
flux histogram of this culled sample, along with \(p_{KS}\) as a function of the flux threshold, can be found in Fig. 5. We then redid the same EP analysis as was done for the full sample. The plot of the EP \(\tau\) as a function of the flux threshold can be found in Fig. 6. Once again, we checked the values of \(p_{KS}\) at which \(\tau=0\) for all four distance exponents. For \(\alpha=2\), \(\tau=0\) for \(\log(S_{th})=-27.02\) with \(p_{KS}=1\). For \(\alpha=1.5\), \(\tau=0\) at \(\log(S_{th})=-27.09\), with \(p_{KS}=1\). For \(\alpha=1\), \(\tau=0\) at \(\log(S_{th})=-25.85\), with \(p_{KS}=3.15\times 10^{-8}\). Finally, for \(\alpha=0.5\), \(\tau=0\) at \(\log(S_{th})=-25.79\), with \(p_{KS}=2.51\times 10^{-8}\). Therefore, the values of the flux threshold at which \(\tau=0\) for distance exponents of 0.5 and 1 correspond to unphysically low values of \(p_{KS}\), implying that these distance exponents are not viable. However, the corresponding values for the distance exponents of 1.5 and 2.0 correspond to \(p_{KS}\) of one, implying that the truncated and the original datasets are similar. This shows that the fluxes of pulsars in the culled Parkes multi-beam survey dataset with \(\log(D)<3.2\) are consistent with distance exponents of 1.5 and 2.0.
## IV Conclusions
Recently, in a series of works Ardavan has demonstrated that the gamma-ray flux of pulsars as well as the X-ray flux of magnetars show a violation of the inverse-square law and the flux scales with distance according to \(F\propto D^{-3/2}\). This conclusion was based on the application of the EP method, where it was found that the independence of luminosity and distance is rejected at a higher level of significance for an inverse-square law scaling compared to \(D^{-3/2}\). These results agree with the theoretical predictions in [16] for the X-ray and gamma-ray fluxes. However, the same theoretical model predicts an inverse-square law for the radio fluxes.
We then replicated this procedure on the radio pulsar fluxes at 1400 MHz for pulsars discovered with the Parkes multi-beam survey. We found that the flux values for which the pulsar luminosity and distances are uncorrelated (using four different distance exponents) correspond to very low \(p\)-values for the KS test between the truncated and untruncated pulsar samples. This is due to the fact that the Parkes multi-beam survey dataset is not sufficiently homogeneous to lend itself to a treatment by the EP method, because of the clustering of pulsars at large distances.
Therefore, we rendered the data more homogeneous by creating a subset of the above dataset by removing the distant pulsars, and only using those pulsars with \(\log(D)<3.2\) and repeated the same analysis. We find that once again the values of the flux threshold for which the distance and luminosity are uncorrelated correspond to unphysically low \(p\)-values for the KS test for distance exponents of 0.5 and 1. However, the corresponding flux thresholds for distance exponents of 1.5 and 2 correspond to \(p\)-values of one. This shows that for this culled subsample of Parkes multi-beam survey data obtained after excluding distant pulsars, the flux is consistent with both the inverse-square law as well as with a flux scaling of \(F\propto D^{-1.5}\).
In the spirit of open science, we have made our analysis codes publicly available and these codes can be found at [https://github.com/Pymamid/EP-statistic-radio-pulsars.git](https://github.com/Pymamid/EP-statistic-radio-pulsars.git).
Figure 6: The Efron–Petrosian statistic \(\tau\) versus the logarithm of the flux threshold for the culled sample of Parkes multibeam survey pulsars (with \(\log(D)<3.2\)), for different values of \(\alpha\). The dashed lines correspond to values of \(\tau=\pm 1\). For \(\alpha=2\), \(\tau=0\) at \(\log(S_{th})=-27.02\), corresponding to \(p_{KS}\) equal to 1. For \(\alpha=1.5\), \(\tau=0\) at \(\log(S_{th})=-27.09\), with \(p_{KS}\) equal to 1. For \(\alpha=1\) and 0.5, \(\tau=0\) at \(\log(S_{th})=-25.85\) (\(\alpha=1\)) and \(\log(S_{th})=-25.79\) (\(\alpha=0.5\)), which correspond to \(p_{KS}=3.15\times 10^{-8}\) and \(2.51\times 10^{-8}\), respectively. Therefore, for this culled dataset, the pulsar fluxes are consistent with the distance exponents of 1.5 and 2.
###### Acknowledgements.
We are grateful to Maria Giovanna Dainotti and Manjari Bagchi for useful correspondence, as well as the anonymous referee for several useful comments and constructive feedback on our manuscript.
|
2309.14129 | Speaker anonymization using neural audio codec language models | The vast majority of approaches to speaker anonymization involve the
extraction of fundamental frequency estimates, linguistic features and a
speaker embedding which is perturbed to obfuscate the speaker identity before
an anonymized speech waveform is resynthesized using a vocoder. Recent work has
shown that x-vector transformations are difficult to control consistently:
other sources of speaker information contained within fundamental frequency and
linguistic features are re-entangled upon vocoding, meaning that anonymized
speech signals still contain speaker information. We propose an approach based
upon neural audio codecs (NACs), which are known to generate high-quality
synthetic speech when combined with language models. NACs use quantized codes,
which are known to effectively bottleneck speaker-related information: we
demonstrate the potential of speaker anonymization systems based on NAC
language modeling by applying the evaluation framework of the Voice Privacy
Challenge 2022. | Michele Panariello, Francesco Nespoli, Massimiliano Todisco, Nicholas Evans | 2023-09-25T13:32:09Z | http://arxiv.org/abs/2309.14129v3 | # Speaker anonymization using neural audio codec language models
###### Abstract
The vast majority of approaches to speaker anonymization involve the extraction of fundamental frequency estimates, linguistic features and a speaker embedding which is perturbed to obfuscate the speaker identity before an anonymized speech waveform is resynthesized using a vocoder. Recent work has shown that x-vector transformations are difficult to control consistently: other sources of speaker information contained within fundamental frequency and linguistic features are re-entangled upon vocoding, meaning that anonymized speech signals still contain speaker information. We propose an approach based upon neural audio codec (NACs), which are known to generate high-quality synthetic speech when combined with language models. NACs use quantized codes, which are known to effectively bottleneck speaker-related information: we demonstrate the potential of speaker anonymization systems based on NAC language modeling by applying the evaluation framework of the Voice Privacy Challenge 2022.
Michele Panariello,\({}^{1}\) Francesco Nespoli,\({}^{2}\) Massimiliano Todisco,\({}^{1}\) Nicholas Evans\({}^{1}\)

\({}^{1}\)EURECOM, France \({}^{2}\)Microsoft UK

Speaker anonymization, neural audio codec, language modeling
## 1 Introduction
_Speaker anonymization_ is the task of processing a speech signal to conceal the identity of the speaker while retaining the spoken content and other para-linguistic attributes such as intonation and prosody. As defined by the Voice Privacy Challenge [1], a speaker anonymization system should provide a certain trade-off between _privacy protection_ and _utility preservation_. The former is measured by the difficulty of an attacker to recover the identity of the original speaker from an anonymized signal via automatic speaker verification (ASV). The latter is assessed primarily by the reliability of an automatic speech recognition (ASR) system to transcribe the anonymized waveform, among other, secondary metrics such as pitch correlation and gain of voice distinctiveness [2].
Most speaker anonymization systems are based on often-incremental changes to the original work in [3], which operates upon three distinct components extracted from an input waveform: an F0 curve encoding prosodic information; some form of linguistic features encoding the spoken content; an x-vector [4] encoding the speaker identity. The x-vector is perturbed to conceal the identity of the speaker and then fed to a vocoder with the other two components in order to synthesize an anonymized waveform. This approach assumes that speaker information is contained entirely within the x-vector, even though this is known not to be the case [5, 6]. Residual speaker information captured in linguistic features and the F0 curve is re-entangled with the anonymized x-vector upon vocoding. An x-vector extracted afresh from the anonymized utterance then still contains speaker information which can be used by an adversary to reverse the anonymization and re-identify the speaker [5, 7]. Other researchers [5] have found that the level of speaker information contained in linguistic features can be reduced through their quantization.
Motivated by their successful application to numerous audio synthesis tasks [8, 9], we have sought to exploit the potential of neural audio codec (NAC) language modeling to design a speaker anonymization system that better suppresses speaker information and hence provides an improved trade-off between anonymization and utility. Such an approach is appealing since linguistic features are not used directly for waveform synthesis, as with previous approaches: instead, they are used to infer a set of NAC acoustic tokens with a language model. These features are quantized and therefore have the potential to bottleneck speaker information and improve anonymization. The final waveform is synthesized by decoding the acoustic tokens with a decoder neural network. We hope that the representational power of NACs should help to preserve speech quality and utility.
## 2 Related Work
In the following we describe some related research which provided motivation for the work presented in this paper.
**X-vector-based speaker anonymization -** The original x-vector-based pipeline introduced in [3] is the basis of much of the related work reported recently. An approach to dispense with the intermediate acoustic model was proposed in [2]. More refined x-vector anonymization functions were proposed in later work [10, 11], with some [12] achieving notable improvements to privacy protection levels, albeit under the assumption that the attacker does not have full knowledge of the anonymization system. Whatever the approach, x-vector perturbation does not prevent speaker-related information contained in the F0 curve and linguistic features
from _leaking_ into the anonymized waveform upon vocoding. As a result, the x-vector which can be re-extracted from the anonymized waveform by a privacy adversary who wishes to reidentify the speaker tends to _drift_ away from that at the vocoder input [13]. While the drift can be beneficial to anonymization, it hinders the design of more effective anonymization functions and can also be inverted by an adversary to undo the privacy protection [14]. Attempts to sanitize speaker information from linguistic features have also been explored, e.g. [7] based on the concept of differential privacy, which reports improvements to privacy at the cost of reduced utility and pitch correlation. The same issue was tackled in [5] by means of feature quantization, which is shown to be effective as a speaker information bottleneck, though with a degradation to utility. In this paper, we follow a similar approach, but propose a completely new synthesis pipeline in which we avoid the leakage of information between different speech components by design.
**NAC language modeling -** NACs were proposed recently for audio compression [15, 16]. They consist of convolutional autoencoders that compress audio to low-bitrate, tokenized representations which support high-fidelity decoding. Due to their discretized nature, encoded representations can be modeled using techniques normally used for language-related tasks, such as transformers. This idea was first introduced in [8] for a variety of audio generation tasks, and tailored to text-to-speech (TTS) in [9]. A transformer is used to convert the input (be it audio or text) to a set of high-level _semantic tokens_. These are then fed to another transformer which converts them into _NAC acoustic tokens_ which can be decoded to resynthesize an audio signal.
The same technique can also be applied to voice conversion [8, 17], and is hence ideally suited to speaker anonymization. NAC language models operate on quantized codes, which are known to be beneficial to privacy protection [5]. Moreover, such models appear to naturally disentangle linguistic information into semantic tokens, while encoding speaker information and recording conditions mostly in acoustic tokens [8]. Hence, we propose a speaker anonymization system whereby an input utterance is re-synthesized by means of a NAC language model. Semantic tokens are kept unchanged, while acoustic tokens are substituted with those of a different speaker, the goal being to preserve the linguistic content of the signal while suppressing information related to the original speaker.
## 3 Proposed Approach
### Neural audio codec language modeling
A diagram of the proposed system is shown in Figure 1. Following [9], it comprises a semantic encoder, a NAC (encoder and decoder), a pair of transformers and a pool of speaker prompts. They are described in the following.
**The semantic encoder** produces high-level semantic representations of the input signal using a codebook of \(N_{S}\) quantized embeddings. The output is a sequence of integers \(\mathbf{s}\in\{1,\dots,N_{S}\}^{T_{S}}\), where \(T_{S}\) is the number of frames and where each integer is a codeword index.
**The NAC** is an encoder-decoder architecture. The encoder maps input waveforms to a quantized, compressed representation from which the decoder reconstructs a high-fidelity waveform. Efficient compression is achieved with a set of \(Q\) hierarchical codebooks. Lower level codebooks capture coarser waveform characteristics, while finer details are captured by higher level codebooks. Following [8, 9], we refer to the first \(Q_{C}\) codebooks as 'coarse codebooks', and to the last \(Q-Q_{C}\) codebooks as 'fine codebooks'. All have \(N_{Q}\) codewords so that the output of the encoder is \(\mathbf{\tilde{a}}\in\{1,\dots,N_{Q}\}^{Q\times T_{A}}\), where \(T_{A}\) is the number of frames into which the input is divided.
**The coarse and fine transformers** estimate a set of acoustic tokens \(\mathbf{a}\) from a prompt of input semantic tokens \(\mathbf{s}\) and acoustic tokens \(\mathbf{\tilde{a}}\). Essentially, the transformers attempt to predict what semantic information should 'sound like' in the domain of quantized acoustic tokens. The coarse transformer autoregressively predicts coarse acoustic tokens, i.e. the codewords belonging to the coarse codebooks. More specifically, for frame \(t\), the transformer predicts the probability distribution of token \(\mathbf{a}_{q,t}\) conditioned on the following elements: the semantic prompt \(\mathbf{s}\), the coarse tokens from the
Figure 1: Diagram of the proposed anonymization system.
acoustic prompt \(\mathbf{\tilde{a}}_{\leq Q_{C},:}\), and all previous predictions.1 The modeled distribution is therefore
Footnote 1: In practice, the sequence upon which to perform regression is flattened to (\(\mathbf{s},\mathbf{\tilde{a}},\mathbf{a}_{1,1},\mathbf{a}_{2,1},\dots,\mathbf{a}_{Q_{C},1},\mathbf{a}_{1,2},\mathbf{a}_{2,2},\dots,\mathbf{a}_{Q_{C},2},\dots\)). See [8, 9] for more details.
\[p\left(\mathbf{a}_{q,t}|\mathbf{s},\mathbf{\tilde{a}}_{\leq Q_{C},:},\mathbf{a}_{\leq Q_{C},<t},\mathbf{a}_{<q,t}\right) \tag{1}\]
for \(q\in[1,Q_{C}]\). The fine transformer is instead non-autoregressive. It estimates the tokens of codebook \(q\) using all tokens belonging to codebooks \(<q\) and all tokens of the acoustic prompt \(\mathbf{\tilde{a}}\), thus modeling the distribution
\[p\left(\mathbf{a}_{q,:}|\mathbf{\tilde{a}},\mathbf{a}_{<q,:}\right) \tag{2}\]
for every \(q\in[Q_{C}+1,Q]\). Once the acoustic tokens \(\mathbf{a}\) have been predicted for all codebooks \(q\in[1,Q]\), they can be input into the decoder to synthesize an anonymized waveform.
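The following schematic illustrates the autoregressive decoding of Eq. (1), operating on the flattened sequence described in the footnote. Here `coarse_transformer` is a hypothetical stand-in for the actual Bark module, and the per-codebook vocabulary offsets used in practice are omitted for brevity:

```python
import torch

def generate_coarse(coarse_transformer, s, a_prompt, T_A, Q_C=2):
    """Schematic autoregressive decoding of coarse tokens (Eq. 1).
    `coarse_transformer` is a hypothetical stand-in that maps the flattened
    sequence (s, a~, a_{1,1}, a_{2,1}, a_{1,2}, ...) to next-token logits."""
    seq = torch.cat([s, a_prompt[:Q_C].T.reshape(-1)])  # frame-major flattening
    out = []
    for _ in range(T_A):                # frames
        for _ in range(Q_C):            # coarse codebooks within a frame
            logits = coarse_transformer(seq)
            probs = torch.softmax(logits, dim=-1)
            token = torch.multinomial(probs, num_samples=1)
            seq = torch.cat([seq, token])
            out.append(token)
    return torch.stack(out).view(T_A, Q_C).T            # shape (Q_C, T_A)
```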
**The pool of speaker prompts** is a set of acoustic tokens extracted by the NAC encoder from utterances belonging to a set of external speakers. Those speakers are referred to as _pseudo-speakers_, since they replace the original speaker in the anonymized utterance. As suggested in [8], acoustic tokens, especially the coarse tokens, can capture information related to the speaker identity. We use them to perform voice conversion, as detailed in the following.
### Anonymization technique
A set of semantic tokens \(\mathbf{s}\) is first extracted from the input utterance. They encode the high-level spoken content. Their quantization helps to suppress speaker-related information.
A pseudo-speaker is chosen by randomly selecting an acoustic prompt \(\mathbf{\tilde{a}}\) from the speaker prompt pool. Anonymization can be performed at either speaker or utterance levels. At the _speaker level_, anonymization is performed using the same speaker prompt for each utterance corresponding to any one speaker. In contrast, for _utterance level_ anonymization, a speaker prompt is selected at random for each utterance. While several anonymization systems include techniques to synthesize fictitious voices [5, 10, 11, 12], here we use real voices as pseudo-speakers to focus our analysis on the intrinsic anonymization capability of the NAC language model.
Prompted with \(\mathbf{s}\) and \(\mathbf{\tilde{a}}\), the coarse and fine transformers generate a set of acoustic tokens \(\mathbf{a}\) which reflect the semantic information of the original utterance, but the acoustic characteristics of the pseudo-speaker. Acoustic tokens \(\mathbf{a}\) are fed to the decoder which synthesizes the anonymized output waveform.
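Putting the pieces together, the anonymization procedure can be summarized by the following sketch. All callables are hypothetical stand-ins for the corresponding Bark/EnCodec modules rather than their real APIs, and the fixed speaker-to-prompt mapping shown for speaker-level anonymization is one possible choice:

```python
import random

def anonymize(wav, semantic_encoder, coarse_tf, fine_tf, nac_decoder,
              prompt_pool, level='speaker', speaker_id=None):
    """Schematic anonymization pipeline of Fig. 1. All callables are
    hypothetical stand-ins for the corresponding Bark/EnCodec modules."""
    s = semantic_encoder(wav)                    # semantic tokens of the input
    if level == 'speaker':
        # fixed speaker-to-prompt assignment: every utterance of a given
        # source speaker is mapped to the same pseudo-speaker
        idx = sum(str(speaker_id).encode()) % len(prompt_pool)
        a_prompt = prompt_pool[idx]
    else:
        a_prompt = random.choice(prompt_pool)    # utterance level: random draw
    a_coarse = coarse_tf(s, a_prompt)            # Eq. (1): coarse codebooks
    a = fine_tf(a_coarse, a_prompt)              # Eq. (2): all Q codebooks
    return nac_decoder(a)                        # anonymized waveform
```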
## 4 Experimental Setup
Our codebase is branched from Bark,2 an open-source, NAC-based TTS system which is very similar to VALL-E [9]. The modules described in Section 3.1 are all taken from Bark. The semantic encoder has a HuBERT backbone [19] and an LSTM [20] back-end which predicts the semantic token associated with the HuBERT feature vector output at each frame. The semantic dictionary is of size \(N_{S}=10000\). The coarse and fine transformers are 12-layer GPT-like models [21]. The NAC is EnCodec [16], which uses \(Q=8\) codebooks with \(N_{Q}=1024\) codewords each; the first \(Q_{C}=2\) codebooks are considered coarse. The difference between Bark and our system is that, being a TTS model, Bark estimates the semantic tokens \(\mathbf{s}\) corresponding to an input text using a further _semantic transformer_. In our implementation, we bypass the semantic transformer and use ground-truth semantic tokens extracted from the input waveform, thereby performing voice conversion instead of TTS. With this setup, we are able to use pre-trained Bark modules off-the-shelf, without the need for any training.
Footnote 2: The original source code is available at www.github.com/suno-ai/bark, though we built our system from the port included in the CoquiTTS library available at www.github.com/coqui-ai/TTS. Our source code, as well as audio samples, will be made available upon publication.
We adopt the Voice Privacy Challenge 2022 protocol [2] for evaluation. The test set comprises subsets of the _LibriSpeech_[22] and _VCTK_[23] databases. The pool of speaker prompts is taken from the Bark voice library. It consists of 130 utterances collected from speakers of different gender and nationality.3 The threat model is the _semi-informed_ attack described in [2]. Trial utterances are anonymized at the _speaker level_. The attacker is assumed to have access to the anonymization system. They anonymize a set of external data (librispeech-clean-360) at the _utterance level_ and use it to train an ASV system (a TDNN with a PLDA back-end [24]). They also have access to original (non-protected) enrollment utterances which they anonymize at the _speaker level_. The attacker thus has enrollment and trial utterances, both of which are anonymized, and uses an ASV system to verify whether they correspond to the same speaker. The attacker has no knowledge of which pseudo-speaker was used for anonymization on the test utterance and will hence likely select a different pseudo-speaker to anonymize the enrollment utterance. The privacy metric is the resulting EER estimated from a large number of ASV trials. Utility is assessed by training an ASR system on the same anonymized version of librispeech-clean-360 and by estimating the word error rate (WER) from anonymized test data. Two additional metrics are defined in the VoicePrivacy Challenge evaluation plan [2]. The first is the F0 curve correlation \(\boldsymbol{\rho}^{F0}\) between the original and anonymized utterances, which is used as a measure of prosody preservation. The second is the gain of voice distinctiveness \(G_{VD}\), which is used to estimate how well the anonymized voices of different speakers can be distinguished [2].
Footnote 3: www.github.com/suno-ai/bark/tree/main/bark/assets/prompts/v2
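For reference, the EER used as the privacy metric can be computed from ASV scores as in the following sketch (a standard construction, not specific to the challenge toolkit):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """EER from ASV scores: the operating point at which the false
    acceptance and false rejection rates are equal."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    labels = labels[np.argsort(scores)]          # sort by ascending score
    frr = np.cumsum(labels) / labels.sum()       # targets rejected below thr.
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()  # non-targets above
    k = np.argmin(np.abs(far - frr))             # closest crossing point
    return 0.5 * (far[k] + frr[k])
```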
We adopt the B1b baseline and the T11 participant system from the Voice Privacy Challenge 2022, in addition to the system pro
posed by Champion et al. [5] as baselines. System T11 is the non-TTS system that achieved the highest privacy level in the 2022 challenge.4 The work in [5] was the first to propose the use of codebook-based feature quantization.
Footnote 4: Results available at www.voiceprivacychallenge.org/results-2022. The overall highest privacy level was in fact achieved by a TTS-based system [11] that barely passed the prosody preservation requirement of scoring \(\mathbf{\rho}^{F_{0}}>0.3\). In general, TTS-based systems are known to almost completely erase speaker information, at the cost of a severe loss of intonation and prosody. Therefore, we do not include [11] in our comparative analysis.
## 5 Results
Results are shown separately for the LibriSpeech and VCTK test sets in Table 1. Our system achieves the highest privacy levels: 28.5% EER for LibriSpeech; 45.5% EER for VCTK. The substantially lower EERs of 17.5% and 28.0% achieved by the system of Champion et al. on the two test sets suggest that our quantization approach is more effective in removing speaker information than that proposed in [5]. In an effort to further suppress speaker information, Champion et al. also experimented with the addition of Gaussian noise to the input F0 curve. The improvements to privacy nonetheless result in a lower pitch correlation \(\mathbf{\rho}^{F0}\approx 0.55\).5 For our method, the pitch correlation is in the order of \(\mathbf{\rho}^{F0}\approx 0.7\) on average, and compares favorably with that of other systems in the literature [2]. In terms of privacy protection, our model also comfortably outperforms the T11 system, by 8% and 5% EER for the LibriSpeech and VCTK test sets respectively. The gain of voice distinctiveness for the T11 system is also low. This is not surprising since the system maps all speakers to similar pseudo-speakers. In contrast, our system gives values of \(G_{VD}\approx-2\), denoting substantially better speaker distinctiveness.
Footnote 5: This result is provided by courtesy of the main author of [5].
However, utility estimates for our model are lower than those of other systems. The WER increases from 4.2% (original data) to 7.5% for the LibriSpeech subset and from 12.8% to 18.9% for the VCTK subset. Similar issues have also been reported in the literature. The authors of [8] show that NAC copy-synthesis of the LibriSpeech test-clean dataset causes an increase in the WER of its own ASR system from 2.5% to 6%, with similar results being reported in [9]. Nevertheless, informal listening tests on our data do not reveal any notable artifacts or degradation to intelligibility.
In an attempt to shed light on the cause of this phenomenon, we repeated similar experiments using a different ASR architecture, namely that reported in [18], which is retrained according to the same setup described in Section 4. The issue persists: the WER increases from 2.5% to 4.6% for the LibriSpeech subset and from 7.6% to 15.5% for the VCTK subset. These findings suggest that the degradation to utility is more dependent on the NAC language model than on the ASR system. As suggested in [8], this could be due to the quality of some pseudo-speaker prompts, since the extracted fine acoustic tokens tend also to capture aspects of the (potentially poor) _recording conditions_, the characteristics of which are then transferred to anonymized outputs. More thorough experimentation to help us better understand this phenomenon is already underway.
## 6 Conclusions
We present a novel approach to speaker anonymization based on a neural audio codec (NAC) language model. Our system performs voice conversion by extracting a set of semantic tokens from an input signal and using them to estimate a set of acoustic tokens belonging to a different speaker, which in turn are used to synthesize an anonymized speech signal with a NAC decoder. The quantized nature of the semantic and acoustic tokens successfully bottlenecks speaker-related information delivering substantially improved anonymization performance without compromising prosody or speaker distinctiveness. While informal listening tests show that anonymized signals are of high quality and intelligibility, automatic transcription with a speech recognition system shows a modest degradation to utility. Future work should investigate strategies to better protect utility while retaining the benefits to privacy safeguard, such as using high-quality speaker prompts or fine-tuning parts of the system with utility preservation constraints.
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline \multirow{2}{*}{**System**} & \multicolumn{4}{c|}{_LibriSpeech_} & \multicolumn{4}{c}{_VCTK_} \\ & EER (\%) & WER (\%) & \(G_{VD}\) & \(\mathbf{\rho}^{F_{0}}\) & EER (\%) & WER (\%) & \(G_{VD}\) & \(\mathbf{\rho}^{F_{0}}\) \\ \hline \hline Original & 4.4 & 4.2 & 0 & 1 & 3.2 & 12.8 & 0 & 1 \\ Original (eval. pipeline of [18]) & 1.5 & 2.5 & n.a. & 1 & 1.1 & 7.6 & n.a. & 1 \\ \hline B1b [2] & 8.6 & 4.4 & -5.8 & 0.78 & 9.7 & 10.7 & -7.1 & 0.81 \\ \hline T11 [10] & 20.6 & 3.9 & -19.0 & 0.68 & 39.7 & 7.9 & -18.4 & 0.73 \\ \hline Champion et al. [5] & 17.5 & 4.5 & n.a. & 0.67 & 28.0 & 10.0 & n.a. & 0.73 \\ Champion et al. (noise on F0) [5] & 23.4 & 4.6 & n.a. & 0.52 & 40.8 & 10.3 & n.a. & 0.60 \\ \hline Ours & 28.5 & 7.5 & -1.5 & 0.68 & 45.5 & 18.9 & -2.1 & 0.74 \\ Ours (eval. pipeline of [18]) & 34.1 & 4.6 & n.a. & -- & 36.6 & 15.5 & n.a. & 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of the analyzed systems on the Voice Privacy Challenge 2022 test subsets. |
2309.07633 | Evolutionary-Based Online Motion Planning Framework for Quadruped Robot
Jumping | Offline evolutionary-based methodologies have supplied a successful motion
planning framework for the quadrupedal jump. However, the time-consuming
computation caused by massive population evolution in offline
evolutionary-based jumping framework significantly limits the popularity in the
quadrupedal field. This paper presents a time-friendly online motion planning
framework based on meta-heuristic Differential evolution (DE), Latin hypercube
sampling, and Configuration space (DLC). The DLC framework establishes a
multidimensional optimization problem leveraging centroidal dynamics to
determine the ideal trajectory of the center of mass (CoM) and ground reaction
forces (GRFs). The configuration space is introduced to the evolutionary
optimization in order to condense the searching region. Latin hypercube
sampling offers more uniform initial populations of DE under limited sampling
points, accelerating away from a local minimum. This research also constructs a
collection of pre-motion trajectories as a warm start when the objective state
is in the neighborhood of the pre-motion state to drastically reduce the
solving time. The proposed methodology is successfully validated via real robot
experiments for online jumping trajectory optimization with different jumping
motions (e.g., ordinary jumping, flipping, and spinning). | Linzhu Yue, Zhitao Song, Hongbo Zhang, Xuanqi Zeng, Lingwei Zhang, Yun-Hui Liu | 2023-09-14T11:54:50Z | http://arxiv.org/abs/2309.07633v1 | # Evolutionary-Based Online Motion Planning Framework for Quadruped Robot Jumping
###### Abstract
Offline evolutionary-based methodologies have supplied a successful motion planning framework for the quadrupedal jump. However, the time-consuming computation caused by massive population evolution in offline evolutionary-based jumping framework significantly limits the popularity in the quadrupedal field. This paper presents a time-friendly online motion planning framework based on meta-heuristic Differential evolution (DE), Latin hypercube sampling, and Configuration space (DLC). The DLC framework establishes a multidimensional optimization problem leveraging centroidal dynamics to determine the ideal trajectory of the center of mass (CoM) and ground reaction forces (GRFs). The configuration space is introduced to the evolutionary optimization in order to condense the searching region. Latin hypercube sampling offers more uniform initial populations of DE under limited sampling points, accelerating away from a local minimum. This research also constructs a collection of pre-motion trajectories as a warm start when the objective state is in the neighborhood of the pre-motion state to drastically reduce the solving time. The proposed methodology is successfully validated via real robot experiments for online jumping trajectory optimization with different jumping motions (e.g., ordinary jumping, flipping, and spinning).
## I Introduction
A crucial aspect of a quadrupedal robot's capacity to traverse various terrains is its ability to perform jumping motions in tough natural surroundings. To adapt to uneven terrains, many researchers have focused on locomotion via diverse gaits (e.g., bounding, walking). Some works have already yielded outstanding results, such as high-speed bounding in [1] and [2]. However, crossing over unavoidable obstacles (e.g., deep canals and roadblocks) usually requires a robust and high-performance jumping motion controller. A core difficulty for quadruped jumping is generating trajectories in real time under kino-dynamic constraints [3] (e.g., physical constraints). [4] and [5] have already achieved impressive results. However, the laborious calculation of offline trajectories makes such methods difficult or impossible to apply to tasks involving frequent re-planning. This motivates the development of a unified framework that supports online planning.
Some publications address jumping trajectory optimization issues subject to complex kino-dynamics constraints using Reinforcement Learning (RL). The RL approach has shown a remarkable capacity for complicated locomotion on legged robots [6, 7, 8]. Recently, some works have used RL to deal with the jumping of quadruped robots. Learned policies, inspired by cat landing behavior, were used to control the robot's posture in the landing phase [9]. However, few works focus on planning multiple complicated jumping motions using a single policy.
Gradient-based trajectory optimization is a commonly employed optimization method in robot jumping control. The MIT research used gradient-based optimization algorithms to enable the robot Cheetah 3 to jump onto a high desk (0.76 (m)) and Mini-Cheetah to cover a variety of jumping motions with an online 3-D jumping trajectory optimization approach [10, 11], respectively. Similar to [12], they use collocation-based optimization to build offline trajectories over dynamically-feasible barriers. Additionally, [13] utilized a mixed-integer convex program to circumvent the reference motion limits; however, this method must be optimized offline.
Heuristic algorithms can efficiently solve optimization problems with complex constraints, which supplies a new approach for generating jumping motions. Differential Evolution (DE) is a heuristic-based algorithm proposed by Storn and Price [15]. DE algorithms have been utilized in robotics,
Fig. 1: Various jumping motion experiments to validate the proposed online motion framework. (1) Two-leg vertical jumps with \(\theta=\frac{\pi}{3}\). (2) Back-flip jumps with \(\theta=-2\pi\). (3) Back-flip jumps from a 0.3 (m) high platform. (4) Four-leg vertical jumps reaching the max height of 0.7 (m).
signal processing, and other industries to address complicated optimization problems [20]. In our previous work, DE was employed to generate offline jumping trajectories for a quadrupedal robot [16]. However, this offline approach is inherently inferior for systems with online re-planning requirements. Moreover, the technique called Latin hypercube sampling (LHS) can be used to generate an initialization population for DE, increasing the convergence speed in low-dimensional spaces (typically fewer than 20 dimensions) [17, 18].
In our work, we try to accelerate the DE algorithm through three techniques: search space compression without losing jumping motion performance, careful selection of the initial population, and a warm start by pre-calculation. By adding the conditioned configuration space of the robot, the smaller searching space enables the DE approach to reduce the population and the number of iterations, hence shortening the optimization time. The LHS gives more uniform initial populations for the DE algorithm, which accelerates escaping from local minima. Also, when the desired state is in close proximity to solved states saved in the Pre-motion Library, pre-calculated optimization variables can be shared with the new evolution as a warm start. To sum up, the DE algorithm based on C-space, LHS, the optimization variables transformation, and the Pre-motion Library is used to construct the proposed framework.
In this work, we intend to answer the following questions:
a) How to design a time-friendly optimization framework for online motion planning using an evolution-based technique?
b) How do the configuration space, Latin hypercube sampling, and the optimization variables transformation accelerate the convergence speed of the optimization?
c) How to produce a series of trajectories for the Pre-motion Library?
Our primary contributions are as follows:
1. A time-friendly online motion planning framework for quadruped jumping based on the meta-heuristic Differential evolution, Latin hypercube sampling, and Configuration space (DLC) algorithm is proposed, which can generate various jumping trajectories online.
2. We creatively combine configuration space, Latin hypercube sampling, and the Pre-motion Library to reduce optimization time.
3. The algorithm has been verified online by various jumps on a real quadruped robot (see Fig. 1).
Moreover, the current study differs from our prior work [16], which only addresses how to solve a quadruped robot jumping problem with an evolutionary algorithm and does not consider how to overcome the time-consuming restriction.
## II Models and dynamics
The objective of this section is to present the centroidal dynamics and the 2D planar model for the DLC framework. The reduced-order dynamic model of jumping motion treats the robot as a single rigid body (SRB) with a specified moment of inertia for the optimization process. Moreover, this work applies the 2D planar model (i.e., sagittal and coronal planes) in the framework, as shown in Fig. 2 and Fig. 3. The vector \(\mathbf{x}\) denotes the system state.
\[\mathbf{x}:=[\mathbf{P}_{C}^{T}\quad\mathbf{\Theta}^{T}\quad\mathbf{V}_{C}^{T} \quad^{B}\mathbf{\omega}^{T}]^{T}\in\mathbb{R}^{12} \tag{1a}\] \[\mathbf{Q}:=[\mathbf{q}_{i}\quad\dot{\mathbf{q}}_{i}]\in\mathbb{R}^{24} \tag{1b}\]
where \(\mathbf{P}_{C}\in\mathbb{R}^{3}\) is the position of the robot center of mass (CoM) w.r.t. the inertial frame; \(\mathbf{\Theta}\in\mathbb{R}^{3}\) represents the Euler angles of the robot; \(\mathbf{V}_{C}\in\mathbb{R}^{3}\) is the velocity of the CoM; \({}^{B}\mathbf{\omega}\in\mathbb{R}^{3}\) is the angular velocity of the CoM represented in the robot frame \(B\). \(\mathbf{q}_{i}\in\mathbb{R}^{3}\) and \(\dot{\mathbf{q}}_{i}\in\mathbb{R}^{3}\) are the joint angles and velocities of each leg, where \(i\) indexes the feet. The GRFs \(\mathbf{u}:=[\mathbf{f}_{i}]\in\mathbb{R}^{12},\mathbf{f}_{i}\in\mathbb{R}^{3}\) are the control inputs of the dynamic system at each contact point, acquired by optimization. \(\mathbf{r}_{i}\) is the vector from the CoM to the robot foot. Then the body net wrench \(\mathcal{F}\in\mathbb{R}^{6}\) of the CoM is shown as follows:
\[\mathcal{F}=\left[\begin{array}{c}\mathbf{F}_{c}\\ \mathbf{\tau}_{c}\end{array}\right]=\sum_{i=1}^{4}\left[\begin{array}{c}\mathbf{f}_ {i}\\ \mathbf{r}_{i}\times\mathbf{f}_{i}\end{array}\right], \tag{2}\]
where \(\mathbf{F}_{c}\) and \(\mathbf{\tau}_{c}\) represent the total force and torque of CoM. Moreover, the simplified model for jumping motions decreases the 18 Degrees-of-Freedom (DoFs) to 7 (including 6 leg joints and an angle of the jumping plane, see Fig. 3).
Then the equations of the centroidal dynamics model [22] are given in (3) along with the coordinates defined in Fig. 2.
\[\ddot{\mathbf{P}}_{C}(t)=\frac{\sum_{i=1}^{4}\mathbf{f}_{i}}{m}-\mathbf{g} \tag{3a}\] \[\frac{\mathrm{d}({}^{B}\mathbf{I}\,\mathbf{\omega})}{\mathrm{d}t}=\mathbf{\tau}_{c}, \tag{3b}\]
where \(\mathbf{g}\in\mathbb{R}^{3}\) represents gravitational acceleration, and \({}^{B}\mathbf{I}\in\mathbb{R}^{3\times 3}\) is the robot's rotational inertia tensor, which is assumed constant in this work, with \(\mathrm{diag}({}^{B}\mathbf{I})=[0.07,0.26,0.242]^{T}\). Note that gravity acts through the CoM and therefore contributes no torque in (3b). In addition, our framework classifies jumping motions into four phases: four-foot contact, two-foot contact, flying phase, and landing phase.
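For concreteness, the following is a minimal Python sketch of one explicit-Euler integration step of the SRB model (2)-(3). It is illustrative only: the mass value is an assumption (only the inertia diagonal is given above), and a real implementation would use the framework's own phase handling.

```python
import numpy as np

m = 9.0                                  # assumed robot mass in kg (not stated above)
g = np.array([0.0, 0.0, 9.81])           # gravitational acceleration
I = np.diag([0.07, 0.26, 0.242])         # constant body inertia tensor from the text

def srb_step(p, v, omega, feet_r, feet_f, dt):
    """One Euler step of the centroidal dynamics.

    feet_r: (4, 3) vectors r_i from the CoM to each foot.
    feet_f: (4, 3) ground reaction forces f_i (zero for feet not in contact).
    """
    F = feet_f.sum(axis=0)                          # net force on the CoM, eq. (2)
    tau = np.cross(feet_r, feet_f).sum(axis=0)      # net torque about the CoM, eq. (2)
    a = F / m - g                                   # eq. (3a)
    domega = np.linalg.solve(I, tau)                # eq. (3b) with constant I
    return p + dt * v, v + dt * a, omega + dt * domega
```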
## III Jumping Motion Planning Framework
To design a time-friendly optimization framework for online motion planning, we improve the meta-heuristic **D**ifferential evolution algorithm by introducing **L**atin hypercube sampling to replace the random initial population, and the **C**onfiguration space to further reduce the optimization algorithm's search domain. Additionally, the Pre-motion Library is also used as a warm start.
Fig. 2: A model of a single rigid body (SRB) utilized in the framework for optimization. The blue arrow represents the position vector from the CoM to the foot, while the red arrow represents the Ground Reaction Forces (GRFs).
### _Optimization Formulation_
The objective of this section is to build the optimization problem and optimization objectives. Additionally, unlike the gradient-based method, our evolutionary-based optimization framework's cost function is a well-designed priority hierarchical fitness function.
\[\underset{D_{\text{opt}}}{\text{minimize}} 10^{L}-\sum_{n=3}^{L}\left(10^{n-3}\sigma_{n}W_{n}\right)+W_{1}\zeta\] (4) subject to \[\mathbf{x}\left(k+1\right)=\mathbf{x}\left(k\right)+\Delta\dot{\mathbf{x}} \left(k\right)\] \[\dot{\mathbf{x}}_{k+1}=g(\mathbf{u}_{k},\mathbf{x}_{k})\] \[\mathbf{x}_{k}\in\mathbb{X},k=1,2,\cdots,N\] \[\mathbf{u}_{k}\in\mathbb{U},k=1,2,\cdots,N\] \[\mathbf{q}_{k}\in\mathbb{Q},k=1,2,\cdots,N\] \[\mathbf{x}\left(0\right)=\mathbf{x}_{0},\mathbf{x}\left(N\right)=\mathbf{x}_{ \text{target}}\]
where \(\zeta=\int_{0}^{T}(|\mathbf{\tau}(t)\dot{\mathbf{q}}(t)|)dt\) is the energy consumption of the motion. \(D_{\text{opt}}\) denotes the optimization variables, whose meaning and selection are elaborated in the next section. \(\sigma_{n}\in\mathbb{R}\) measures the deviation between the system state produced during the evolutionary optimization iterations and the feasible set specified by the C-space. \(W_{n}\in[0,1]\) is a weight that indicates the significance of one constraint to the optimization problem. \(N\) is the evolution population number (see Algorithm 1); \(L\in\mathbb{R}\) is the total number of layered priority constraints. \(\mathbb{X}\), \(\mathbb{U}\) and \(\mathbb{Q}\) are the feasible sets according to the kino-dynamics constraints. \(\mathbf{x}\left(0\right)\) and \(\mathbf{x}\left(N\right)\) are the initial state and the desired state of the robot; \(\dot{\mathbf{x}}_{k+1}=g(\mathbf{u}_{k},\mathbf{x}_{k})\) represents the combined form of the centroidal dynamics with respect to (3a) and (3b).
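A rough Python sketch of such a layered-priority fitness is given below. It only illustrates the idea in (4) that each constraint layer lives on its own power-of-ten scale, so a higher priority layer always dominates lower ones; the indexing convention and the interpretation of the \(\sigma_n\) scores are assumptions.

```python
def hierarchical_fitness(sigmas, weights, energy, w1, L):
    """Layered-priority fitness in the spirit of (4), to be minimized:
    10^L - sum_{n=3..L} 10^(n-3) * sigma_n * W_n + W_1 * zeta.

    sigmas, weights: sequences indexed so sigmas[n] is valid for n = 3..L;
    larger sigma_n (better satisfaction of layer n) lowers the cost.
    """
    value = 10.0 ** L
    for n in range(3, L + 1):
        value -= (10.0 ** (n - 3)) * sigmas[n] * weights[n]
    return value + w1 * energy
```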
### _Optimization Parameters and Transformation_
The objective of this section is to design optimization variables for the DLC framework of different jump motions. We utilize the system state as the optimization variable instead of employing polynomial parameters.
Here, the scenario of front jumping in the sagittal plane illustrates how to generate the optimization variables and perform the optimization variables transformation.
**Assumption 1**: The force along the y-axis is zero.
**Assumption 2**: Leg 0 and leg 1 have equivalent forces, the same as the rear two legs. That is, \(\mathbf{f}_{0}=\mathbf{f}_{1}\) and \(\mathbf{f}_{2}=\mathbf{f}_{3}\).
**Assumption 3**: The x-axis forces of the front and back feet are equal when \(t\in[0,t_{1}]\). Based on these assumptions, the equation of the GRFs of the front jump can be simplified as follows:
\[\mathbf{f}_{i} =\left\{\begin{array}{cc}a_{1}t+a_{0}&t\in[0,t_{1}]\\ b_{2}t^{2}+b_{1}t+b_{0}&t\in[t_{1},t_{2}]\\ 0&t\in[t_{2},t_{3}]\end{array}\right., \tag{5a}\] \[\Lambda =[a_{0},a_{1},b_{0},b_{1},b_{2}], \tag{5b}\]
By using a 2D simplified model of the front jumping motion, the robot state can be represented as \(\mathbf{s}_{\Omega}(t)=[x_{c},z_{c},\theta]\), where \([x_{c},z_{c}]\) is the position of the CoM and \(\theta\) is the pitch angle. We can get the analytical expression of \(\mathbf{s}_{\Omega}(t)\) from the GRFs given in (5a) using the centroidal dynamics model. In addition, there are 12 polynomial coefficients for one jump motion according to Assumption 2. The robot's states are therefore natural quantities to utilize for optimization. Furthermore, to easily bound \(\Lambda\), we convert the polynomial coefficients into expressions based on \(\mathbf{s}_{\Omega}(t)\). We choose the robot states at four time points (\([\frac{t_{1}}{2},t_{1},t_{2},t_{3}]\)) together with the three durations of the different jumping phases. Then, we can select the optimization variables given in (6).
\[\mathbf{D}_{\text{opt}}:=[\mathbf{s}_{\Omega}(\frac{t_{1}}{2}),\mathbf{s}_{\Omega}(t_{1}), \mathbf{s}_{\Omega}(t_{2}),t_{opt}]^{T}\in\mathbb{R}^{12}, \tag{6}\]
```
input : s_t, O_k, D*_res, Maxgen, NP, D_opt, r, eps
output: D_res
g <- 1;  k <- [s_t, O_k];  s_m in R^12 <- D_opt
Omega_s  <- { s_m | s_(m,1~9) in Omega_C, s_(m,10~12) in Omega_T }
Omega_s* <- { s_m in Omega_s : ||s_m - D*_res|| < r }
if ||s_t - s*_t||_2 < eps then          # warm start near a Pre-motion Library solution
    s_m(g) <- LHS(Omega_s*, NP)
else
    s_m(g) <- LHS(Omega_s, NP)
end if
while Fitness(D_res(g), k) > eps and g < Maxgen do
    for m <- 1 to NP do
        # mutation and crossover
        for n <- 1 to 12 do
            v_(m,n)(g) <- M(s_(m,n)(g))
            u_(m,n)(g) <- C(s_(m,n)(g), v_(m,n)(g))
        end for
        # selection
        if Fitness(U_m(g), k) < Fitness(s_m(g), k) then
            s_m(g) <- U_m(g)
            if Fitness(s_m(g), k) < Fitness(D_res(g), k) then
                D_res <- s_m(g)
            end if
        end if
    end for
    g <- g + 1
end while
```
**Algorithm 1** DLC Algorithm
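For readers who wish to experiment with the loop above, the following is a minimal, self-contained Python sketch of a DE/rand/1/bin optimizer with a Latin hypercube initial population. It illustrates the technique only: the box bounds stand in for the C-space and T-space region, and the population size, F, CR, and the sphere test function are assumptions rather than the paper's tuned settings.

```python
import numpy as np
from scipy.stats import qmc

def dlc_like_de(fitness, lower, upper, NP=40, maxgen=200, F=0.7, CR=0.9, eps=1e-6):
    """DE/rand/1/bin with an LHS initial population over a box search region."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    # Latin hypercube sampling spreads the initial population more uniformly
    # than uniform random draws, which helps escape local minima early.
    pop = qmc.scale(qmc.LatinHypercube(d=dim).random(NP), lower, upper)
    fit = np.array([fitness(x) for x in pop])
    for _ in range(maxgen):
        for m in range(NP):
            idx = np.random.choice([i for i in range(NP) if i != m], 3, replace=False)
            a, b, c = pop[idx]
            v = np.clip(a + F * (b - c), lower, upper)     # mutation
            mask = np.random.rand(dim) < CR
            mask[np.random.randint(dim)] = True            # crossover (keep >= 1 gene)
            u = np.where(mask, v, pop[m])
            fu = fitness(u)
            if fu < fit[m]:                                # greedy selection
                pop[m], fit[m] = u, fu
        if fit.min() < eps:                                # early stop at target fitness
            break
    return pop[fit.argmin()], fit.min()

# Toy usage on a 12-D sphere function; a warm start would simply shrink the
# bounds to a radius-r box around a Pre-motion Library solution D*_res.
best, value = dlc_like_de(lambda x: float(np.sum(x**2)), [-1] * 12, [1] * 12)
print(best.round(3), value)
```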
### _C-space and Kino-dynamic Constraints_
This section aims to build the configuration space (C-space). We introduce the kino-dynamic constraints, including joint constraints, contact force constraints, and friction constraints [16], to generate the C-space, which makes this optimization problem lie in a much smaller searching region. Inspired by Ding's work [13], we search for \(\mathbf{s}_{\Omega}(\mathbf{t}_{\text{opt}})\) and \(\mathbf{t}_{\text{opt}}\) of \(\mathbf{D}_{\text{opt}}\) in two independent spaces, the configuration space (C-space) \(\mathbf{\Omega}_{C}\subset\mathbb{R}^{3}\) and the time space (T-space) \(\mathbf{\Omega}_{T}\subset\mathbb{R}^{3}\), respectively. The definitions of \(\mathbf{\Omega}_{C}\) and \(\mathbf{\Omega}_{T}\) are as follows:
\[\mathbf{\Omega}_{C}:=\{\mathbf{s}_{\Omega}\in\mathbb{R}^{3}\ |\ \mathbf{q}_{ \text{min}}<\mathbf{q}(\mathbf{s}_{\Omega})<\mathbf{q}_{\text{max}}, \tag{7}\] \[\mathbf{z}_{\text{hip}}(\mathbf{s}_{\Omega})>\mathbf{z}_{\text{min}},\] \[\mathbf{z}_{\text{knee}}(\mathbf{s}_{\Omega})>\mathbf{z}_{\text{min}}\},\] \[\mathbf{\Omega}_{T}:=\{\mathbf{t}_{\text{opt}}\in\mathbb{R}^{3}\ |\ 0.1<\mathbf{t}_{ \text{opt}}<0.5\},\]
where \(\mathbf{\Omega}_{C}\) is the set of robot configurations \(\mathbf{s}_{\Omega}\) in different jumping tasks and phases w.r.t. the world frame that satisfies the joint angle and joint position constraints. The joint position constraint (on \(z_{\text{hip}}\) and \(z_{\text{knee}}\)) means that the hip and knee joints should not be in contact with the ground during the jump. \(\mathbf{\Omega}_{T}\) is the manually selected time range of the four-feet-contact, two-feet-contact, and flight jumping phases. For \(\mathbf{\Omega}_{C}\), due to the complex relationship between \(\mathbf{s}_{\Omega}\) and \(\mathbf{q}\), \(\mathbf{z}_{\text{hip}}\), and \(\mathbf{z}_{\text{knee}}\), the shape of \(\mathbf{\Omega}_{C}\) is difficult to describe with analytical formulas. The value ranges of the three elements of \(\mathbf{s}_{\Omega}\) depend on hardware limitations. We split each value range into 50 equal parts to get 125000 grid points. Finally, the shape of \(\mathbf{\Omega}_{C}\) can be obtained (see Fig. 5) by removing the points that do not satisfy the constraints of joint angles and joint positions. Therefore, for different jumping tasks and feet contact modes, the DLC algorithm can directly optimize \(\mathbf{s}_{\Omega}\) in the corresponding \(\mathbf{\Omega}_{C}\) to speed up the process of finding a locally optimal \(\mathbf{s}_{\Omega}\).
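The grid-and-filter construction described above can be sketched in a few lines of Python. The feasibility test is a placeholder for the robot-specific inverse kinematics and constraint checks in (7), which are not reproduced here.

```python
import numpy as np

def build_cspace(bounds, feasible, n=50):
    """Enumerate an n x n x n grid over the value ranges of s_Omega and keep
    only points satisfying the joint angle and joint position constraints.

    bounds:   three (lo, hi) pairs, one per element of s_Omega.
    feasible: callable wrapping the IK-based constraint test of (7); this is
              a stand-in for the robot-specific check, not shown here.
    """
    axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    mask = np.array([feasible(s) for s in grid])   # 125000 checks for n = 50
    return grid[mask]                              # the retained points of Omega_C
```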
### _Pre-motion Library_
This section aims to establish an offline library about a set of the 12 local optimal optimization variables (\(\mathbf{D}_{\text{res}}\)) called the Pre-motion Library to accelerate online optimization.
The central idea behind the Pre-motion Library is to generate offline a set of \(\mathbf{D}_{\text{res}}\), each consisting of the 12 locally optimal optimization variables (\(\mathbf{D}_{\text{res}}^{*}\)). According to a fixed step size (\(\sim 0.05\ m\)), \(\mathbf{s}_{\Omega}\) is uniformly divided to obtain the target states of the robot. Then, the obtained target robot states are input into the DLC framework. The corresponding \(\mathbf{D}_{\text{res}}^{*}\) are saved and collected to form the Pre-motion Library. The library comprises 567 \(\mathbf{D}_{\text{res}}\) representing various kinds of jumping motion (e.g., front/rear/side, backflip, side flip, two/four-leg jump). When re-planning is required, pre-calculated optimization variables can be shared with a new evolution as a warm start.
An index file (YAML file) maintains all of the CoM's pre-motion trajectories. In addition, the Pre-motion file is approximately 30 megabytes (MB) in size.
Fig. 4: Overview of Online DLC optimization jumping framework, based on LHS, DE, and C-space. The red dot line means using the trajectory from the Pre-motion Library. The motion planning procedure is shown by the blue blocks. The low-level controller is shown by the green blocks.
Fig. 5: The 3-dimensional (3D) configuration space for different jumping tasks and contact modes. (a) Front jump configuration space with the front, rear, and four feet contact modes. (b) Side jump configuration space with left, right, and four feet contact modes.
Fig. 6: The optimization time with the Pre-motion Library of four-leg front jump motion and convergence comparison of DE algorithm with and without LHS. (a) The DLC algorithm running time of the four contact front jumping task with \(\mathbf{D}_{\text{res}}^{*}\) of \(\mathbf{s}_{\Omega}^{*}=[0.6,0.2,0]\) in Pre-motion Library. The high-level information \(\mathbf{s}_{t}=[x_{c},z_{c},0]\), where \(x_{c}\in[0.5,0.7]\ m,z_{c}\in[0.2,0.4]\ m\) and a sampling point was taken every 0.005 (m) in the two directions. (b) DE algorithm with and without LHS.
It will be loaded into memory at the start of the controller's engine. Given the high-level information \(\boldsymbol{s}_{\Omega}\) as input, the desired trajectory is retrieved from the library based on the minimum Euclidean distance, subject to a specified threshold (0.05 (m)).
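A minimal sketch of this lookup, assuming the library is held as an array of terminal states alongside their stored solutions:

```python
import numpy as np

def library_lookup(states, solutions, s_t, threshold=0.05):
    """Return the stored D*_res whose terminal state is nearest to s_t,
    or None when no library entry lies within the distance threshold."""
    d = np.linalg.norm(np.asarray(states) - np.asarray(s_t), axis=1)
    i = int(d.argmin())
    return solutions[i] if d[i] < threshold else None
```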
### _Online DLC Algorithm_
After the C-space, the Pre-motion Library, and the optimization variables are established, the online DLC framework can be introduced.
The online DLC algorithm utilizes the prioritized fitness function (see (4)) to search for solutions in the C-space. We introduce \(\boldsymbol{\Omega}_{C}\) and \(\boldsymbol{\Omega}_{T}\) into the searching space \(\boldsymbol{\Omega}_{s}\) to limit the mutation region. Then, in contrast to the conventional DE algorithm's random population initialization, we use Latin hypercube sampling (LHS) to produce a more uniform initial population distribution [29], which improves the convergence speed of the algorithm iterations (see Fig. 6(b)). For an objective robot state \(\boldsymbol{s}_{t}\) whose Euclidean distance to \(\boldsymbol{D}_{res}^{*}(t_{3})\) is less than the threshold \(\epsilon\), the initial population is drawn from the neighborhood of the corresponding Pre-motion Library solution; otherwise, the algorithm continues to use LHS over the full search space for initialization.
The details of the DLC algorithm are shown in Algorithm 1, where \(\boldsymbol{s}_{t}\in\mathbb{R}^{3}\) is the desired position and Euler angle of the CoM and \(\boldsymbol{O}_{k}\in\mathbb{R}^{12}\) is the location of obstacles coming from the high-level information in our framework. Maxgen and \(NP\) represent the maximum generation and population numbers, respectively. \(r\) is the neighborhood radius of \(\boldsymbol{D}_{res}^{*}\). \(\epsilon\) is the fitness value at which the algorithm stops and returns \(\boldsymbol{D}_{res}\), which is usually less than \(\beta\). \(g\) is the number of the DLC generations; \(M(\cdot)\), \(C(\cdot)\), and \(\text{LHS}(\cdot)\) denote the mutation, crossover, and Latin hypercube sampling functions. \(\boldsymbol{U}_{m}(g)\) is the trial vector w.r.t. the optimization parameters.
## IV Implementation Details and Experiments
This section's primary objective is to experimentally verify the efficacy and adaptability of the proposed framework via various jumping motion types with the open-source MIT Mini-Cheetah [28]. The jumping controller uses a joint-level PD controller with the DLC-generated torque, and a first-order low-pass filter for \(\boldsymbol{q}\) and \(\boldsymbol{\tau}\) is used in the landing controller. In order to protect the mechanical components of the robot, compliant (flexible) landing control is necessary; here we employ relatively small PD gains [16].
To study the solving efficiency of the proposed framework, we conduct experiments on the trajectory optimization in both simulation and on the real robot. We do not simply repeat the verification of the feasible solutions in the jump library; rather, we conducted a small-scale randomization (\(\pm 0.05\) (m) of the target position) of each offline-obtained feasible solution to evaluate the algorithm's adaptability. The optimization time for general motions (without touching the C-space boundary and hardware limits) is often less than 0.3 (s) (see Fig. 7) with the Pre-motion Library, but it takes about 3-9 seconds to optimize motions at extreme boundaries (such as the highest vertical jump or the highest double-leg jump height). The average solving time is shown in Table I. Using the back-flip as an illustration, the average optimization time drops from 197 (s) to 0.79 (s), making the solution approximately 200 times faster. The DE algorithm with LHS typically requires fewer iterations than the conventional initial population technique (see Fig. 6(b)). Our experiments are organized into five categories: jumping motions, flipping motions, flipping from a platform, yaw-spin jumps, and vertical jumps. The supplementary video contains demonstrations of the experiments.
In the flipping motion studies, our framework is employed to validate the back-flip, left-flip, and flip from a platform (see Fig. 8 and Fig. 1). The robot can perform a backflip from a 34-centimeter-high platform and land safely. For the second DLC technique, the offline-generated library is maintained onboard. The experimental data for back-flip jumping are depicted in Fig. 9. The data show that the torque saturates at the maximum joint torque of 24 (Nm), indicating that the robot requires a great deal of energy to leave the ground. Initially, we optimized the jumps using the robot's
Fig. 7: The DLC framework solving time on four-leg jump, two-leg jump, and back-flip with or without Pre-motion library. The solving time has random noise (\(\pm 0.05\) (m)) at desired \(\boldsymbol{s}_{\Omega}\). (a), (c) and (e) are the back-flip, four-leg jumping, and two-leg jumping solution times with the Pre-motion Library. (b), (d) and (f) are the solution time without the Pre-motion Library of those three jump motions.
own computer (Intel ATOM x5-Z8350); however, even with pre-motion, it takes the robot around \(\sim 3\) (s) or even longer to optimize due to computational restrictions. Hence, a remote computer performed the optimization online and sent the trajectory to the robot through UDP.
## V Conclusions
In this paper, a novel online evolutionary-based time-friendly optimization motion planning framework has been proposed for quadruped jumping. Experiments show that an evolutionary-based method can be an alternative approach to solving the complicated motion planning problems of legged robots. The optimization variables transformation and the C-space compress the DE searching region, and Latin hypercube sampling gives a more uniform initial population with limited points, which enhances the ability of the DE algorithm to escape from local minima. These three core contributions give a clear improvement in the convergence speed of the evolutionary algorithm. In particular, a small-scale random perturbation of a feasible solution in the Pre-motion Library still preserves approximately the same convergence speed. At the same time, feasible solutions from the Pre-motion Library used as a warm start can significantly boost the framework's optimization progress. Experimental results indicate a significant reduction in convergence time compared with our previous work [16].
Additionally, our framework prioritizes optimizing time consumption rather than the landing precision of jumping. Moreover, the optimization time for extreme cases that touch the C-space boundary (e.g., jumping onto a high enough desk (0.4 (m)) or side-flipping over high enough obstacles) still needs to be shortened.
|
2309.17420 | The Flux Operator | Converged computing brings together the best of both worlds for high
performance computing (HPC) and cloud-native communities. In fact, the economic
impact of cloud-computing, and need for portability, flexibility, and
manageability make it not important, but inevitable. Navigating this uncharted
territory requires not just innovation in the technology space, but also effort
toward collaboration and sharing of ideas. With these goals in mind, this work
first tackles the central component of running batch workflows, whether in
cloud or HPC: the workload manager. For cloud, Kubernetes has become the de
facto tool for this kind of batch orchestration. For HPC, the next-generation
HPC workload manager Flux Framework is analogous -- combining fully
hierarchical resource management and graph-based scheduling to support
intelligent scheduling and job management. Convergence of these managers would
mean the implementation of Flux inside of Kubernetes, allowing for hierarchical
resource management and scheduling that scales impressively without burdening
the Kubernetes scheduler itself. This paper introduces the Flux Operator -- an
on-demand HPC workload manager that is easily deployed in Kubernetes. The work
here highlights design decisions, mapping of components between environments,
experimental features, and shares the results of experiments that compare
performance with an equivalent operator in the space, the MPI Operator.
Finally, discussion closes with a review of challenges remaining, and hopes for
the future for improved technological innovation and collaboration. | Vanessa Sochat, Aldo Culquicondor, Antonio Ojea, Daniel Milroy | 2023-09-29T17:29:35Z | http://arxiv.org/abs/2309.17420v1 | # The Flux Operator
###### Abstract
Converged computing brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. In fact, the economic impact of cloud-computing, and need for portability, flexibility, and manageability make it not just important, but inevitable. Navigating this uncharted territory requires not just innovation in the technology space, but also effort toward collaboration and sharing of ideas. With these goals in mind, this work first tackles the central component of running batch workflows, whether in cloud or HPC: the workload manager. For cloud, Kubernetes has become the de facto tool for this kind of batch orchestration. For HPC, the next-generation HPC workload manager Flux Framework is analogous - combining fully hierarchical resource management and graph-based scheduling to support intelligent scheduling and job management. Convergence of these managers would mean the implementation of Flux inside of Kubernetes, allowing for hierarchical resource management and scheduling that scales impressively without burdening the Kubernetes scheduler itself. This paper introduces the Flux Operator - an on-demand HPC workload manager that is easily deployed in Kubernetes. The work here highlights design decisions, mapping of components between environments, experimental features, and shares the results of experiments that compare performance with an equivalent operator in the space, the MPI Operator. Finally, discussion closes with a review of challenges remaining, and hopes for the future for improved technological innovation and collaboration.
## 1 Introduction
Portability, manageability, and modularity of complex, heterogeneous workflows is becoming increasingly important for high performance computing (HPC). In particular, the need for workflows to be extended to cloud environments is a key component of collaboration across an increasingly diverse set of computational resources, and a likely solution for "green computing" to ensure energy efficiency and optimal usage of shared resources [1]. Other demands for flexibility of compute include the increasing use of internet of things "IoT" remote devices to conduct research [2, 3], an explosion of hardware available on cloud platforms [4, 5], and the dynamic addition of external resources [6]. A powerful demonstration of need also comes from a series of events organized by the European Commission [7] to assemble experts for discussion on innovation in the "computing continuum," citing a strong need for flexibility for distributed systems, green and dynamic technologies, and an emphasis on open source software. The discussion continues with workshops [8] emphasizing the importance of shaping Europe's digital future. Given this landscape, any entity involved in the business of scaled computing will fall behind if these technological needs are not prioritized [4].
In cloud computing communities, machine learning workloads are also becoming increasingly important [9, 10, 11, 12], and the cloud container orchestration technology Kubernetes [13] has become the de facto standard for orchestration of these workflows. As of June of 2023, the Kubernetes project had approximately 74,000 contributors, making it the second largest open source project ever after Linux, and the "most widely used container orchestration platform in existence" [13]. Outside of the academic community it is the chosen platform for orchestration, being used at over 70% of Fortune 500 companies [13]. In recent years [14], the growing need for supporting batch workflows [15] has led to the batch working group. This group works on the design and implementation of application programming interfaces (APIs) to enable cloud-native batch workflows and jobs, and provides an interesting transition of Kubernetes from primarily a stateless, service-oriented architecture to one that can support states and a desired completion of work. The first stable release [16] of the Job controller marked an unofficial declaration of Kubernetes supporting what, at face value, looked like more traditional workflows from HPC. This development made the idea of deploying one of the world's top supercomputers in Kubernetes an achievable goal [17], and the needs of the cloud computing communities overlapped better with the needs of HPC than ever before.
In the high performance computing space, batch processing has a long history, and consequently the community has deep expertise [18]. The need to embrace traditionally more cloud-like features arguably came down to the demands of the workloads themselves. While a traditional HPC workload might be embarrassingly parallel, meaning running equivalent, scoped tasks across a homogeneous set of resources concurrently, modern workflows include the gamut from simulation to batch processing to artificial intelligence (AI) and services. A standard workflow is no longer a single run that writes to a shared filesystem, but rather an assortment of tasks that vary in their needs for hardware, storage, and running times. Modern workflows are typically provided via directed acyclic graphs (DAGs) that indicate not only the order of execution, but also the utilization of entirely different architectures, services, and virtualization technologies. Indeed, the high performance computing community needed the same portability, flexibility, and automation for these workflows afforded by cloud, spanning both applications and services.
It would be in the best interest of cloud communities to learn from and take on the best technological innovations from HPC,
and vice versa. Thus, this landscape with overlapping interests was a spark for collaboration, and the time was right for the convergence of not just these communities, but the technologies themselves. This collaboration, or the convincing of one community to engage with the other, is arguably more challenging than the development work itself. When approaching those on the HPC side, discussions that suggest using cloud very quickly turn to the matter of performance, costs [19, 20] and security [21]. Approaching the cloud side, there is often lack of understanding for how high performance computing technologies might be useful or needed. In the case of the first, a simple solution that resolves many concerns is the often forgotten reality that cloud setups can be public or private. A technology such as Kubernetes could be deployed on-premises. In the case of the second, arguably the cloud computing community needs convincing that they too can benefit from the adoption of HPC technologies. Increasing performance and efficiency by using techniques from HPC, and providing better transparency of underlying resources by way of low-level performance analysis, would both lower costs and time to completion. A solution that falls in the middle would likely bring together the best of both worlds, but could come with compromises such as paying a performance penalty for flexibility. Ideally, a converged approach would be able to most effectively use hardware and improve performance, with increased flexibility offsetting any potential compromises for said performance.
Anticipating a desired future where cloud and high performance computing communities are collaborating and developing solutions together, the challenges stated above can be inspected to understand what is needed for both sides. First, a solid demonstration is needed that there are benefits for both cloud and HPC to take on attributes of the other side. For HPC, this means more modularity, portability, and automation. For cloud, this means more performant workflows, efficient use of hardware, schedulers, and communication protocols that span networking, applications, and storage. Secondly, examples of such technologies must be prototyped that can bring together the best of both worlds - the performance of HPC and the flexibility of clouds. This vision, or the space of technologies that exists between cloud and HPC, can be described with the term "converged computing" [22, 23]. In a converged computing landscape of the future not only will technologies from traditionally disparate communities be brought together, but traditionally disparate communities will be united culturally to identify shared goals and foster deeper, more meaningful collaborations.
While many areas of work can be tackled, it was logical to start with a workflow manager analogous to Kubernetes in the high performance computing community, with the common use case of running simulations or machine learning tasks. The Flux Framework, a novel hierarchical framework for resource management and scheduling, provides similar abstractions that parallel those in Kubernetes, including modularity, well-defined developer and user interfaces, and an ability to integrate and manage different types of resources. It stands out from other resource managers because of its ability to manage the exploding complexity of modern workflows described previously. To start with modularity, in the same way that several components are combined to create Kubernetes [24], components from Flux Framework [25] are assembled together to manifest in a workload manager that is referred to simply as "Flux." This modularity might mean, for example, that a component of Flux could be used in Kubernetes, or vice versa. For developer interfaces, arguably a core ingredient of convergence is having common programming languages or bindings. Kubernetes, as it was developed at Google, chose to use the Go programming language, also designed at Google [26, 27]. Flux also provides a rich landscape of language bindings, one of which is Go. These shared interfaces and modularity make convergence possible.
Given the overlapping need to schedule jobs, the first work in this space was to integrate the Flux scheduler "Fluxion" as a plugin scheduler for Kubernetes called "Fluence" [23]. The rationale for this early work was that Flux could benefit Kubernetes in several ways. Firstly, Flux represents and schedules resources via directed graphs, which is notably different from the default Kubernetes scheduler that selects work for nodes based on a feasibility score [28]. In fact, Flux was created to address significant limitations of traditional HPC resource managers and schedulers by facilitating workload portability, handling high throughput job requests, and employing sophisticated techniques for flexible and fine-grained task placement and binding [29]. Enabling the Flux scheduler in Kubernetes would bring this same graph-based and hierarchical resource-aware approach to Kubernetes, and this early work demonstrated exactly that - improved performance against the default Kubernetes scheduler [30]. More efficient scheduling was demonstrated by enabling MPI-based CORAL2 workloads to run and scale in Kubernetes that, by way of Fluence, avoided pathological resource mappings and resource starvation [22]. This work also demonstrated a valuable point - that the scheduling provided by a workload manager must be able to concretely meet the resource demands of a workflow, and to do so efficiently and effectively to maximally utilize a set of computational resources.
Aside from the technological benefits that might come from convergence of these two specific technologies for end- and developer-users, enabling Flux to run in the cloud would also provide benefits for cloud vendors attempting to attract a larger HPC customer base. Current products that target HPC researchers [31, 32, 33, 34] arguably serve as training wheels to help with a transition to the cloud. Other products that do not deliver a familiar command line interface would require an on-boarding process. The realization that workflows could be seamlessly portable by way of Flux, and that Flux could serve as a vehicle for the workflow user-base to move between cloud and HPC, inspired the next round of work discussed in this paper. By making the full Flux workflow manager, with all components assembled, available in Kubernetes, workflow specifications that work on HPC with Flux would also work in a cloud environment with Flux. For cloud vendors, the HPC user base could more easily make a smooth transition to using cloud too.
This paper introduces the Flux Operator [35], the next step in work to explore integration of a traditional HPC scheduler within Kubernetes. The Flux Operator is a Kubernetes operator [36] that handles all the setup and configuration of a fully fledged Flux cluster inside of Kubernetes itself, allowing the user to efficiently bring up and down an entire HPC cluster for a scoped set of work that optimizes for important aspects of HPC. This paper first reviews the design and architecture of the Flux Operator (Section 2), discussing Kubernetes abstractions for efficient networking and node setup. Discussion then
moves into how these design decisions impact essential needs such as workflows that use message passing interfaces (MPI), and experimental features like scaling and elasticity (Section 3). Finally, experimental work shows the Flux Operator having superior performance over the MPI Operator, the primary available option at the time for running MPI workflows (Section 4). The paper concludes with discussion for anticipated future work, considerations for workflow design, and vision for the future (Section 5).
## 2 Architecture
This section details the architecture of the Flux Operator, first describing the design and needs of Flux, and how those are satisfied in Kubernetes. From the standpoint of a software architect, the task of designing the Flux Operator could be approached as a problem of pattern matching. Knowing that Kubernetes provides a set of components [37] and application programming interfaces [38], a key challenge was to assemble the components in a way that would deploy the full Flux Framework stack running inside of Kubernetes. An ideal design might aim to achieve the keystone properties of Kubernetes applications, including but not limited to fault tolerance, load balancing, and elasticity [39]. Abstractions for storage [40], networking [41], and workloads [42] could be selected for this design, and with a mindset of portability, meaning that the software components would be in containers themselves [43]. The following sections refer to two roles - an operator developer, or someone that designs and implements the Flux Operator itself, and an operator user, an individual that installs the operator and uses it. These architecture sections start with an overview of Kubernetes Operators, and then describe each component of Flux, and a mapping from traditional bare metal solutions to abstractions in Kubernetes.
### Kubernetes Operators
While individual components such as pods or services [42, 41] could be individually implemented and created in the Kubernetes cluster, the advent of programmatic operators [44] in 2016 has hugely simplified this process for the developer user. A Kubernetes operator serves as a controller for one or more Kubernetes objects, meaning that a developer can express all of the custom logic needed for an application of interest in code that is compiled and run in Kubernetes [45]. The operator implements a custom resource whose behavior is dictated by a custom resource definition (CRD), a YAML file with a specification of variables for the controller to use [46]. For the Flux Operator, this custom resource is called a "MiniCluster" [47]. The basic design of a controller is a loop, running a reconciliation process until a cluster reaches a desired state requested by a user via this custom resource definition YAML specification [36]. This is a declarative model [48] in that the operator user can specify high level constructs such as the cluster size and application to run, and they don't need to know the details of setting up a Flux cluster, nor do they need to consider orchestration or update of components. This approach is advantageous in that it exposes only the amount of detail that is needed for some number of jobs, and the complexity that would require niche expertise is hidden.
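To make the declarative model concrete, the sketch below creates a MiniCluster custom resource with the official Kubernetes Python client. The API group, version, and field names here are illustrative placeholders based on the description in this paper; the authoritative schema is the CRD published by the Flux Operator itself.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Illustrative MiniCluster manifest; field names are assumptions for this sketch.
minicluster = {
    "apiVersion": "flux-framework.org/v1alpha1",
    "kind": "MiniCluster",
    "metadata": {"name": "flux-sample", "namespace": "flux-operator"},
    "spec": {
        "size": 4,  # desired number of Flux broker pods (one per node)
        "containers": [
            {"image": "ghcr.io/example/app:latest",  # hypothetical application image
             "command": "flux submit hostname"}       # hypothetical command
        ],
    },
}

# The operator's reconcile loop sees the new object and drives the cluster
# toward this desired state (indexed Job, headless service, config volumes).
client.CustomObjectsApi().create_namespaced_custom_object(
    group="flux-framework.org",
    version="v1alpha1",
    namespace="flux-operator",
    plural="miniclusters",
    body=minicluster,
)
```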
### Flux Framework
Flux Framework is a novel, graph-based resource manager that is typically deployed on-premises at HPC centers [29]. It won an R&D 100 award in 2021 [49, 50], and is currently stated to be the primary scheduler for the upcoming exascale-class system "El Capitan" at Lawrence Livermore National Laboratory [51]. It is called a framework because several projects combine together to form the resource manager known as Flux. A summary of these projects is described in Table 1, and the interested reader is directed to the learning guide [52] for an in-depth overview.
While Flux can be described in terms of its modules or components, for the work here it will be described as it is seen in the space of Kubernetes abstractions.
#### 2.2.1 A Flux MiniCluster
**The Node.** Flux is typically deployed across nodes on an HPC cluster, where each node can be thought of as an addressable compute or storage unit, and as having a unique network address. Moving into the space of cloud, the physicality of the server goes away, and instead the basis of a node is a virtual machine. However, while Kubernetes itself is deployed on nodes, the notable object is the pod - an abstract slice of a node that is tasked with some unit of work to run one or more containers, and allocated a particular set of resources [53]. Since the Kubernetes scheduler has no issue slicing up a single node into many pods, the first task in defining this cluster was to ensure that there was a mapping of one pod per actual physical node. The reason for this mapping is due to Flux not being able to detect running on a partial node, which is a result of its use of the portable hardware locality (hwloc) [54] library to discover resources. The hwloc library can only detect the resources of an entire host [55], which in the context of two pods running on one node, would erroneously discover the same set of resources twice, double what is actually available. In practice, this would mean that Flux could schedule too much work on a single physical node that, to the resource manager, is seen as two separate nodes with identical resources. The 1:1 mapping of pods to nodes was originally achieved by way of a resource specification, a strategy that required the user to ask for just under the upper limit of CPU and memory offered by their cloud instance of choice [56]. This strategy was later improved to not require user expertise by way of rules for pod affinity and anti-affinity. These are essentially constraints that tell the scheduler to ensure one pod per node, each with a hostname for Flux [57].
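As a sketch of the anti-affinity strategy, the stanza below (shown as the dictionary the Python client would serialize into a pod spec) makes pods that carry the MiniCluster's label repel one another at the hostname topology level, so the scheduler places at most one broker pod per node. The label key and value are assumptions for illustration.

```python
# Hypothetical label key/value; the operator would set its own labels.
anti_affinity = {
    "podAntiAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": [
            {
                "labelSelector": {
                    "matchExpressions": [
                        {"key": "app.kubernetes.io/name",
                         "operator": "In",
                         "values": ["flux-sample"]}
                    ]
                },
                # At most one pod with this label per unique kubernetes.io/hostname.
                "topologyKey": "kubernetes.io/hostname",
            }
        ]
    }
}
```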
**The Cluster.** While it would be possible to deploy individual pods onto nodes, early Kubernetes offered further abstractions for sets of pods such as deployments [58] and Stateful or Replica
\begin{table}
\begin{tabular}{l l} \hline \hline Project & Description \\ \hline flux-core & core services \\ flux-pmix & flux shell plugin to bootstrap OpenMPI v5+ \\ flux-sched & Fluxion graph-based scheduler \\ flux-security & security code \\ flux-accounting & user bank and accounting \\ \hline \hline \end{tabular}
\end{table}
Table 1: Flux Framework Projects
sets [59, 60], and in 2015, an abstraction called a Job that was the first of its kind to emulate a traditional HPC job with the intention to run and complete [61]. As of 2021, the batch working group introduced the indexed mode addition to Job [62] where the same work could be done in parallel, expecting 1 to \(N\) completions [63]. Because each node of a simple Flux cluster would be identical aside from subtle differences in startup, an indexed job was ultimately chosen, although other abstractions were considered. The indexed job is ideal in that it inherits needed features from the base Job, such as having states, and adds an ability to create duplicates of the pods. To create pods it uses a batched approach [64], which is also advantageous in introducing an indexed ordering that ensures the index 0 is cleaned up last. This allowed us to easily design the operator to use the index 0 pod as the lead broker, and any scaling up or down of the cluster (Section 3.1) would never risk deleting the lead broker. Within this cluster context, given the assignment of one pod to one node, for the remainder of this paper the terms "node" and "pod" are used interchangeably as they are mapped to the same resources, memory and CPU.
**Networking.** While it might be thought that the core of Flux is the project "flux-core," one of the foundational components of Flux Framework is the scalable tree-based overlay network, or "TBON," that connects the core modules for scheduling and resource management. The Flux TBON is a rooted tree [65] that features leader and follower processes (brokers), each of which is typically mapped to one node. The leader expects follower brokers to connect to it. Mapping this design to the indexed job, the role of lead broker can be assigned to index 0, and the follower brokers to indices 1 through \(N\). Along with being easy to remember, this design decision allows pods to be created in order of their index with the lowest first [66], which is ideal for having the lead broker up early for the follower brokers to find. The initial networking of the cluster is done with ZeroMQ [67], where the follower brokers identify their place in the cluster by way of their rank in a shared system configuration file, and then connect to the lead broker on a specific port via the transmission control protocol (TCP) [68]. If the lead broker is not up first, the follower will wait and try connecting again, but by default the ZeroMQ library falls back to a TCP retry timeout that increases exponentially [69]. In practice this means delayed cluster startup times while waiting for the follower brokers to retry. The scheduler and resource manager combined with this set of brokers that can communicate to run jobs is called a Flux instance [70]. A Flux instance can be on the system level, meaning shared by many users, or owned by a single user. In both cases, the Flux instance handles user requests for submitting or otherwise interacting with jobs. The instance itself is hierarchical because it can spawn sub-instances whose resources are a subgraph of their parent.
The above networking setup can give each pod a unique address that can be written into the Flux system configuration and used to identify the lead and follower brokers. For the first naive implementation, the Flux Operator created the pods, retrieved the IP addresses after creation, and then added corresponding entries to the "/etc/hosts" file for DNS resolution. Automated management of the hosts file proved to be a bad design because it required restarting all the pods. Instead, a later design created a headless service for the MiniCluster [71], meaning that each pod could be given a label, a key-value pair, that was known to the headless service; discovering the labeled pod would add it to the network provided by the service. The headless service can then also assign a predictable hostname, which is essential for Flux to identify it. This simplified the creation of the cluster, and allowed the networking to be ready as soon as the service object was ready. Once this is done and the brokers have connected over TCP, further communication for the overlay network is done via ZeroMQ sockets [29]. However, for workflows that use a message passing interface (MPI) [72], Flux has built-in modules for MPI and communication, meaning that Flux simply uses standard MPI libraries that can rely on sophisticated networking fabrics or other high-speed interconnects [73]. This reliance on cloud hardware has proven to be a challenge when deploying the Flux Operator to different cloud providers, and is a focused area of collaborative work.
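A sketch of the headless service design, under the same illustrative naming assumptions:

```python
# Headless Service sketch: no virtual cluster IP, so DNS resolves the
# individual labeled pods, giving each a predictable hostname (pods
# must also set "subdomain" to this service name in their spec).
from kubernetes import client

def headless_service(name: str) -> client.V1Service:
    return client.V1Service(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1ServiceSpec(
            cluster_ip="None",       # the literal string "None" marks it headless
            selector={"app": name},  # pods with this label join the service
        ),
    )
```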
**Volume Mounts.** In terms of configuration, Flux requires a system configuration file, a ZeroMQ CURVE certificate used to encrypt communication [74], and some means to start the brokers. In a traditional HPC setup, this means could be a system service [75]; in a Kubernetes environment with containers, it means a start command for the container with conditional logic for the lead vs. follower brokers. In Kubernetes, all of the above configuration can be achieved via volume mounts provided via ConfigMap [76] objects. By mounting configuration files and other needed files to each pod container as read-only volumes, all nodes in the cluster have access to them. These are mounted at /etc/flux and /flux_operator for configurations and the starting script, respectively, and the choice of a root path affords discoverability.
The CURVE certificate presented a bootstrapping design problem, as the standard way to generate it is usually via Flux itself (ZeroMQ is compiled within and exposed via the flux keygen command). However, this content was also required to exist for the read-only volume before starting the pod container. For the earliest design of the Flux Operator, a one-off certificate generator container was brought up that ran this key generation command, and the key was printed to the log to be retrieved by the operator. It could then be written into the ConfigMap to be shared by the MiniCluster pods. In a later design, by way of collaboration with authors of this paper following Kubecon Amsterdam '23 [77], this bootstrapping problem was further improved by compiling ZeroMQ directly into the Flux Operator and using cgo [78] to interact with it directly to generate the certificate content for the ConfigMap inside the operator. This removed the entire step of generating the one-off pod, and is a beautiful example of how sharing ideas and collaboration can lead to improvements in design and functionality.
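The same bootstrapping step can be sketched in Python with pyzmq, which exposes ZeroMQ's CURVE key generation directly; this is for illustration only, as the operator performs the equivalent in Go.

```python
# Generate a ZeroMQ CURVE keypair without a running Flux broker.
# Requires pyzmq built against a libzmq with CURVE (libsodium) support.
import zmq

public_key, secret_key = zmq.curve_keypair()  # Z85-encoded bytes
# This material can then be written into a ConfigMap and mounted
# read-only into every pod before the brokers start.
print(public_key.decode("ascii"), secret_key.decode("ascii"))
```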
**A Flux Container.** The libraries and software needed for Flux, along with configuration steps, must be present in a common Flux container that is replicated by the indexed Job. This container would need to come pre-built not only with Flux and needed modules, but also with any application of interest to be run by the user. This is a design flaw in that most containerized applications for HPC have not been built with Flux, and would need to be built again. While the HPC community is attuned to building and optimizing components for different architectures, ideally a more modular, cloud-native solution would not require investing time to do that. Once required software and configuration files are present, setup continues to create either a single-user or site-wide installation of Flux. The Flux Operator opts for a single-user design, and enables customization via variables exposed on the CRD. This customization includes (but is not limited to) archiving of data, creating multiple users, starting in an interactive mode, starting a RESTful application programming interface, or creating a custom batch job. The final component of the container is the "entrypoint," or the command that is run when the container is started. This varies between the lead and follower brokers, where the lead broker typically starts with a command to run a job, and the follower brokers start expecting to connect and receive work.
## 3 Features
Once it was possible to run and complete a basic workflow (discussed in Section 4), development thinking moved toward adding desired features for such a workload manager in Kubernetes. This section will describe early work to enable scalability and saving state, elasticity and auto-scaling along with workflow integration. These features for workflows, along with the core design of the Flux Operator, are considered experimental in the sense that they are implemented with the goal of testing and improvement in mind. The below represents a sample of this work, and more experiments can be found in the examples directory of [https://github.com/flux-framework/flux-operator](https://github.com/flux-framework/flux-operator).
### Saving State
The goal of the experiments to save state was to start a Flux MiniCluster, run some number of jobs, pause them, save the state of the job queue, and then bring the cluster down to bring up a differently sized cluster (larger or smaller) to load the jobs into, where they would continue running. This concept of saving state is similar to forensic container checkpointing [79], an experimental idea for Kubernetes, and would be useful for pausing workflows for cost savings or waiting on resource availability. These experiments varied based on when the queue was paused. In the earliest tests, job completion was required before saving state, while for later tests, jobs were stopped mid-run.
In practice, saving state meant waiting for the queue, pausing it, and then saving to an archive in a shared volume between two MiniClusters. The operator then waited for the first MiniCluster's pods to terminate, and for the new MiniCluster to come up and restore the jobs. It was observed that jobs would successfully save and load into the new cluster, maintaining job identifiers and size; however, when stopping a running queue, 1-2 jobs could be lost during the transfer. While the reason for this loss would be interesting to understand, as this is an experimental prototype for a feature, the work is beyond the scope of this paper and, akin to other features discussed here, should be pursued with a compelling research use case. When that time comes, more analysis would be needed to understand exactly what is going on. The majority of jobs (e.g., a rough estimate of 9 out of 10) transition nicely, meaning that a job on the previous queue can get scheduled to a new larger or smaller cluster. As would be expected, if a job is moved onto a cluster lacking enough resources, it is logically not schedulable. A write-up and tutorial to reproduce this work is available [80].
While this early work to save state was simple, it was a glimpse into the idea that scheduled workflows could in fact be moved. In changing the size of the resources available by way of creating a new MiniCluster, it was the earliest prototype for what might be called scaling or elasticity, discussed next.
### Elasticity
Elasticity can be thought of as automated dynamic scaling [81]. Instead of making a cluster larger or smaller by way of saving state and loading into a different size, true elasticity means changing the size of a single cluster, which in the context of the Flux Operator means that Flux must adapt dynamically. To accomplish this, Flux ideally needed support for resource dynamism; however, it did not have it. Short of that, there was a way - one that might be considered a hack - to get this to work. The following steps enable an elastic Flux MiniCluster:
* A max size variable is added, meaning more nodes are defined in the system configuration file than actually exist.
* Flux is told to create a cluster at a size that is between 1 and this max size.
* Any change request to the CRD (from a user or API) validates the request, and updates the indexed job.
* An update to increase in size creates new pods.
* An update to decrease in size terminates pods.
The above also carries the constraints that the cluster cannot be smaller than one node (only a lead broker) or larger than the initial "maxSize." The larger indices are terminated first, and the operator does not allow reduction to size zero, so the lead broker is never at risk of deletion - such a request would delete the entire MiniCluster that relies on it. The reason this works is that Flux sees the initial set of pods that do not exist as simply being down, which happens frequently in a high performance computing environment. When the nodes are created, their corresponding follower brokers start, ping the lead broker on the port to connect (typically port 8050), and then seamlessly join the cluster. From the standpoint of the user, they change the "size" value in their MiniCluster CRD, apply it, and then see their cluster grow or shrink; a minimal sketch of this resize operation is shown below. The Flux instance run by the lead broker simply sees a node come online. On the Kubernetes side, this ability for the indexed job to have elasticity requires a minimum Kubernetes version of 1.27 [82].
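A minimal sketch of the resize operation, assuming the backing indexed Job is patched directly; the operator performs the equivalent update through its controller logic, and the names here are illustrative.

```python
# Resize sketch: patch parallelism/completions of the backing indexed
# Job so Kubernetes creates or terminates pods. Elastic indexed jobs
# require Kubernetes >= 1.27.
from kubernetes import client, config

def resize(job_name: str, namespace: str, size: int, max_size: int) -> None:
    if not 1 <= size <= max_size:
        raise ValueError("size must stay within [1, maxSize]")
    config.load_kube_config()
    batch = client.BatchV1Api()
    patch = {"spec": {"parallelism": size, "completions": size}}
    batch.patch_namespaced_job(name=job_name, namespace=namespace, body=patch)
```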
At this point, it needed to be decided what might trigger this change in size. Elasticity was implemented in two ways, first with an application-driven approach [83] that required extra permissions to be given to the in-cluster service account, allowing the application inside the cluster to ask for more or fewer pods directly. It was then discovered that Kubernetes has autoscaling APIs intended for this use case. This autoscaling approach is discussed in the next section.
### Autoscaling
In Kubernetes there are two types of scaling - horizontal and vertical. Horizontal typically refers to adding pods, while vertical refers to adding resources to existing pods [84]. Both are based on the idea that resources should change in response to changing
workload needs - if a cluster or its resources are too big, they are made smaller, and vice versa. In the case of the Flux Operator, the primary interest was horizontal auto-scaling, or changing the number of pods to dynamically increase or decrease the size of the MiniCluster to respond to the demands of a workload. This led to a first attempt to implement horizontal pod auto-scaling (HPA) [84] using the HorizontalPodAutoscaler API resource, a cluster API to watch a selected set of pods for resource consumption and increase or decrease their number depending on utilization. For the simplest cases, a default autoscaler was first deployed that considers a metric such as percent CPU usage and uses an algorithm [85] to calculate a target scale value for the number of pods. This could be tested by running a CPU-intensive simulation [86] to observe the autoscaler adding and removing pods. However, the approach was not fine-tuned enough to the potential needs of an application being run by Flux. Instead of an arbitrary decision to add or remove pods based on CPU, a design more specific to Flux is warranted. As an example, one design might be that the Flux lead broker makes decisions about when and how to scale depending on the content of the queue. Another valid design would be to allow for changing the size of a single running job. Both of these ideas, and more generally designs for autoscaling, are valid and prime for future work.
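The default scaling rule referenced above is documented by Kubernetes and translates directly into a one-line calculation:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    # Kubernetes HPA rule: desired = ceil(current * currentMetric / targetMetric).
    # E.g., 4 pods at 90% CPU with an 80% target -> ceil(4 * 90 / 80) = 5 pods.
    return math.ceil(current_replicas * current_metric / target_metric)
```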
With this in mind, a custom metrics API was implemented [87], meaning an equivalent API endpoint controller that would be called by an autoscaler with instructions for how to scale the cluster. This resulted in the Flux metrics API [88], a standalone API that runs directly from the lead broker pod and provides decisions about scaling up or down based on the size of the queue or other metrics about it. With this API, it was possible to demonstrate an auto-scaling operation running based on a trigger coming directly from Flux. More work will be needed to test this setup with real workflows. In the meantime, more details about this setup and basic elasticity are available in an external post [86].
One notable feature of the implementation of the autoscaling approaches described above is that regardless of whether the request comes from a user changing a value in a file, from an application, or from a programmatic autoscaler, the same internal logic (functions) is used to validate and then perform the patch.
### Multi-Tenancy
Multi-tenancy refers to the ability to support multiple users on the same resources. This is not a common design in Kubernetes, as ownership of resources is typically designated by namespaces, custom permissions on connected resources like storage, and role based access controls (RBAC) [89]. Recognizing these challenges, as an early approach there are several modes of interaction:
* Single user: the user owns an entire MiniCluster, and uses the default Flux user in the container
* Multiple users: controlled via PAM [90] authentication
* Multiple users: controlled via RESTful API access
In anticipation of the last two cases that implement multi-tenancy, a RESTful application programming interface (API) [91, 92] was designed that runs from the lead Flux broker pod, and thus exposes interactions with Flux to submit, get info for, cancel, and otherwise interact with jobs via the Flux Python bindings exposed by the API. This is made possible by exposing the internal port that the API is running on via an external NodePort [93]; with port forwarding [94], an external client outside of the cluster can interact with it.
In all cases requiring authentication, the Flux RESTful API uses an OAuth2-based approach, storing a database of user identifiers and encoded passwords [95]. It first authenticates using a base64-encoded username and password (typical of a basic authentication scheme [96]), and then provides the user with an expiring token that can be used for further interaction. In the case of a single Flux user behind a multi-tenant API, this authentication and authorization happens, and then all users submit jobs to a shared queue. In the case of truer multi-tenancy with PAM, the custom resource definition asks for usernames (and optionally, passwords) in advance, and then creates the accounts on the system, which are checked after authentication. The installation of flux-accounting [97] can then be enabled for the lead broker's queue, and use a traditional fair-share algorithm to determine job priority [97]. This work can be extended with more cloud-native approaches that take advantage of namespaces and roles, such as is described later (Section 3.6).
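A sketch of this two-step flow with the Python requests library; the port and endpoint paths are illustrative assumptions, not the Flux RESTful API's documented routes.

```python
# Two-step auth sketch: basic auth to obtain an expiring token, then
# bearer-token requests. Endpoints and port are placeholders.
import requests

base = "http://localhost:5000"  # e.g., reached via NodePort plus port-forward

# Step 1: base64-encoded username:password (handled by auth=).
token = requests.post(f"{base}/token", auth=("fluxuser", "s3cret")).json()["access_token"]

# Step 2: interact with jobs using the expiring bearer token.
jobs = requests.get(f"{base}/jobs", headers={"Authorization": f"Bearer {token}"})
print(jobs.json())
```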
### Bursting
To complete the early work in autoscaling, the concept of bursting was considered [98], which means not just extending the size of a local cluster, but actually extending work to external resources. The bursting work for Flux would extend this approach to not just deploy external resources, but allow the lead broker to connect to brokers that are deployed in the other clusters. As an example, a Kubernetes cluster running on Google Cloud might burst to a cluster running on Compute Engine (CE), or to a cluster on Amazon Elastic Kubernetes Service (EKS).
To implement a prototype for bursting, a simple design was chosen first. A plugin service runs from the lead broker of a primary cluster, and the running user loads one or more bursting plugins into it. Each bursting plugin is targeted to a particular provider (e.g., EC2 or CE). While there are many ways to trigger a burst, a simple approach of looking for an attribute "burstable" set to true on a job was chosen first. This request can be made on the command line. Upon discovery of this attribute, the bursting service attempts to schedule the job with the plugin. Each plugin is free to decide if the request is satisfiable by its own custom terms. If the burst is satisfiable, the job is assigned to the bursting plugin, and the plugin creates a new cluster or assigns the job to an existing cluster. In the case of creation, the technique of telling the primary cluster that there are more nodes than actually exist (which start down) is used, assigning them namespaced hostnames that will correspond to the bursted cluster. The calls necessary to bring up the second cluster are then run, which might mean deploying Terraform [99] configuration files or creating a second Kubernetes cluster via API calls, and then the cluster starts just as a local MiniCluster would. The key difference, however, is that the lead broker of the primary cluster is exposed as a NodePort [93] service that can be discovered by the external cluster. The secondary brokers, all followers, then come up, find their hostnames in the ranked system configuration, and connect
to the lead broker IP address from another cluster. To the user, they simply see that the nodes are down, and then come up when the cluster is ready. Jobs scheduled on the primary broker queue that previously could not run due to insufficient resources can then run. At the time of this writing, the main bursting service is implemented along with four bursting plugins, one for each of GKE, EKS, CE, and a local burst [100].
Finally, the bursting service is designed to be modular and flexible. Aside from being able to load different plugins, it allows for customization of the function provided to select a burstable plugin, to interact with the queue, and to select jobs. A mock version of a Flux job is also available for development. The work in bursting is still early, and akin to elasticity, work on these prototypes should continue to eventually develop more hardened Flux modules and algorithms for bursting.
### Workflow Integration
Running a single MiniCluster to create an isolated Flux cluster in Kubernetes is a good first step, but not sufficient for real-world use cases of complex workflows. While it would be possible to shell into or otherwise interact with the cluster and run a workflow tool that implements Flux as an executor [101, 102, 103], this also does not enable features needed for complex, heterogeneous workflows that might require different sizes or configurations of MiniClusters. For this reason, the authors of this paper have started to think about how to integrate the MiniCluster custom resource definition as a first-class citizen into workflow tools.
After the Kubecon Amsterdam '23 presentation, collaborators (including author AC) were quickly motivated to add the Flux Operator as a job type to Kueue, a Kubernetes-native job submission operator that handles managing multi-tenancy of Kubernetes batch jobs that can be produced by a number of operators [104]. A similar approach is being developed to have a workflow tool control the creation and management of the MiniCluster, an idea being implemented into the Snakemake [101] workflow tool as a Kueue executor plugin [105]. Defining even an example workflow for high performance computing is a non-trivial problem, as many codes are either private or not portable. This initial work with the Flux Operator is hopefully setting the stage for doing more exploratory work for integrations of this type. Tackling this early problem is a two-fold challenge: to design technologies and to inspire more collaborative opportunities for the HPC community.
## 4 Experiment
The Flux Operator is compared to another state-of-the-art Kubernetes operator, the MPI Operator, which at the time of the experiments was considered the main option in the field for running MPI workloads [106, 107]. The MPI Operator started as part of the Kubeflow project and defines an "MPIJob" custom resource [108]. Unlike the Flux Operator, which coordinates between brokers with ZeroMQ, the MPI Operator coordinates workers via secure shell (SSH) [109]. It requires an extra launcher node that serves the sole purpose of coordinating the workers and, akin to the Flux Operator, uses dedicated hostnames with an equivalent headless service. This launcher node could conceptually be thought of as analogous to the lead broker of the Flux instance in that it serves as an entrypoint for submitting jobs; however, the main difference is that the Flux lead broker is considered part of the cluster and performs work. The MPI launcher node is not, and in practice this means the user will always need to incur the cost of an extra node just for the launcher.
### Methods
Experiments were conducted on Amazon Web Services Elastic Kubernetes Service (EKS) [110], using the hpc6a.48xlarge instance type [111] with the Elastic Fabric Adapter (EFA) [112], intending to test the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [113] in a strong-scaling configuration across cluster sizes and rank counts of 64/6016, 32/3008, 16/1504, and 8/752, respectively (Figure 1). This design was chosen to mirror previous work [30]. LAMMPS is used by the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) as a representative scalable science benchmark as part of CORAL-2 [114]. In testing, LAMMPS runs a molecular simulation [115] in parallel on MPI ranks (processes) across nodes, and a problem size of 64x16x16 was chosen that would adequately test strong scalability across the chosen rank and node counts. LAMMPS scalability depends on network latency, and the experiment results report the total wall time recorded by the LAMMPS run as a metric for performance. A cluster setup that enables lower latency will run faster, and ideally the simulation should get faster as the number of nodes is increased with strong scaling. A second metric of interest is the time for the launcher to submit and complete a job. For Flux this means timing the flux submit command that is given the command to run LAMMPS, and for the MPI Operator it means timing the mpirun command that does the same. The final metric of interest was the total cluster creation and deletion time, which can be calculated as the total runtime of the MiniCluster minus the LAMMPS total wall time. This time includes each pod preparing the broker, starting Flux, and networking with the lead broker. The runtime would ideally decrease across these chosen rank and node sizes.
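A sketch of how such launcher timings can be captured; the LAMMPS arguments are elided, and the wrapper simply wall-clocks the launcher from submission to completion.

```python
# Wall-clock a launcher command such as ["flux", "submit", ...] or
# ["mpirun", ...]; check=True raises if the launcher fails.
import subprocess
import time

def time_launcher(cmd: list) -> float:
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start
```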
To ensure the nodes of the cluster are consistent and do not influence results, experiments were run on the same Kubernetes cluster, and simply used smaller portions of it. As the MPI Operator requires an extra launcher node, the maximum cluster size needed (64) was increased by 1, resulting in a size 65 node cluster for these experiments. Finally, a modified version of the MPI Operator was used [22] to allow it to scale to over 100 MPI ranks.
The experiments proceeded as follows. The main Kubernetes cluster of size 65 is first created. Then, for each of the Flux Operator and MPI Operator:
1. Launch Job / create MiniCluster for sizes 64, 32, 16, 8
2. Run LAMMPS x 20
3. Record timings and save all configuration files and logs
For each experiment run, a single "throwaway" run is first performed to pull the container with Flux and LAMMPS to the node, where it is cached for further runs [116]. This ensures that time recorded in creating the MiniCluster does not include pulling the container image, which would have variability depending on the image size. The experiments are then run in an automated fashion using Flux Cloud [117], a simple experiment orchestration tool for running experiments with Flux MiniClusters on Kubernetes. All experiment code, configuration files, and tagged containers [118, 119] are available [120].
### Results
**Cluster Creation and Deletion.** Bringing up and down clusters of sizes 8, 16, 32, and 64 across 20 runs each, weak linear scaling was observed (Figure 2), indicating that the indexed job could efficiently create pods. All sizes were created and ready in under a minute, with variability of approximately 5 seconds. This variability likely reflects the slowest or last node to come up, as the cluster is not considered completely up until all nodes are ready. This result was surprisingly good, as creation times could have grown far more steeply with cluster size.
**Launcher Times.** Comparing launchers (flux submit for the Flux Operator, and mpirun for the MPI Operator), there is a slightly larger time difference (Supplementary Figure 5), where both generally perform well under strong scaling (the time goes down). What is unknown is whether there is an inflection point at larger scales where the MPI Operator might plateau or otherwise show a different pattern. The resources were not available to run these larger experiments at the time, but these patterns and scaling can be explored in future work.
**Design Considerations.** Anticipating interest in running experiments of this type, where there is generally some operator in Kubernetes that is going to pull one or more containers to Kubernetes and perform scoped work, a visualization of the steps that have salient times is provided, and the interested reader is encouraged to think about them (Figure 4).
In Figure 4, there is a distinction between an operator setup that is using autoscaling (right) vs. not (left), and costs that are incurred once (blue) vs. repeated (green). Cluster creation generally means the start and setup of instances, along with any networking and devices that are needed. The primary difference between the two scenarios is that an autoscaling cluster is going to be adding new nodes, meaning that the cluster will need to provision those nodes, and the user will be required to wait. Thus, this update process for the cluster, which requires the user to wait (and pay for the time), becomes a repeated cost. This also means that a typically one-time cost of pulling a container may occur several times, primarily when new nodes are added. Note that this diagram assumes experiments running on one cluster. An autoscaling setup that employs bursting to new clusters would need to consider the additional time of creating and deleting the new clusters.
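As a back-of-envelope model of this distinction, assuming each scale-up event re-pays node provisioning and the container pull on the new nodes; all durations are placeholders.

```python
# One-time vs. repeated costs (arbitrary time units).
def static_total(create, pull, setup, runs, run_time, delete):
    return create + pull + setup + runs * run_time + delete

def autoscaled_total(create, pull, setup, runs, run_time, delete,
                     scale_events, provision_per_event):
    # Each scale event turns provisioning and the container pull into
    # repeated costs that the user waits (and pays) for.
    return (create + pull + setup + runs * run_time + delete
            + scale_events * (provision_per_event + pull))
```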
It is suggested to the reader to consider these times, along with the differences between a setup with autoscaling versus one without or potentially bursting, for future experiments when anticipating costs. Notably, the setup time for any particular operator could be generally consistent, and variance has been seen (but is not reported here) in the other steps between cloud providers. A more critical study and understanding of these times is warranted for future work.
## 5 Discussion
This work demonstrates improved performance using the Flux Operator for running an HPC workflow in a cloud-native environment. This early innovation comes with strengths, limitations, and desires for future work that include the topics of workloads and scheduling, storage, tenancy, and cost-estimation, among others.

Figure 4: Times to consider when performing experiments with a Kubernetes operator, from the point of creating the cluster to deleting it. For a single cluster without autoscaling (left) many of the operations are one-time costs (blue), while for an auto-scaling cluster that needs to provision new nodes and pull containers to them, the costs become repeated (green).

Figure 3: Total wall time, as reported by the LAMMPS software, between the Flux Operator and MPI Operator. The Flux Operator is consistently faster, a result that could be more impactful for longer running experiments.
A discussion of limitations and further hopes for innovation is needed for transparency of this work. First, it is a design flaw that the main execution container is required to have both Flux and the application of interest. This means that any user of the Flux Operator is required to rebuild their container in full to include Flux, which requires not only the work to do the rebuild, but some limited knowledge of Flux. While container requirements are provided alongside the documentation [121], this is not good enough. While the dependencies and complexity exist to enable advanced capabilities, the authors believe there are approaches that can improve upon this strict requirement.
A next limitation is the creation of the entire MiniCluster using a single indexed Job. While this is the ideal for the time being, as the indexed Job is released with core Kubernetes, an eventual refactor to use a JobSet [122] would be desired. A JobSet allows the same indexed Job to be used, but with better ownership of assets and an ability to define different groups of nodes, each as a Replicated Job under the same network namespace. Allowing for different sets of nodes would not only make it possible to separate logic between the lead and follower brokers, but also allow for the creation of a MiniCluster with different pod specifications mapped to different resource needs. JobSet would also allow for a better definition of a success policy, explicitly saying that the job is completed when the lead broker exits. Author VS has created a prototype using JobSet, and hopes to continue in this direction when it is considered for Kubernetes core.
For next steps of work on experimental features, the Flux software itself needs innovation to replace the set of hacks that were implemented. The ability to scale up and down dynamically without "registering" the non-existent nodes in the Flux system configuration is a good example, along with a more hardened approach for bursting that likely comes down to plugins written directly in C or C++ alongside Flux. The work in bursting especially is early and exciting, and will be continued in future work. Notably, the experiment application did not require the use of storage, and while there are several tutorials and examples for different cloud solutions, this is an entire area of the design that requires further work and thinking.
Another challenge is that of poor workflow consistency and reproducibility. While there are scattered workflow tools that are used by HPC centers (e.g., national labs), these authors consider much of the HPC community behind with respect to the reproducible workflow movement. Part of this work moving forward is to not only identify proxy applications and workflows, but also to containerize them, and make it quick for an interested party to run them easily in a cloud environment. Part of this work will not only be understanding how they work in containers and across a Kubernetes cluster, but also developing means to assess performance.
An understated challenge in the converged computing space is also culture and communication. As stated in the introduction, convincing one side to be open to ideas from the other is a non-trivial task. For basic communication, in a discussion between HPC and cloud community members, a simple term like "node" or "scheduler" can mean something different. This might be tackled through discussion, and the creation of a shared lexicon that allows for talking about comparable abstractions. A further challenge arises when looking at the means for communication. Academic groups tend to write papers (and industry groups less so), and developing software in research is made more complex by the publication incentive structure that wants to highlight new research results [123]. Practically speaking, both the HPC and cloud communities will need to meet one another half way. This might mean researchers presenting work at (traditionally) more cloud-oriented conferences and venues, or cloud developers participating in more traditionally research-oriented venues. For both, it means distributing knowledge through blogs and other common mediums. This work calls out to cloud vendors with an immense desire to work together. While it is understandable that there is a primary concern about direct comparison, there is a path for respectful collaboration, developing technologies that can work across clouds, and learning from one another.
These experiments were run on one cloud with a particular networking fabric and instance type, and at a maximum scale of 64 nodes, which is very small for traditional HPC. However, the recent work to train large language models [124] provides a common use case for needing scaled resources, and might allow for shifting incentives toward that. One of the most challenging decision points for running experiments of this nature is the sheer size of the space of potential experiments that could be run. As the goal of this work was to compare the two operators with an HPC application, the choices made reflect that goal, and a desire to optimize performance (choosing a configuration to support low network latency) as much as possible. These same experiments run on other instance types, interconnects, or even regions could have different results. Further experiments should be pursued that continue to test scale, elasticity, and performance in the space of networking, IO, and application metrics.
The Flux Operator brings several features that could be helpful to more general Kubernetes workflows. The first is that using a Flux cluster inside of Kubernetes gets around some of the infamous etcd limits or bottlenecks [125]. Submitting to Flux does not stress the Kubernetes application programming interfaces or etcd, and could scale to hundreds of thousands to potentially millions of jobs [50, 29]. The second is the hierarchical way of looking at heterogeneous tasks. Kubernetes would benefit from having more flexibility about telling tasks where they can go, and then binding them to exactly the resources needed. This brings up a tension between a more manual vs. automated decision made by the Kubernetes kubelet. The Flux Operator does something that is not native to Kubernetes to help with this issue. By allocating a pod to a node and giving control to Flux, possibly ineffective binding decisions made by the kubelet can be avoided. The Flux Operator allows Flux, a workload manager accustomed to making intelligent resource bindings, to take control. To step back, ideally there should (or could) be a mechanism in Kubernetes to enable more performance-oriented decisions than the ones the kubelet makes. Having a consistent view of resources that the kubelet exports as the final truth via cgroups is not necessarily desirable, and there are several reasons why. The first is that there are many ways to slice up a node, and a "best" way depends entirely on the application in question. For example, some applications may perform well given an equal split, while others might optimally be broken
across sockets. This suggests that granularity on the level of the socket is needed. A concrete example comes from the MuMMI workflow [126], which requires one dedicated CPU and socket to be close to a PCI express bus for a particular GPU, to minimize trading data back and forth between CPU and GPU memory. This level of granularity is not currently exposed in Kubernetes, nor are there efforts to understand applications on this level. This is a key area for innovation and collaboration, and understanding basic design patterns for networking, IO, and application performance is likely a good start. Ideally, applications that are running in Kubernetes today could be better understood via performance analysis, and a decision made about whether the time required to optimize is worth investing for the potential benefit.
## 6 Conclusion
The popularity and economic clout behind cloud computing present a challenge for the high performance computing community - to resist new paradigms of cloud-native technologies and fall behind, losing talent and customers, or to embrace them, pursuing technological innovation and viewing the shift as an opportunity for collaboration and growth. The latter approach, a movement identified as "converged computing," is a good path to take, and this work represents an early start towards that desired future. The work here starts with one of the highest levels for running workflows - the orchestration tool - and has thought about the convergence of the HPC workload manager Flux Framework with the cloud orchestration framework Kubernetes. The Flux Operator is an example of convergence of technologies in this workload management space, and has demonstrated superior performance to the current equivalent in the space. In sharing this process from thinking about design to implementation, the authors of this paper hope to start a discussion with the larger community that spans cloud and HPC about how to think about working on convergence for other types of software and technology. This work is a shining example of the collaboration and fun possible. The sharing of these results at Kubecon Amsterdam '23 [77] inspired collaboration, discussion, and excitement about the converged computing movement. Projects and work are underway to address gaps that have been identified, each a collaboration between computer scientists and cloud developers. The authors of this paper hope that this early work inspires, and allows for continued discussion of innovation in this space for not just workloads and scheduling, but also the challenges around them - storage, tenancy, and cost-estimation, among others. Through collaboration and creativity of design, pathways can be discovered for moving seamlessly between the spaces of cloud and HPC. Converged computing is a paradigm shift, and it is up to the HPC and cloud communities to decide to embrace change and grow from it, or fight it off. This work chooses the first approach, and embraces it with hopes for a better future and stronger, more reproducible science.
## Acknowledgements
We would like to thank the entire Flux Framework team for regular discussion on design points, their immense expertise and experience in this space, and a safe, fun environment to learn and grow. We give our gracious thank you to both Amazon Web Services and Google Cloud for their support. This project was supported by the LLNL-LDRD Program under Project No. 22-ERD-041.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-JRNL-855145-DRAFT.
Figure 5: Launch time measuring flux submit (the Flux Operator) or mpirun (the MPI Operator). The Flux Operator is consistently faster, a result that could be more impactful for longer running experiments. |
2309.07155 | Maximizing the performance for microcomb based microwave photonic
transversal signal processors | Microwave photonic (MWP) transversal signal processors offer a compelling
solution for realizing versatile high-speed information processing by combining
the advantages of reconfigurable electrical digital signal processing and
high-bandwidth photonic processing. With the capability of generating a number
of discrete wavelengths from micro-scale resonators, optical microcombs are
powerful multi-wavelength sources for implementing MWP transversal signal
processors with significantly reduced size, power consumption, and complexity.
By using microcomb-based MWP transversal signal processors, a diverse range of
signal processing functions have been demonstrated recently. In this paper, we
provide a detailed analysis for the processing inaccuracy that is induced by
the imperfect response of experimental components. First, we investigate the
errors arising from different sources including imperfections in the
microcombs, the chirp of electro-optic modulators, chromatic dispersion of the
dispersive module, shaping errors of the optical spectral shapers, and noise of
the photodetector. Next, we provide a global picture quantifying the impact of
different error sources on the overall system performance. Finally, we
introduce feedback control to compensate the errors caused by experimental
imperfections and achieve significantly improved accuracy. These results
provide a guide for optimizing the accuracy of microcomb-based MWP transversal
signal processors. | Yang Sun, Jiayang Wu, Yang Li, Xingyuan Xu, Guanghui Ren, Mengxi Tan, Sai Tak Chu, Brent E. Little, Roberto Morandotti, Arnan Mitchell, David J. Moss | 2023-09-10T14:37:54Z | http://arxiv.org/abs/2309.07155v1 | # Maximizing the performance for
###### Abstract
Microwave photonic (MWP) transversal signal processors offer a compelling solution for realizing versatile high-speed information processing by combining the advantages of reconfigurable electrical digital signal processing and high-bandwidth photonic processing. With the capability of generating a number of discrete wavelengths from micro-scale resonators, optical microcombs are powerful multi-wavelength sources for implementing MWP transversal signal processors with significantly reduced size, power consumption, and complexity. By using microcomb-based MWP transversal signal processors, a diverse range of signal processing functions have been demonstrated recently. In this paper, we provide a detailed analysis for the processing inaccuracy that is induced by the imperfect response of experimental components. First, we investigate the errors arising from different sources including imperfections in the microcombs, the chirp of electro-optic modulators, chromatic dispersion of the dispersive module, shaping errors of the optical spectral shapers, and noise of the photodetector. Next, we provide a global picture quantifying the impact of different error sources on the overall system performance. Finally, we introduce feedback control to compensate the errors caused by experimental imperfections and achieve significantly improved accuracy. These results provide a guide for optimizing the accuracy of microcomb-based MWP transversal signal processors.
Microwave photonics, optical microcombs, optical signal processing.
## I Introduction
Ever-increasing data capacity in the information age is driving the demand for high-speed information processing. In contrast to conventional microwave signal processing based on electronics, which faces intrinsic bandwidth bottlenecks [1, 2], the use of photonic hardware and technologies to process high-bandwidth microwave signals, or microwave photonic (MWP) processing, can provide speeds orders of magnitude faster [3, 4], which is critical for high-speed processing applications [3-6].
In the past two decades, a range of high-speed MWP processors have been demonstrated by employing different optical approaches, in both discrete and integrated form, as optical filtering modules to process microwave signals modulated on a single optical carrier [3, 7-16]. While successful in delivering high performance with dynamic tuning, these approaches provided only single processing functions with limited reconfigurability and fixed parameters. In contrast, MWP transversal signal processors, where the microwave signal is modulated onto multiple optical carriers with adjustable delays and weights before summing via photodetection [17, 18], have significant advantages in achieving highly reconfigurable processing [17, 18].
For MWP transversal signal processors, a large number of optical carriers forming discrete taps to sample the input microwave signal is needed to achieve high accuracy. Although conventional multi-wavelength sources, such as discrete laser arrays [19-21] and fibre Bragg grating arrays [22-24], can offer the discrete taps, the numbers of available taps they can provide are normally restricted to less than 10 - mainly due to the dramatic increase of the system size, power consumption, and complexity with the tap number. Recent advances in optical microcombs [25, 26] provide an effective way to circumvent this problem by generating a large number of wavelengths equally spaced by
large microwave bandwidths from single chip-scale devices. This opens new horizons for implementing MWP transversal signal processors with significantly reduced size, power consumption, and complexity. By using microcomb-based MWP transversal signal processors, a range of signal processing functions have been demonstrated recently, first for basic functions including differentiations [27, 28], integration [29], and Hilbert transforms [30-32], followed by more complex functions such as phase encoding [33], arbitrary waveform generation [34], and computations within the framework of optical neural networks [35-37].
For signal processors, processing accuracy is a key parameter. For microcomb-based MWP signal processors, processing errors are induced by both theoretical limitations and imperfect response of practical components. Recently, we presented an analysis quantifying the errors induced by theoretical limitations [38]. In this paper, we provide a complementary analysis to that work, focusing on errors induced by experimental imperfections. First, errors arising from imperfect microcomb characteristics, chirp in the electro-optic modulator, chromatic dispersion in the dispersive module, shaping errors of the spectral shaper, and noise of the photodetector are investigated. Next, a global picture is presented to show the influence of different error sources by quantifying their contributions to the overall system performance. Finally, we introduce feedback control to compensate errors induced by imperfect response of experimental components, and in doing so we achieve a significant improvement in the processing accuracy. These results are useful for understanding and optimizing the accuracy of microcomb-based MWP transversal signal processors.
## II Microcomb-based MWP transversal signal processors
Microwave transversal signal processors are implemented based on the transversal filter structure in digital signal processing that features a finite impulse response [37]. Implementing them with photonic technologies yields a significantly increased processing bandwidth compared to their electronic counterparts [17]. Fig. 1 shows the schematic diagram and signal processing flow of a typical MWP transversal signal processor. An optical microcomb, serving as a multi-wavelength source, provides a large number of wavelength channels as discrete taps. An input microwave signal is multicast onto each channel via an electro-optic modulator (EOM) to generate multiple microwave signal replicas. Next, time delays between adjacent wavelength channels are introduced by optical delay elements, and the delayed replicas at different wavelength channels are weighted through spectral shaping. Finally, the delayed and weighted replicas are summed via photodetection to generate the final microwave output of the system.
For the MWP transversal signal processor in Fig. 1, each of the taps can be regarded as a discrete sample of the system's impulse response, _i.e._, the system's impulse response can be expressed as [17]
\[h(t)=\sum\limits_{n=0}^{M-1}a_{n}\delta(t-n\Delta T), \tag{1}\]
where \(M\) is the tap number, \(a_{n}\) (\(n\) = 0, 1, 2,..., \(M\)-1) is the tap weight of the \(n^{th}\) tap, and \(\Delta T\) is the time delay between adjacent wavelength channels. Therefore, the output microwave signal \(s(t)\) can be given by [39]
\[s(t)=f(t)*h(t)=\sum\limits_{n=0}^{M-1}a_{n}f(t-n\Delta T), \tag{2}\]
where \(f(t)\) is the input microwave signal. After Fourier transformation from Eq. (1), the spectral transfer function of the MWP transversal signal processor is
\[H(\omega)=\sum\limits_{n=0}^{M-1}a_{n}e^{-j\omega n\Delta T}, \tag{3}\]
which shows agreement with the spectral response of a typical microwave transversal filter [39].
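As a numerical sketch of Eq. (3), for a given set of tap weights and tap delay \(\Delta T\):

```python
# Evaluate H(omega) = sum_n a_n * exp(-1j * omega * n * dT) on a
# frequency grid for tap weights a_0..a_{M-1}.
import numpy as np

def transfer_function(a, delta_t, freqs_hz):
    a = np.asarray(a, dtype=complex)
    n = np.arange(len(a))
    omega = 2 * np.pi * np.asarray(freqs_hz)
    return np.exp(-1j * np.outer(omega, n) * delta_t) @ a
```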
As can be seen from Eqs. (1) - (3), by simply altering the tap weights \(a_{n}\) (\(n\) = 0, 1, 2,..., \(M\)-1) through comb shaping, different signal processing functions can be achieved without any changes of the hardware [17]. This allows for a high
Fig. 1: Schematic diagram and signal processing flow of a MWP transversal signal processor with an optical microcomb source. EOM: electro-optic modulator. PD: photodetector.
degree of reconfigurability for the MWP transversal signal processor.
Fig. 2 shows a schematic of the experimental implementation of the MWP transversal signal processor in Fig. 1, which includes a microcomb generation module and a transversal signal processing module. In the microcomb generation module, a continuous-wave (CW) laser, amplified by an erbium-doped fibre amplifier (EDFA) with a polarization controller (PC) to adjust its polarization, is used to pump a high-Q nonlinear microring resonator (MRR) to generate optical microcombs. The output from this module is sent to the transversal signal processing module, which executes the signal processing flow depicted in Fig. 1. The processing module involves a PC, an EOM, a spool of single-mode fibre (SMF) as the optical delay module, an optical spectral shaper (OSS) to shape the comb lines, and a balanced photodetector (BPD) for photodetection. The BPD connected to the two complementary output ports of the OSS divides all the wavelength channels into two groups with a phase difference of \(\pi\), which introduces positive and negative signs onto the tap coefficients \(a_{n}\) (\(n\) = 0, 1, 2,..., _M_-1) in Eqs. (1) - (3). It is worth noting that the particular processing function is determined not only by the absolute values of the tap coefficients but also by their signs. As an example, temporal integration is realized when all tap coefficients are set to 1 [29], whereas phase encoding can be achieved through the adjustment of specific coefficients to -1, while retaining the others at 1 [33].
For the experimentally implemented MWP transversal signal processor in Fig. 2, processing errors arise from both theoretical limitations and the imperfect response of the practical system. The former refers to the theoretical approximation of a continuous impulse response (which corresponds to an infinite tap number \(M\)) by a practical system with a finite tap number, and was the subject of our previous paper mentioned above [38]. The latter refers to errors induced by the imperfect performance of different components, such as the noise of the microcomb, chirp of the EOM, second- (SOD) and third-order dispersion (TOD) of the SMF, shaping errors of the OSS, and noise in the BPD.
To quantify the processing errors, the root mean square error (RMSE) is used to measure the deviation between the processor's output and the ideal result, which is defined as [40]
\[\text{RMSE}=\sqrt{\sum\limits_{i=1}^{k}\frac{(Y_{i}-y_{i})^{2}}{k}}, \tag{4}\]
where \(k\) is the number of sampled points, \(Y_{1}\), \(Y_{2}\),..., \(Y_{k}\) are the values of the ideal processing result, and \(y_{1}\), \(y_{2}\),..., \(y_{k}\) are the values of the output of the microcomb-based MWP transversal signal processor.
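Eq. (4) translates directly into code; a minimal numpy version:

```python
# RMSE between the ideal result Y_i and the processor output y_i,
# both sampled at the same k points (Eq. (4)).
import numpy as np

def rmse(ideal, output):
    ideal, output = np.asarray(ideal), np.asarray(output)
    return np.sqrt(np.mean((ideal - output) ** 2))
```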
Fig. 3(a) shows the RMSEs induced by theoretical limitations as a function of tap number \(M\) for three different signal processing functions, including first-order differentiation (DIF), integration (INT), and Hilbert transform (HT). The theoretical limitations were calculated based on
Figure 2: Schematic of a practical microcomb-based MWP transversal signal processor. The main error sources are labelled as I – V. CW laser: continuous-wave laser. EDFA: erbium-doped fibre amplifier. PC: polarization controller. MRR: microring resonator. EOM: electro-optic modulator. SMF: single-mode fibre. OSS: optical spectral shaper. BPD: balanced photodetector. SOD: second-order dispersion. TOD: third-order dispersion.
Eqs. (1) - (4), assuming a perfect response for all the experimental components in Fig. 2. More details about this can be found in Ref. [41]. As can be seen, the theoretical RMSEs are small for a large tap number \(M\geq 80\), indicating that the theoretical errors can be greatly reduced by increasing the tap number. Fig. 3(b) compares the theoretical and experimentally measured RMSEs for \(M=80\), showing that the former is much lower, reflecting that experimental errors typically dominate the system performance of microcomb-based MWP transversal signal processors. In the following Section III, we provide a comprehensive analysis of the experimentally induced processing errors, and in Section IV we provide approaches to mitigate these errors.
## III Errors induced by imperfections of practical systems
In this section, we provide a detailed analysis of the processing errors induced by different sources outlined in Fig. 2. This is achieved by modeling the imperfect response of the experimental components to calculate the output waveforms based on Eqs. (1) - (4). In subsections A - D, we investigate the influence of specific error sources, assuming the other sources are error-free. In subsection E, we compare the contributions of the different error sources to the overall system performance.
In the following analysis, we use first-order DIF, INT, and HT as examples to quantify the experimentally induced errors. Their spectral transfer functions are given by [27, 29, 31]
\[H_{DIF}(\omega)=j\omega, \tag{5}\]
\[H_{INT}(\omega)=\frac{1}{j\omega}, \tag{6}\]
\[H_{HT}(\omega)=\left\{\begin{array}{ll}e^{-j\pi/2},&0\leq\omega<\pi\\ e^{j\pi/2},&-\pi\leq\omega<0\end{array}\right. \tag{7}\]
where \(j=\sqrt{-1}\) and \(\omega\) is the angular frequency.
For comparison, in our analysis we assume the processors have the same tap number (\(M=80\)), comb spacing (\(\Delta\lambda=0.4\) nm), and length and SOD of the SMF (\(L=4.8\) km and \(D_{2}=17.4\) ps/nm/km). These parameters are the same as those in our previous papers [27, 29, 31]. The input microwave signal is taken as a Gaussian pulse with a full width at half maximum (FWHM) of \(\sim\)0.17 ns, whose spectral bandwidth (\(\sim\)5 GHz) is within the processing bandwidth of the signal processors (_i.e._, \(FSR_{MW}=1/\left(\Delta\lambda\times L\times D_{2}\right)\sim 30\) GHz). For microcomb-based MWP transversal signal processors, the processing bandwidth is \(min\{\Delta\lambda/2\), \(FSR_{MW}/2\}\), where \(min\{\cdot\}\) represents taking the minimum value between the two. Detailed elaboration on this can be found in Refs. [17, 18].
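For reference, these quoted parameters fix the tap delay and processor FSR; a quick numerical check:

```python
# Tap delay and microwave FSR from the quoted parameters.
delta_lambda = 0.4e-9            # comb spacing: 0.4 nm, in metres
L = 4.8e3                        # fibre length: 4.8 km, in metres
D2 = 17.4e-6                     # SOD: 17.4 ps/nm/km in s/m^2
delta_T = delta_lambda * L * D2  # adjacent-channel delay, ~33.4 ps
FSR_MW = 1.0 / delta_T           # ~3.0e10 Hz, i.e., ~30 GHz
print(delta_T, FSR_MW)
```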
### Influence of the optical microcombs
In this section, we analyze the influence of microcomb imperfections on the system performance for different processing functions. These imperfections generate intensity and phase noise in the comb channels. The intensity noise includes power fluctuations of the comb lines and the intensity noise floor, which mainly arise from photon shot noise and spontaneous emission beat noise [42]. For MWP transversal signal processors, the microcomb intensity noise results in inaccuracy of the tap coefficients, thereby degrading the system accuracy.
To characterize the microcomb intensity noise, the optical signal-to-noise ratio (OSNR) is introduced, which is the ratio of the maximum optical signal to the noise power in each of the comb lines. Fig. 4(a) shows the simulated output waveforms from processors that perform DIF, INT, and HT, where flat intensity noise floors are assumed for the microcombs with different OSNRs. For comparison, the ideal processing outcome without theoretical errors and the results that only account for theoretical errors (corresponding to OSNR \(=\infty\)) are also shown. As the OSNR of the comb lines increases from 10 dB to \(\infty\), the processors' output waveforms match the ideal results better for all three processing functions, reflecting the reduced error achieved by increasing the OSNR. To better reflect the intensity envelope of the microcombs, a sinc-shaped intensity noise floor is introduced. The
Fig. 3: (a) Root mean square errors (RMSEs) induced by theoretical limitation for differentiation (DIF), integration (INT), and Hilbert transformation (HT) as a function of tap number \(M\). (b) Comparison of RMSEs induced by theoretical limitations and practical measured RMSEs for DIF, INT, and HT when \(M=80\). In (a) - (b), the comb spacing, length of dispersive medium, and second-order dispersion (SOD) parameter are \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}=17.4\) ps/nm/km, respectively. The input microwave signals are assumed to be Gaussian pulses with a full width at half maximum (FWHM) of \(\sim\)0.17 ns.
corresponding results are shown in Fig. 4(b), showing a trend similar to that in Fig. 4(a).
Fig. 4(c) shows the RMSEs between the simulated processors' output waveforms and the ideal processing results as a function of the OSNR. As expected, for both the flat and sinc-shaped intensity noise floor, the RMSEs decrease with the microcomb OSNR for all three processing functions, showing agreement with the trend in Figs. 4(a) and (b). For OSNRs less than 20 dB, the RMSEs decrease more steeply. As the OSNR increases, the decrease in RMSE is more gradual, and there is only a very small reduction in error beyond an OSNR of 20 dB. For the DIF and INT, the RMSE for microcombs with sinc-shaped intensity noise floors is higher than for flat intensity noise floors, whereas the opposite trend is observed for the HT. This reflects the fact that the impact of the microcomb intensity envelope errors depends on the processing function.
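A rough way to reproduce this trend is to perturb the tap coefficients with a noise floor set by the OSNR and evaluate the resulting deviation. The sketch below is illustrative only: the sinc-shaped weights, the Gaussian noise model, and taking the RMSE over the tap vector (rather than the full output waveform) are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

M = 80
a_ideal = np.sinc((np.arange(M) - M / 2) / 8)    # illustrative tap weights only

def noisy_taps(a, osnr_db):
    # flat noise floor whose power sits OSNR below the strongest comb line
    noise_power = np.max(np.abs(a)) ** 2 / 10 ** (osnr_db / 10)
    return a + rng.normal(0.0, np.sqrt(noise_power), a.size)

for osnr in (10, 20, 30):
    print(osnr, rmse(noisy_taps(a_ideal, osnr), a_ideal))  # error falls with OSNR
```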
The phase noise of microcombs, which manifests as a broadened linewidth, an appearance of multiple repetition-rate beat notes, and a reduction in temporal coherence [43], is affected by several factors, such as the noise of the CW pump as well as the mechanical and thermal noise of the MRR [44, 45]. These sources of error are difficult to quantitatively analyze. For mode-locked microcombs with extremely low phase noise, the phase noise induced errors are negligible [35, 36]. Therefore, to achieve a high accuracy over long periods, it is necessary to use microcombs with low phase noise, high coherence, and stable mode locking. A number of mode-locking approaches have been reported [17, 18]. It is worth noting that even with relatively incoherent microcombs, processors can still achieve an acceptable accuracy because the microcomb mainly serves as a multi-wavelength source and the optical powers of different wavelength channels are detected incoherently by a BPD.
### _Influence of the electro-optic modulator_
In Fig. 2, an electro-optic modulator is used to modulate the input microwave signal onto different wavelength channels. The most commonly used electro-optic modulators are Mach-Zehnder modulators (MZMs), owing to their high modulation efficiency, low insertion loss, and large operation bandwidth [46]. Due to the asymmetry in the electric field overlap at each electrode [47], practical MZMs not only produce intensity modulation, but also give rise to undesired phase modulation, known as modulation chirp. The chirp leads to distortions in the modulated optical signals, thus resulting in processing errors. Here, we analyze the influence of modulator chirp on the accuracy for different processing functions.
The chirp of a MZM can be characterized by the chirp parameter given by [48]
Fig. 4: Influence of microcombs’ intensity noise on errors of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) – (b) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (i) DIF, (ii) INT, and (iii) HT, where the intensity noise floors of the microcombs are (a) flat and (b) sinc-shaped, respectively. Different curves show the results for different optical signal-to-noise ratios (OSNRs) of the comb lines. The ideal processing results are also shown for comparison. (c) Corresponding RMSEs between the ideal results and the processors’ output waveforms as a function of microcomb’s OSNR. In (a) – (c), the Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, length of dispersive medium, and SOD parameter are \(M=80\), \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}=17.4\) ps/nm/km, respectively.
\[\alpha=\frac{\gamma_{1}+\gamma_{2}}{\gamma_{1}-\gamma_{2}} \tag{8}\]
where \(\gamma_{1}\) and \(\gamma_{2}\) are the voltage-to-phase conversion coefficients for the two arms of the MZM. When \(\alpha=0\) (_i.e._, \(\gamma_{1}\) = \(-\gamma_{2}\)), pure intensity modulation is achieved. Figs. 5(a) - (c) show the output waveforms from microcomb-based MWP transversal signal processors that perform DIF, INT, and HT for different chirp parameters \(\alpha\). The ideal processing result without theoretical errors and the results that only account for theoretical errors (corresponding to \(\alpha=0\)) are also shown for comparison. For all processing functions the output waveforms approach the ideal results as \(\alpha\) decreases from 1 to 0, indicating the reduced system error for a lower modulator chirp.
Fig. 5(d) shows the calculated RMSEs versus modulator chirp \(\alpha\). As expected, the RMSE increases with \(\alpha\) for all processing functions, which agrees with the trend in Figs. 5(a) - (c). We also note that the impact of the modulation chirp on the system performance is more significant for the DIF and INT functions as compared to the HT.
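For readers who want to reproduce this effect, a commonly used small-signal chirp model (an assumption here, not a model stated in this paper) writes the modulated field as \(E(t)=P(t)^{(1+j\alpha)/2}\), i.e., the optical phase follows \((\alpha/2)\ln P(t)\). A minimal sketch, with the modulation index assumed:

```python
import numpy as np

t = np.linspace(-1e-9, 1e-9, 4001)
sigma = 0.17e-9 / (2 * np.sqrt(2 * np.log(2)))
m = 0.3                                       # modulation index (assumed)
P = 1 + m * np.exp(-t**2 / (2 * sigma**2))    # modulated optical power

def chirped_field(P, alpha):
    # phase follows (alpha/2) * ln P(t); alpha = 0 is pure intensity modulation
    return np.sqrt(P) * np.exp(1j * (alpha / 2) * np.log(P))

E_ideal = chirped_field(P, 0.0)
E_chirped = chirped_field(P, 0.5)             # alpha value used in Section III-E
```

Propagating `E_chirped` through the dispersive SMF model instead of `E_ideal` reproduces the chirp-induced waveform distortion discussed above.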
### _Influence of the single-mode fibre_
In Fig. 2, a spool of SMF is employed as the dispersive module of the MWP transversal signal processor, which introduces both amplitude and phase errors due to its chromatic dispersion, including both SOD and TOD. SOD induces a uniform time delay between adjacent taps, which is exactly what MWP transversal signal processors require to avoid alignment errors. However, SOD also introduces a time delay between the modulated sidebands, which leads to a power degradation of the microwave output after photodetection, and hence system errors [49]. On the other hand, the SMF TOD introduces non-uniform time delays between adjacent taps, thus resulting in undesired phase errors. In this section, we analyze the influence of the SMF's SOD and TOD on the accuracy for different processing functions.
An MZM generates two modulated sidebands, with the output termed a double-sideband (DSB) signal. The SOD of the SMF generates different phase shifts for the two sidebands, resulting in different phase shifts between the carrier and the two beat microwave sidebands. Therefore, the final microwave output after photodetection experiences a power degradation, with its power given approximately by [49]
\[P_{MW}\propto\cos^{2}\left(\frac{\pi L\,D_{2}}{c}\,\lambda_{c}^{2}\,f_{MW}^{2}\right) \tag{9}\]
where \(c\) is the speed of light in vacuum, \(\lambda_{c}\) is the center wavelength of each channel, and \(f_{MW}\) is the frequency of the input microwave signal.
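Eq. (9) can be evaluated directly; in the sketch below the channel wavelength (1550 nm) and the microwave frequency (5 GHz) are assumptions, since only \(D_{2}\), \(L\), and the pulse bandwidth are fixed by the text:

```python
import numpy as np

c = 3e8              # speed of light [m/s]
L = 4.8e3            # SMF length [m]
lam_c = 1550e-9      # channel center wavelength [m] (assumed)
f_mw = 5e9           # microwave frequency [Hz] (assumed, ~pulse bandwidth)

def p_mw_db(D2_ps_nm_km):
    D2 = D2_ps_nm_km * 1e-6                      # ps/nm/km -> s/m^2
    arg = np.pi * L * D2 / c * lam_c**2 * f_mw**2
    return 10 * np.log10(np.cos(arg) ** 2)       # fading in dB, per Eq. (9)

print(p_mw_db(17.4))                             # fading for D2 = 17.4 ps/nm/km
```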
Figs. 6(a) - (c) show the output waveforms from the processors for the DIF, INT, and HT functions, with and without including the power degradation caused by SOD. The SOD parameter is kept constant at \(D_{2}\) = 17.4 ps/nm/km. For all processing functions, there are only slight differences induced
Fig. 5: Influence of the modulator chirp on errors of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) – (c) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (a) DIF, (b) INT, and (c) HT. Different curves show the results for different chirp parameter \(\alpha\). The ideal processing results are also shown for comparison. (d) Corresponding RMSEs between the ideal results and the processors’ output waveforms as a function of \(\alpha\). In (a) – (d), the Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, length of dispersive medium, and SOD parameter are \(M=80\), \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}\) = 17.4 ps/nm/km, respectively.
Fig. 6: Influence of SMF’s SOD on errors of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) – (c) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (a) DIF, (b) INT, and (c) HT. Different curves show the results with and without the influence of power degradation induced by SOD. The SOD parameter is \(D_{2}\) = 17.4 ps/nm/km. The ideal processing results are also shown for comparison. (d) Power degradation of the output microwave signal \(P_{MW}\) as a function of the SOD parameter \(D_{2}\). (e) Corresponding RMSEs between the ideal results and the processors’ output waveforms as a function of \(D_{2}\). In (a) – (e), the Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, and length of dispersive medium are \(M=80\), \(\Delta\lambda=0.4\) nm, and \(L=4.8\) km, respectively.
by SOD. Fig. 6(d) shows the power degradation \(P_{MW}\) as a function of \(D_{2}\), which is calculated based on Eq. (9). As can be seen, the power degradation induced by SOD is very small, being \(<10^{-3}\) dB for \(D_{2}=17.4\) ps/nm/km in Figs. 6(a) - (c).
Fig. 6(e) shows the RMSE as a function of \(D_{2}\), showing that the RMSEs only vary very slightly (\(<10^{-4}\)) with \(D_{2}\) for all processing functions, in agreement with Figs. 6(a) - (c). These results indicate that although the SOD of SMF induces power degradation of the microwave output, its influence on the system accuracy is very small.
The TOD of the SMF introduces additional non-uniform time delays between the modulated replicas in the wavelength channels, thus resulting in alignment errors in the processing results. The additional time delay of the \(n^{\rm th}\) tap is given by [50]
\[\Delta T_{TOD}=D_{3}\,L\,\Delta\lambda^{2}\,n^{2} \tag{10}\]
where \(D_{3}\) is the TOD parameter.
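Eq. (10) is straightforward to evaluate for the \(M=80\), \(\Delta\lambda=0.4\) nm configuration; the sketch below uses the \(D_{3}\) value quoted later in this section:

```python
import numpy as np

L = 4.8              # SMF length [km]
d_lambda = 0.4       # comb spacing [nm]
D3 = 0.083           # TOD parameter [ps/nm^2/km] (value used in Section III-E)

n = np.arange(80)                          # tap indices
dT_tod = D3 * L * d_lambda**2 * n**2       # Eq. (10): extra delay per tap [ps]
print(dT_tod.max())                        # extra delay of the last tap [ps]
```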
Figs. 7(a) - (c) show the output waveforms from processors that perform DIF, INT, and HT, versus the TOD parameter \(D_{3}\). The ideal processing result without theoretical errors and the results that only account for theoretical errors (corresponding to \(D_{3}=0\)) are also shown for comparison. For all processing functions, the processors' outputs approach the ideal processing results as \(D_{3}\) decreases from \(0.5\) ps/nm\({}^{2}\)/km to zero, indicating that improved accuracy can be achieved for a smaller TOD.
Fig. 7(d) shows the RMSE as a function of \(D_{3}\), where, as expected, the RMSE increases with increasing \(D_{3}\) for all functions, agreeing with the trend in Figs. 7(a) - (c). The influence of TOD on the system performance is more significant than that of the SOD. We also note that the INT function is more susceptible to errors induced by the TOD as compared to the DIF and HT functions, reflecting that INT has a more stringent requirement for the accuracy of the phase of the different taps.
### _Influence of optical spectral shapers and photodetectors_
In Fig. 2, an OSS is used as a spectral shaping module to weight the delayed signals across different wavelength channels according to the designed tap coefficients. This is followed by a BPD that sums the delayed and weighted signals to generate the microwave output of the processor. The OSS induces shaping errors, which result in inaccurate tap coefficients and hence output errors. On the other hand, noise and an uneven transmission response of the BPD lead to variations of the power of the microwave output. In this section, we analyze the influence of these error sources for the different processing functions.
We introduce random tap coefficient errors (RTCEs) within a certain percentage range of \(\Delta PR\) to characterize the shaping errors of the OSS. Figs. 8(a) - (c) show the output waveforms from the processors for all functions and for the RTCEs in different ranges, together with the ideal processing result without theoretical errors and the results that only account for theoretical errors (corresponding to \(\Delta PR=0\)). For all three processing functions, the processors' output waveforms show better agreement with the ideal results for a smaller \(\Delta PR\), reflecting an improved accuracy associated with reduced RTCEs.
Fig. 8(d) shows the RMSE as a function of \(\Delta PR\), showing that the RMSE increases with \(\Delta PR\) for all functions, agreeing with the trend in Figs. 8(a) - (c). The shaping errors of the OSS have a more obvious impact on the accuracy for DIF as
Fig. 8: Influence of shaping errors induced by the OSS on accuracy of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) – (c) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (a) DIF, (b) INT, and (c) HT. Different curves show the results for different percentage ranges (\(\Delta\)PRs) of random tap coefficient errors (RTCEs). The ideal processing results are also shown for comparison. (d) Corresponding RMSEs between the ideal results and the processors’ output waveforms as a function of \(\Delta PR\). In (a) – (d), the Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, length of dispersive medium, and SOD parameter are \(M=80\), \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}=17.4\) ps/nm/km, respectively.
Fig. 7: Influence of SMF’s TOD on errors of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) – (c) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (a) DIF, (b) INT, and (c) HT. Different curves show the results for different TOD parameter \(D_{3}\). The ideal processing results are also shown for comparison. (d) Corresponding RMSEs between the ideal results and the processors’ output waveforms as a function of \(D_{3}\). In (a) – (d), the Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, length of dispersive medium, and SOD parameter are \(M=80\), \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}=17.4\) ps/nm/km, respectively.
compared to the other two functions, indicating that DIF has a more stringent requirement for the accuracy of the tap amplitudes.
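The RTCE model above can be sketched as a multiplicative perturbation of the designed weights (the uniform error distribution and the sinc-shaped weights are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_rtce(a, d_pr):
    # multiplicative error drawn uniformly from [-d_pr, +d_pr] per tap
    return a * (1 + rng.uniform(-d_pr, d_pr, a.size))

M = 80
a = np.sinc((np.arange(M) - M / 2) / 8)          # illustrative weights only
for d_pr in (0.01, 0.05, 0.10):
    err = np.sqrt(np.mean((apply_rtce(a, d_pr) - a) ** 2))
    print(d_pr, err)                             # error grows with the RTCE range
```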
In Fig. 2, the use of a BPD greatly suppresses the common-mode noise of the optical signal, which largely cancels out the intensity noise caused by the photodetector. Therefore, the errors induced by the BPD mainly come from its limited response bandwidth and uneven transmission response, which introduce additional errors in the tap coefficients after spectral shaping. Similarly, the limited bandwidth and uneven response of the EOM could also introduce additional errors to the tap coefficients before spectral shaping. These errors, together with the shaping errors of the OSS, can be effectively mitigated through feedback control, which will be discussed in section IV. Finally, we note that the BPD shot noise can induce random power fluctuations in the output microwave signal which limits the lowest achievable phase noise floor [51]. The influence of this on the system performance is similar to the microcomb noise, and can be reduced by using a BPD with higher sensitivity [52].
### _Contributions of different error sources_
In this section, we analyze the contribution of the error sources discussed above to the overall processing errors of microcomb-based MWP transversal signal processors and provide a global picture to show the impact of different error sources.
Fig. 9(a) shows the simulated output waveforms for all functions, including errors induced by the sources from I to V in Fig. 2, with the ideal results shown for comparison. Based on the measurements and parameters of the components in our previous experiments [27, 30, 34], the chirp parameter of the EOM, the SOD and TOD parameters of the SMF, and the range of RTCEs are set to \(\alpha=0.5\), \(D_{2}=17.4\) ps/nm/km, \(D_{3}=0.083\) ps/nm\({}^{2}\)/km, and \(\Delta PR=5\%\), respectively. In our simulations, we also used the OSNRs of the comb lines that were measured by an optical spectrum analyzer. As expected, the overall output errors become larger with the accumulation of errors induced by these sources for all processing functions.
In order to quantify the contributions of the different sources of error, we calculate the RMSEs from the simulation results in Fig. 9(a) and plot them in Fig. 9(b). The experimentally measured RMSEs are also shown for comparison. In our simulations, we used the input microwave signal waveform
Fig. 9: Contributions of different error sources to the overall errors of differentiation (DIF), integration (INT), and Hilbert transformation (HT). (a) Temporal waveform of Gaussian input pulse and output waveforms from the transversal signal processors performing (i) DIF, (ii) INT, and (iii) HT. Different curves show the results after accumulating errors induced by different sources from I to V. The ideal processing results are also shown for comparison. (b) Corresponding RMSEs between the ideal results and the processors’ outputs. The practical measured RMSEs are also shown. In (a) and (b), the microcomb has an OSNR of 30 dB. The chirp parameter, SOD parameter, TOD parameter, and tap coefficient fluctuations are \(\alpha=0.5\), \(D_{2}=17.4\) ps/nm/km, \(D_{3}=0.083\) ps/nm\({}^{2}\)/km, and \(\Delta PR=5\%\). The Gaussian input pulse has a FWHM of \(\sim\)0.17 ns. The tap number, comb spacing, and length of dispersive medium are \(M=80\), \(\Delta\lambda=0.4\) nm, and \(L=4.8\) km, respectively.
measured by a high-bandwidth real-time oscilloscope to calculate the RMSEs; this minimizes the errors induced by the discrepancy between the experimentally generated and ideal Gaussian pulses. The RMSEs of the simulation results increase with the accumulation of errors, which agrees with the trend in Fig. 9(a). There are margins between the RMSEs of the simulation results and the experimental results. They are mainly caused by deviations between the simulation and experiment parameters as well as factors that are not accounted for in our simulation, such as the phase noise of the microcomb, the limited response bandwidth and uneven transmission response of the EOM and BPD, and the shot noise of the BPD. As shown in Fig. 9(b), different processing functions show distinct errors induced by the experimental imperfections. This is mainly induced by the differences in their spectral transfer functions, as indicated by Eqs. (5) - (7), which lead to different responses to the experimental error sources. As can be seen, the system error for the DIF is mainly induced by the microcomb imperfections and EOM chirp. For the INT, the main error sources are the EOM chirp and the SMF TOD. As compared to the DIF and INT, the theoretical errors have a more significant influence on the accuracy for the HT.
### _Performance comparison of processors implemented by discrete versus integrated components_
Early implementations of microcomb-based MWP transversal signal processors simply replaced conventional multi-wavelength sources with optical microcombs while retaining all other components as discrete devices [18, 36]. Recently, several processors composed entirely of integrated components have also been demonstrated [54, 55]. Despite being based on the same operation principle, the processors implemented with discrete versus integrated components exhibit different processing performance. In this section, we compare their processing accuracy. To simplify our discussion, we refer to the processors implemented in these two forms as discrete versus integrated processors.
Table I summarizes the parameters of the components in the three processors that we investigate, including a discrete processor (Processor 1) and two integrated processors (Processors 2 and 3). There are two integrated processors: one with the same tap number as that in Ref. [54] and the other with an increased tap number to demonstrate the potential for improvement. Although the size, weight, and power consumption (SWaP) of integrated processors are greatly reduced compared with the discrete processors, the state-of-the-art integrated processors suffer from limited tap numbers due to the restrictions imposed by the integrated components. Currently, integrated processors with only 8 [54] and 12 taps [55] have been demonstrated, whereas discrete processors have been implemented with up to 80 taps [18, 41]. To characterize the errors induced by the imperfect response of experimental components, the OSNR of the microcombs, the chirp parameter (\(\alpha\)), the error of the delay element (\(t_{v}\)), and the random tap coefficient errors (RTCEs) induced by the spectral shaping module were introduced. All of these parameters were set based on the practical processors in Refs. [41, 48, 53, 55, 56]. For comparison, we assumed that the three processors have the same comb spacing of \(\sim\)0.4 nm and the same time delay between adjacent taps of \(\Delta T\approx 33.4\) ps.
Figs. 10(a) - (c) show the outputs of Processors 1 - 3 in Table I that perform DIF, INT, and HT, respectively. Here we show the processors' outputs with errors induced by (1) only limited tap numbers and (2) both limited tap numbers and experimental errors. The ideal processing results are also shown for comparison. Deviations between the processors' outputs and the ideal results are observed for all three functions, and the deviations become more significant when taking into account the experimental errors. Fig. 10(d) compares the RMSEs of the processors in Figs. 10(a) - (c). The higher processing accuracy of the discrete processor, compared to the integrated processors, is reflected by the lower RMSEs of Processor 1 for all three processing functions. In addition, the RMSEs of Processor 3 are lower compared to Processor 2, which indicates a higher processing accuracy achieved by increasing the tap number. According to Fig. 10(d), the primary factor that contributes to the degradation of accuracy for integrated processors is the limited tap number, whereas for discrete processors with a sufficiently large tap number the processing inaccuracy is mainly induced by the imperfect response of experimental components. We also note that the differences in RMSEs among Processors 1 - 3 are more prominent for the INT than
TABLE I: Comparison of components' parameters in discrete and integrated processors.

| Processor | Type | Tap number | OSNR of microcomb | Chirp parameter of the EOM | Error of the delay element | RTCE of the spectral shaping module |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Discrete | \(M=80\) [41] | 20 dB [41] | \(\alpha\): 0.1 [53] | \(t_{v}\): 4 [41] | 5% [41] |
| 2 | Integrated | \(M=8\) [54] | 20 dB [41] | \(\alpha\): 0.8 [48] | \(t_{v}\): 3% [55] | 9% [56] |
| 3 | Integrated | \(M=20\) | 20 dB [41] | \(\alpha\): 0.8 [48] | \(t_{v}\): 3% [55] | 9% [56] |
the other two processing functions, indicating a higher requirement for a greater number of taps to improve the processing accuracy of INT. In addition, experimental errors have a substantial impact on the RMSEs of DIF, whereas their impact on HT is very small.
## IV Error Compensation via Feedback Control
In this section, feedback control is introduced to compensate for errors induced by the imperfect response of experimental components. The benefit of feedback control is
Fig. 10: Temporal waveform of Gaussian input pulse and output waveforms from Processors 1 – 3 that perform (a) differentiation (DIF), (b) integration (INT), and (c) Hilbert transform (HT). (d) Comparison of corresponding RMSEs. In (a) – (d), we show the results with errors induced by (1) only limited tap numbers and (2) both limited tap numbers and experimental errors, together with the ideal processing results for comparison.
quantitatively analyzed by comparing the system errors with and without feedback control.
As shown in Fig. 11, we classify the error sources discussed in Section III into two categories, depending on whether amplitude or phase errors are introduced in the taps. The amplitude and phase errors refer to errors in the tap coefficients (_i.e._, \(a_{n}\) in Eqs. (1) - (3)) and time delays (_i.e._, \(n\Delta T\) in Eqs. (1) - (3)) for different taps, respectively. The sources of amplitude errors include the microcomb intensity noise, EOM chirp, TOD and SOD of the SMF, OSS shaping errors, BPD shot noise, and the bandwidth response of the EOM and BPD. The sources of phase errors include microcomb phase noise, TOD of the SMF, and BPD shot noise. We note that some of the error sources in Fig. 11 are static or slowly varying, _e.g._, the chirp of the EOM, the SOD and TOD of the SMF, and the shaping errors of the OSS. In contrast, the fluctuations in the amplitude and phase caused by the microcombs and the BPD are normally much faster, on the order of 10 GHz.
The static and slowly varying errors in Fig. 11 induced by different error sources can be compensated for by introducing feedback control to calibrate the tap coefficients set for the OSS. Fig. 12(a) shows a schematic of a MWP transversal signal processor with feedback control. A feedback control loop including all the components of the signal processor is introduced to calibrate both the amplitude and phase of each comb line based on the ideal impulse response. This allows for the compensation of the errors induced by different components in the feedback loop. During the amplitude calibration process, a microwave signal is employed as the input signal to test the impulse response of the processor channel by channel, where the same input microwave signal is modulated onto the corresponding comb line. The intensities of the microwave signals after photodetection are recorded by an oscilloscope and sent to a computer, where they are subtracted from the designed tap weights to generate error signals. Finally, the generated error signals are sent to the OSS to calibrate the attenuation of comb line intensity. After several iterations of the above process, the amplitude errors caused by the non-ideal impulse response of the system can be effectively reduced. Similarly, the static and slowly varying phase errors can be mitigated by exploiting the programmable phase characteristics of the OSS to compensate the deviation between the measured and desired phase response.
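A minimal sketch of this amplitude-calibration loop is given below; `measure_channel_response` is a hypothetical stand-in for the channel-by-channel oscilloscope measurement, and the toy per-channel gains emulate the unknown static response of the link:

```python
import numpy as np

def calibrate_taps(a_target, measure_channel_response, oss_atten,
                   n_iter=5, step=0.8):
    for _ in range(n_iter):
        measured = measure_channel_response(oss_atten)  # per-channel powers
        error = a_target - measured                     # error signals
        oss_atten = oss_atten + step * error            # update OSS settings
    return oss_atten

# toy "plant": each channel applies an unknown static gain near unity
rng = np.random.default_rng(2)
gains = 1 + rng.uniform(-0.1, 0.1, 80)
plant = lambda atten: gains * atten

a_target = np.abs(np.sinc((np.arange(80) - 40) / 8))
atten = calibrate_taps(a_target, plant, np.ones(80))
print(np.max(np.abs(plant(atten) - a_target)))          # residual error shrinks
```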
In Fig. 12(b), we compare the RMSEs for all functions with and without feedback control. The RMSEs caused by theoretical errors are also shown for comparison. As expected, the measured RMSEs with feedback control are much lower than those measured without calibration and approach the theoretical RMSEs more closely. After calibration, there are still discrepancies between the measured RMSEs and theoretical RMSEs, reflecting that there are still residual errors that cannot be compensated for with feedback control. We infer that these errors are mainly induced by rapidly varying error sources, by deviations between the simulated and experimental parameters, and by the limited resolution of the instruments such as the OSS and oscilloscope.
To further improve the system accuracy, multiple-stage feedback control can be employed. For example, another feedback loop with one more OSS can be introduced in the microcomb generation module to flatten the comb lines of the initially generated microcomb. This allows for uniform wavelength channel link gain and can also reduce the loss control range for the spectral shaping in the transversal signal processing module. Recently, self-calibrating photonic integrated circuits have been demonstrated [57, 58], where the impulse response calibration was achieved by incorporating an optical reference path to establish a Kramers-Kronig relationship and then calculate the amplitude and phase errors
Fig. 11: Amplitude and phase errors induced by different components in microcomb-based MWP transversal signal processors. EOM: electro-optic modulator. SMF: single-mode fiber. OSS: optical spectral shaper. BPD: balanced photodetector. RB: response bandwidth. TR: transmission response. SOD: second-order dispersion. TOD: third-order dispersion.
based on a Fourier transform. This offers new possibilities to achieve precise feedback control in microcomb-based MWP transversal signal processors.
Apart from implementing feedback control, there are some other methods to reduce the errors induced by experimental imperfections. For example, employing advanced mode-locking approaches [18] to reduce the noise of microcombs could be beneficial for both discrete and integrated processors. For integrated processors, the chirp of silicon EOM can be mitigated by using push-pull configurations as well as p-n depletion mode structure [53], and proper methods to calibrate the bias point [55]. The shaping errors of integrated spectral shapers can be alleviated via calibration procedures and gradient-descent control [55]. Integrated delay elements introduce additional loss especially when using a waveguide with high propagation loss, and adiabatic Euler bends can be employed to achieve low-loss and low-crosstalk waveguide bends [59]. The use of a wavelength-addressable serial integration scheme can also enable large-scale integration [60].
## V Conclusion
In summary, we analyze the processing errors induced by experimental imperfections for microcomb-based MWP transversal signal processors. We first investigate the errors arising from imperfect microcomb characteristics, EOM chirp, chromatic dispersion in the dispersive module, errors in the
Fig. 12: (a) Schematic of a microcomb-based MWP transversal signal processor with feedback control. CW laser: continuous-wave laser. EDFA: erbium-doped fiber amplifier. PC: polarization controller. MRR: microring resonator. OSS: optical spectral shaper. OC: optical coupler. EOM: electro-optic modulator. SMF: single-mode fiber. BPD: balanced photodetector. OSC: oscilloscope. (b) Comparison of measured RMSEs for DIF, INT, and HT with and without feedback control. The corresponding theoretical RMSEs are also shown for comparison. The tap number, comb spacing, length of dispersive medium, and SOD parameter are \(M=80\), \(\Delta\lambda=0.4\) nm, \(L=4.8\) km, and \(D_{2}=17.4\) ps/nm/km, respectively. The input microwave signals are Gaussian pulses with a FWHM of \(\sim\)0.17 ns.
OSS, and photodetector noise. Next, we present a global picture of the quantitative influence of different error sources on the overall system performance. Finally, we introduce feedback control to compensate for the errors and quantitatively analyze the improvement in the processing accuracy. Our results show that the influence of the error sources varies for the different processing functions studied here, and that these errors can be significantly reduced by introducing feedback control for both static and slowly varying sources of error. This work provides a useful guide for optimizing the performance of microcomb-based MWP transversal signal processors for versatile high-speed information processing applications.
|
2309.10429 | Some fixed point results for extended nonlinear contractions defined on
$d-CS$ spaces | In this paper we introduce the class of $d-CS$ spaces, thereby
obtaining a topological approach to large Kasahara spaces. This class includes
complete symmetric spaces, complete quasi $b$-metric spaces and complete
$b$-spaces, but does not include $d^*$-complete topological spaces and $d$-complete
topological spaces. Further, we present a fixed point theorem for nonlinear
extended-contractions defined on these spaces, which generalizes earlier results
obtained by Browder, Matkowski, Jachymski, Matkowski and Swiatkowski, Arandelovic
and Keckic, and Alshehri, Arandelovic and Shahzad. Also, we obtain a fixed point
theorem for nonlinear extended-contractions defined on quasi-metric spaces, which
extends recent results of Pasicki obtained for pseudo-metric spaces. | Zoran D. Mitrovic, Ivan D. Arandjelovic | 2023-09-19T08:45:59Z | http://arxiv.org/abs/2309.10429v1 |
###### Abstract
In this paper we shall introduced the class of \(d-CS\) spaces and so we shall obtained topological approach to large Kasahara spaces. This class include complete symmetric spaces, complete quasi \(b\)-metric spaces and complete \(b\)-spaces, but not include \(d^{*}\)-complete topological spaces and \(d\)-complete topological spaces. Further will be presented fixed theorem for nonlinear extended-contraction defined on this spaces, which generalize earlier results obtaned by Browder, Matkowski, Jachymski, Matkowski and Swiatkowski, Arandelovic and Keckic and Alshehri, Arandelovic and Shahzad. Also, we obtain the fixed point theorem for nonlinear extended-contraction defined on quasi-metric spaces which extended recent result's of Pasicki, obtained for pseudo-metric spaces.
keywords: fixed point, iterative sequences, \(d-CS\) complete topological space, quasi \(b\)-metric space. MSC [2020]: 54H25, 54E99, 47H10
## 1 Introduction
Extending the concepts of continuity and convergence, M. Frechet introduced in [9] the classes of metric spaces and \(E\)-spaces (in modern terminology, so-called symmetric spaces). Later, in [10], he introduced the notion of \(L\)-spaces and presented a different approach to these concepts. In this class of spaces, which need not be topological spaces, the class of convergent sequences is introduced axiomatically. S. Kasahara [15] considered fixed point results on \(d\)-complete spaces, also called Kasahara spaces (see [24]). T. Hicks [12] defined the notion of \(d\)-complete topological spaces and so obtained a topological approach to these spaces, which was extended in [21] in the form of \(d^{*}\)-complete topological spaces. Large Kasahara spaces were introduced by I. Rus [24]. The concept of a quasi-metric space was presented by W. Wilson [25].
In this paper we introduce the class of \(d-CS\) spaces, thereby obtaining a topological approach to large Kasahara spaces. This class includes complete symmetric spaces, complete quasi \(b\)-metric spaces and complete \(b\)-spaces, but does not include \(d^{*}\)-complete topological spaces and \(d\)-complete topological spaces. Further, we present a fixed point theorem for nonlinear extended-contractions defined on these spaces, which generalizes earlier results obtained by Browder [8], Matkowski [20], Jachymski, Matkowski and Swiatkowski [14], Arandelovic and Keckic [5] and Alshehri, Arandelovic and Shahzad [2]. Also, we obtain a fixed point theorem for nonlinear extended-contractions defined on quasi-metric spaces, which extends recent results of Pasicki [22; 23] obtained for pseudo-metric spaces.
## 2 Preliminary Notes
### Fixed point theory
Let \(X\) be a nonempty set and \(f:X\to X\) be an arbitrary mapping. An arbitrary element \(x\in X\) is a fixed point for \(f\) if \(x=f(x)\). For \(\vartheta_{0}\in X\), we say that a sequence \((\vartheta_{n})\) defined by \(\vartheta_{n}=f^{n}(\vartheta_{0})\) is a sequence of Picard iterates of \(f\) at point \(\vartheta_{0}\) or that \((\vartheta_{n})\) is the orbit of \(f\) at point \(\vartheta_{0}\).
Let \(d:X\times X\to[0,+\infty)\) be a mapping, then:
(1) \(f\) is contraction if there exists \(\alpha\in[0,1)\) such that
\[d(f(x),f(y))\leq\alpha d(x,y), \tag{2.1}\]
for all \(x,y\in X\);
(2) \(f\) is nonlinear contraction if there exists function \(\varphi:[0,+\infty)\to[0,+\infty)\) such that \(\varphi(r)<r\) for any \(r>0\) and
\[d(f(x),f(y))\leq\varphi(d(x,y)), \tag{2.2}\]
for all \(x,y\in X\);
(3) \(f\) is nonlinear extended-contraction if there exists function \(\varphi:[0,+\infty)\to[0,+\infty)\) such that \(\varphi(r)<r\) for any \(r>0\) and
\[d(f(x),f(y))\leq\max\{\varphi(d(x,y)),\varphi(d(x,f(x))),\varphi(d(y,f(y)))\}, \tag{2.3}\]
for all \(x,y\in X\).
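For illustration (not part of the paper), the map \(f(x)=x/(1+x)\) on \([0,+\infty)\) with the usual metric is a nonlinear contraction with comparison function \(\varphi(t)=t/(1+t)\), since \((1+x)(1+y)\geq 1+|x-y|\). A short numerical check of the convergence of its Picard iterates:

```python
def f(x):
    return x / (1 + x)   # varphi-contraction with varphi(t) = t/(1+t)

x = 5.0
for n in range(200):
    x = f(x)             # Picard iterates: x_n = x_0 / (1 + n*x_0)
print(x)                 # close to the unique fixed point 0
```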
In 1968 the first fixed point theorems for nonlinear contractions on complete metric spaces were obtained by F. Browder [8] (his statement assumes that the considered metric space is bounded and that the comparison function \(\varphi\) is monotone non-decreasing and right continuous), R. M. Bianchini and M. Grandolfi [6] (their statement assumes that \(\varphi\) is monotone non-decreasing and that \(\sum_{n=1}^{\infty}\varphi^{n}(t)<\infty\) for each \(t>0\)), and M. Furi [11] and A. Zitarosa [26] (their statement assumes that \(\varphi\) is monotone non-decreasing, that \(\lim\limits_{n\to\infty}\varphi^{n}(r)=0\) for each \(r>0\), and that the orbits of \(f\) are bounded).
**Remark 2.1**.: _In the theorem of A. Zitarosa [26], the assumption \(\varphi(t)<t\) was omitted because it follows from the two other assumptions._
**Remark 2.2**.: _The results of R. M. Bianchini and M. Grandolfi [6], M. Furi [11] and Browder [8] are special cases of the theorem of A. Zitarosa [26]._
In 1969 D. W. Boyd and J. S. W. Wong [7] presented two new fixed point results for nonlinear contractions under the assumptions:

a) \(\varphi\) is upper semicontinuous from the right (i.e., \(\overline{\lim\limits_{t\to r+}}\varphi(t)\leq\varphi(r)\) for every \(r\geq 0\)); or

b) \(\overline{\lim\limits_{t\to r+}}\varphi(t)<r\) for each \(r>0\).

These conditions are equivalent, but the assumptions of Boyd-Wong and Zitarosa are incomparable.
**Remark 2.3**.: _Result of Browder [8] follows from the both theorems of Boyd and Wong [7]._
In 1975 J. Matkowski [20] proved that the condition "orbits of \(f\) are bounded" in the theorem of Zitarosa can be omitted. This result extends the theorem of Zitarosa but it is incomparable with both theorems of Boyd and Wong.
After a long time, in 2016 L. Pasicki [22] introduced a new nonlinear contractive condition which includes the assumptions of J. Matkowski [20] and of Boyd and Wong [7]. The result of Pasicki was formulated for mappings defined on pseudo-metric spaces. His theorem has the assumption that for each \(t>0\) there exists \(\varepsilon>0\) such that
\[\varphi(s)\leq t\ \ \mbox{for any }s\in(t,t+\varepsilon).\]
Fixed point results for nonlinear contractions defined on semi-metric spaces were obtained by J. Jachymski, J. Matkowski and T. Swiatkowski [14], and for nonlinear contractions defined on symmetric spaces by I. D. Arandelovic and D. J. Keckic [5].
The first fixed point result on nonlinear extended-contractions was presented by M. Maiti, J. Achari and T. K. Pal [18]. They proved that any extended contraction \(f\) defined on a complete metric space \((X,d)\), where \(\varphi\) satisfies condition \(a)\) of Boyd-Wong's theorem and the function \(A:X\to\mathbb{R}\) defined by \(A(x)=d(x,f(x))\) is lower semicontinuous, has a unique fixed point which is the unique limit of all sequences of its Picard iterations. In 2016 L. Pasicki [22] generalized this result to pseudo-metric spaces. The corresponding result for nonlinear extended-contractions defined on symmetric spaces was presented by S. Alshehri, I. Arandelovic and N. Shahzad [2]. A fixed point theorem for nonlinear extended-contractions defined on cone metric spaces was proved in [4]. In 2018 L. Pasicki [23] obtained the corresponding result for quasi pseudo-metric spaces.
The first part of the next statement was formulated and proved by D. Adamovic [1]. Its second part was presented in [5].
**Lemma 2.4**.: _(Arandelovic-Keckic [5]) Let \(X\) be the nonempty set and the mapping \(f:X\to X\). Let \(l\) be a positive integer such that \(f^{l}\) possesses a unique fixed point, say \(u_{*}\). Then \(u_{*}\) is the unique fixed point of \(f\). Also, if \(X\) is a topological space and any sequence of Picard iterates defined by \(f^{l}\) is convergent to \(u_{*}\), then the sequence of Picard iterates defined by \(f\) is convergent to \(u_{*}\)._
### Topological definitions
In this subsection we give some definitions.
**Definition 2.5**.: _Let \(X\) be a nonempty set, \(d:X\times X\to[0,+\infty)\) and \(s\in\mathbb{R}\). We define the following properties:_

_(A0) \(d(x,y)=0\) implies \(x=y\);_

_(A1) \(d(x,y)=0\) if and only if \(x=y\);_

_(A2) \(d(x,y)=d(y,x)\);_

_(A3) \(d(x,y)\leq d(x,z)+d(z,y)\)._

_If for any \(x,y,z\in X\), \((X,d)\) satisfies: (A0), (A2) and (A3), then \((X,d)\) is a pseudo-metric space; (A1), (A2) and (A3), then \((X,d)\) is a metric space; (A1) and (A2), then \((X,d)\) is a symmetric space; (A0) and (A3), then \((X,d)\) is a pseudo quasi-metric space; (A1) and (A3), then \((X,d)\) is a quasi-metric space._
We shall use the classical term pseudo-metric space, as in the famous textbooks [16] and [17]. In the last 20 years this notion was reintroduced by P. Hitzler and A. K. Seda [13] (they used the term dislocated metric spaces) and A. Amini-Harandi [3] (he used the term metric-like spaces).
**Example 2.6**.: _Let \(X=\mathbb{R}\) and \(d:X\times X\to[0,+\infty)\) defined by_
\[d(x,y)=|y-x|+\frac{y-x}{2},\]
_for \(x,y\in X\). Then \((X,d)\) is a quasi-metric space. Note that (A2) does not hold for the mapping \(d\)._
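A quick numerical sanity check of Example 2.6 (illustrative only, not part of the paper): (A1) and (A3) hold on random samples, while (A2) visibly fails:

```python
import itertools
import random

def d(x, y):
    return abs(y - x) + (y - x) / 2

random.seed(0)
pts = [random.uniform(-10, 10) for _ in range(40)]
# (A1): d(x,y) = 0 iff x = y (positivity for x != y, since |y-x| >= -(y-x)/2)
assert all(d(x, y) > 0 for x in pts for y in pts if x != y)
# (A3): triangle inequality on all sampled triples
assert all(d(x, y) <= d(x, z) + d(z, y) + 1e-12
           for x, y, z in itertools.product(pts, repeat=3))
print(d(0, 1), d(1, 0))  # 1.5 vs 0.5, so (A2) fails
```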
Let \((X,d)\) be a quasi-metric space and \((r_{n})\) a sequence of nonnegative real numbers such that \(r_{n+1}\leq r_{n}\) and \(\lim r_{n}=0\). A quasi-metric space is a topological space with \(\{B(x,r_{n})\}\) as a base of the neighborhood filter of the point \(x\), where
\[B(x,r_{n})=\{y\in X:d(y,x)<r_{n}\}.\]
**Definition 2.7**.: _Let \((X,d)\) be a quasi-metric space. A sequence \((x_{n})\subseteq X\) is said to be a left Cauchy sequence if for given \(\varepsilon>0\) there is \(N\in\mathbb{N}\) such that \(d(x_{n},x_{m})<\varepsilon\) for all \(m>n\geq N.\) Then \((X,d)\) is complete if and only if every left Cauchy sequence converges to some \(x\in X\)._
Let \((X,d)\) and \((Y,d)\) be two quasi-metric spaces. A mapping \(f:X\to Y\) is sequentially continuous if for each sequence \((\vartheta_{n})\subseteq X\) from \(\lim d(\vartheta_{n},p)=0\) it follows that \(\lim d(f(\vartheta_{n}),f(p))=0\).
### Comparison Functions
Let \(\varphi:[0,+\infty)\to[0,+\infty)\) be a mapping which satisfies \(\varphi(0)=0\) and \(\varphi(r)<r\) for any \(r>0\). Then \(\varphi\) is a comparison function.
**Proposition 2.8**.: _Let \(\varphi:[0,+\infty)\to[0,+\infty)\) be monotone non-decreasing comparison function, such that for each \(r>0\)\(\lim\limits_{n\to\infty}\varphi^{n}(r)=0\). If there exists \(r>0\), which satisfies_
\[\overline{\lim\limits_{t\to r+}}\varphi(t)=r\]
_then there exists \(\varepsilon>0\) such that_
\[\varphi(t)=r\ \ \text{for any }t\in(r,r+\varepsilon).\]
Proof.: From
\[\overline{\lim\limits_{t\to r+}}\varphi(t)=r,\]
it follows that
\[\lim\limits_{t\to r+}\varphi(t)=r,\]
because \(\varphi\) is monotone non-decreasing. If for each \(\varepsilon>0\) there exists \(t\in(r,r+\varepsilon)\) such that \(\varphi(t)>r\), then by monotonicity \(\varphi(s)>r\) for every \(s>r\), so \(\varphi\) maps \((r,+\infty)\) into \((r,+\infty)\), which is in contradiction with \(\lim\limits_{n\to\infty}\varphi^{n}(r+\varepsilon)=0\).
Proposition 2.8 implies that the fixed point theorem of Matkowski [20] is included in the result of Pasicki [22].
**Proposition 2.9**.: _Let \(\varphi:[0,+\infty)\to[0,+\infty)\) be a comparison function such that one of the following conditions is satisfied: \(\alpha)\)\(\overline{\lim\limits_{t\to s+}}\varphi(t)<s\) for each \(s>0\); \(\beta)\) for any \(s>0\) there exists \(\varepsilon>0\) such that \(\varphi(t)\leq s\) for any \(t\in(s,s+\varepsilon)\). Then \(\lim\limits_{n\to+\infty}\varphi^{n}(s)=0\) for each \(s>0\)._
Proof.: Let \(\alpha)\) be satisfied. Suppose that \(s>0\) and \(\varphi^{n}(s)>0\) for any \(n\). The sequence \((\varphi^{n}(s))\) is monotone decreasing, because \(\varphi(r)<r\), and \(0\) is one of its lower bounds. So it is a convergent sequence. Let \(\lim_{n\to+\infty}\varphi^{n}(s)=b>0\). Then there exists a positive integer \(n_{0}\) such that \(n\geq n_{0}\) implies \(\varphi^{n+1}(s)<b\), because
\[\overline{\lim\limits_{t\to b+}}\varphi(t)<b.\]
So we obtain a contradiction, because \(\varphi^{n}(s)>\varphi^{n+1}(s)>b\) for any \(n\).
Let \(\beta)\) be satisfied. Then \(\varphi^{2}\) satisfies \(\alpha)\), so \(\lim_{n\to+\infty}\varphi^{2n}(s)=0\); since the sequence \((\varphi^{n}(s))\) is monotone decreasing, \(\lim_{n\to+\infty}\varphi^{n}(s)=0\) follows.
**Lemma 2.10**.: _Let \(\varphi:[0,+\infty)\to[0,+\infty)\) be comparison function, which satisfies_
\[\overline{\lim\limits_{t\to r-}}\varphi(t)<r\ \text{and}\ \overline{\lim\limits_{t\to r+}} \varphi(t)\leq r\]
_for each \(r>0\), such that for every \(s>0\) which satisfies_
\[\overline{\lim_{t\to s+}}\varphi(t)=s,\]
_there exists \(\varepsilon>0\) such that_
\[\varphi(t)=s\ \ \text{ for any }t\in(s,s+\varepsilon).\]
_Then \(\phi:[0,+\infty)\to[0,+\infty)\) defined by \(\phi(0)=0\) and_

\[\phi(t)=\sup_{s\in[0,t]}\varphi(s)\]

_is a monotone non-decreasing comparison function such that \(\lim_{n\to+\infty}\phi^{n}(r)=0\) for each \(r>0\), which satisfies_
\[\varphi(t)\leq\phi(t),\]
_for any \(t>0\)._
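Numerically, the envelope \(\phi\) of Lemma 2.10 is a running supremum; the following sketch (with a sample non-monotone comparison function chosen purely for illustration) computes it on a grid and checks \(\varphi\leq\phi\) and monotonicity:

```python
import numpy as np

def varphi(t):
    # sample non-monotone comparison function (illustration only)
    return 0.5 * t * (1 + 0.3 * np.sin(10 * t))

grid = np.linspace(0.0, 2.0, 2001)
phi_grid = np.maximum.accumulate(varphi(grid))   # running supremum over [0, t]
assert np.all(varphi(grid) <= phi_grid + 1e-12)  # varphi <= phi pointwise
assert np.all(np.diff(phi_grid) >= 0)            # phi is monotone non-decreasing
```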
## 3 The \(d-CS\) spaces
Here we define \(d-CS\) spaces and give some properties of those spaces.
**Definition 3.1**.: _Let \(X\) be an arbitrary set, \(d:X\times X\to[0,+\infty)\) and \(\tau_{d}\) the topology on \(X\) defined by the family of closed sets as follows: a set \(A\subseteq X\) is closed if and only if for each \(x\in X\), \(d(A,x)=0\) implies \(x\in A\), where_
\[d(A,x)=\inf\{d(a,x):a\in A\}.\]
_Then the ordered triplet \((X,d,\tau_{d})\) is a \(d-C\) space._
Let \((X,d,\tau_{d})\) be a \(d-C\) space and \(x\in X\). By \(B(x,r)=\{y\in X:\ d(y,x)<r\}\) we denote the ball with center \(x\) and radius \(r\). Such a ball need not be an open set.
**Example 3.2**.: _Let \(X=[0,1]\) and \(d:X\times X\to[0,+\infty)\) be defined by \(d(x,y)=1-|x-y|\), for all \(x,y\in X.\) Then \(A=\{1\}\) is not a closed set because_

\[d(A,0)=0\text{ and }0\notin A.\]

_Also, \(B(1,1)=\{y\in X:1-|1-y|<1\}=[0,1)\) is not an open set._
**Proposition 3.3**.: _If \((X,d,\tau_{d})\) is a \(d-C\) space, then the family \(\{B(x,r):r>0\}\) forms a local basis at \(x\). Also, if \(d(x_{n},x)\to 0\) then \(x_{n}\to x\) in the topology \(\tau_{d}\)._
Proof.: Let \(U\ni x\) be an open set. Then \(X\setminus U\) is closed and, since \(x\notin X\setminus U\), we have \(d(X\setminus U,x)=\eta>0\). Hence \(B(x,\eta)\subseteq U\).
Assume that \(d(x_{n},x)\to 0\). If \(U\) is an open set that contains \(x\), then \(U\supseteq B(x,\eta)\) for some \(\eta>0\). The last set contains almost all members of the sequence \((x_{n})\).
The convergence of a sequence \((x_{n})\) in the topology \(\tau_{d}\) need not imply \(d(x_{n},x)\to 0\).
The fact that \(\lim d(x_{n},x)=0\) will be denoted by \(\lim^{d}x_{n}=x.\)
**Definition 3.4**.: _Let \((X,d,\tau_{d})\) be a \(d-C\) space. A function \(f:X\to X\) is said to be left sequentially \(d\)-continuous if from \(\lim d(x_{n},x)=0\) follows \(\lim d(f(x_{n}),f(x))=0\), for any \(x\in X\) and each \((x_{n})\subseteq X\)._
**Definition 3.5**.: _Let \((X,d,\tau_{d})\) be a \(d-C\) space. We define the following two properties:_
_(W3) \(\lim d(x_{n},x)=0\) and \(\lim d(x_{n},y)=0\) implies \(x=y\);_

_(JMS) \(\lim d(x_{n},z_{n})=0\) and \(\lim d(y_{n},z_{n})=0\) implies \(\overline{\lim}d(x_{n},y_{n})\neq+\infty\)._
The property (W3) was introduced by M. Frechet [9] and (JMS) by J. Jachymski, J. Matkowski and T. Swiatkowski [14].
The next statement gives a characterization of \(d-C\) spaces which satisfy the property (JMS). It generalizes the famous result of J. Jachymski, J. Matkowski and T. Swiatkowski [14], obtained for symmetric spaces.
**Theorem 3.6**.: _Let \((X,d)\) be a \(d-C\) space. Then the following conditions are equivalent: (i) \((X,d)\) satisfies property (JMS); (ii) There exist \(\delta,\eta>0\) such that for any \(x,y,z\in X\),_
\[d(x,z)+d(y,z)<\delta\text{ implies that }d(x,y)<\eta.\]
_(iii) There exists \(r>0\) such that_
\[R=\sup\left\{\operatorname{diam}\left(B(x,r)\right):x\in X\right\}<+\infty.\]
Proof.: \((i)\Rightarrow(iii)\). Suppose, on the contrary, that \(\sup\left\{\operatorname{diam}\left(B(x,r)\right):x\in X\right\}=+\infty\) for any \(r>0\). Then for each \(n\) there exist \(z_{n}\in X\) and \(x_{n},y_{n}\in B(z_{n},1/n)\) such that \(d(x_{n},y_{n})>n\). Hence \(\lim d(x_{n},z_{n})=\lim d(y_{n},z_{n})=0\) while \(\overline{\lim}d(x_{n},y_{n})=+\infty\). So we obtain that \(\neg(iii)\Rightarrow\neg(i)\).
\((iii)\Rightarrow(ii)\). Put \(\delta=\dfrac{r}{2}\) and \(\eta=R\).
\((ii)\Rightarrow(i)\). From \(\lim d(x_{n},z_{n})=0\) and \(\lim d(y_{n},z_{n})=0\) it follows that there exists a natural number \(N\) such that \(n>N\) implies \(d(x_{n},z_{n})<\dfrac{\delta}{2}\) and \(d(y_{n},z_{n})<\dfrac{\delta}{2}\). Hence, \(d(x_{n},y_{n})<\eta\) for any \(n>N\).
**Remark 3.7**.: _(ii) is equivalent to:_
\[d(x,z)+d(y,z)<\delta\text{ implies }\max\{d(x,y),d(y,x)\}<\eta. \tag{3.1}\]
**Definition 3.8**.: _Let \((X,d,\tau_{d})\) be a \(d-C\) space. A sequence \((x_{n})\subseteq X\) is said to be a left Cauchy sequence if for given \(\varepsilon>0\) there is \(N\in\mathbb{N}\) such that \(d(x_{n},x_{m})<\varepsilon\) for all \(m>n\geq N.\) If \((X,d,\tau_{d})\) is a \(d-C\) space, then it is a \(d-CS\) space if and only if every left Cauchy sequence converges to some \(x\in X\)._
**Lemma 3.9**.: _Let \((X,d,\tau_{d})\) be a \(d-CS\) space, which satisfies the properties (W3) and (JMS), \(f,g:X\to X\) be two left sequentially \(d\)-continuous mappings and \(d_{*}:X\times X\to[0,+\infty)\) be defined by: \(d_{*}(x,y)=0\) for \(x=y\) and_

\[d_{*}(x,y)=\max\{d(x,y),d(f(x),g(y)),\ldots,d(f^{n}(x),g^{n}(y))\}, \tag{3.2}\]

_otherwise. Then the space \((X,d_{*},\tau_{d_{*}})\) is a \(d-CS\) space which satisfies the properties (W3) and (JMS)._
Proof.: The space \((X,d_{*},\tau_{d_{*}})\) is a \(d-C\) space which satisfies the properties (W3) and (JMS), because the conditions of Definitions 3.1 and 3.5 are trivially satisfied. Also, we have \(d(x,y)\leq d_{*}(x,y)\) for any \(x,y\in X\). Further, if \((x_{j})\subseteq X\) is an arbitrary left Cauchy sequence in \((X,d_{*},\tau_{d_{*}})\), then \((x_{j})\) is a left Cauchy sequence in \((X,d,\tau_{d})\), which implies that \((X,d_{*},\tau_{d_{*}})\) is a \(d-CS\) space, because \((X,d,\tau_{d})\) is a \(d-CS\) space.
## 4 Main Results
By \(\Phi\) we denote set of all comparison functions \(\varphi:[0,+\infty)\to[0,+\infty)\) which satisfies
\[\overline{\lim_{t\to r^{-}}}\varphi(t)<r\text{ and }\overline{\lim_{t\to r +}}\varphi(t)\leq r\]
for each \(r>0\), such that for every \(s>0\) which satisfies
\[\overline{\lim}_{t\to s+}\varphi(t)=s,\]
there exists \(\varepsilon>0\) such that
\[\varphi(t)=s\ \ \text{ for any }t\in(s,s+\varepsilon).\]
**Example 4.1**.: _Let \(\varphi:[0,+\infty)\to[0,+\infty)\) defined by_
\[\varphi(t)=\left\{\begin{array}{ll}\frac{t}{2},&t\in[0,\frac{1}{2}],\\ \frac{1}{2},&t>\frac{1}{2},\end{array}\right.\]
_then \(\varphi\in\Phi\)._
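A short numerical check (illustrative only) that this \(\varphi\) behaves as a comparison function with \(\varphi^{n}(t)\to 0\):

```python
def phi(t):
    # comparison function of Example 4.1
    return t / 2 if t <= 0.5 else 0.5

for t0 in (0.3, 0.5, 10.0):
    t = t0
    for _ in range(60):
        assert phi(t) < t          # varphi(t) < t for t > 0
        t = phi(t)
    print(t0, t)                   # iterates shrink towards 0
```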
**Lemma 4.2**.: _Let \(\varphi_{1},\ldots,\varphi_{n}\in\Phi\). Then there exists a monotone non-decreasing function \(\varphi\in\Phi\) such that_
\[\varphi_{k}(x)\leq\varphi(x),\]
_for each \(1\leq k\leq n\) and \(x\geq 0\)._
Proof.: Let
\[\varphi(x)=\max\{\varphi_{1}(x),\ldots,\varphi_{n}(x)\}.\]
Then it is easy to show that \(\varphi(0)=0\), that \(\varphi_{k}(t)\leq\varphi(t)<t\)\((1\leq k\leq n)\) for all \(t>0\), and that

\[\overline{\lim_{t\to x-}}\varphi(t)<x\quad\text{and}\quad\overline{\lim_{t\to x+}}\varphi(t)\leq x.\]
**Lemma 4.3**.: _Let \((X,d,\tau_{d})\) be a \(d-CS\) space which satisfies the properties (W3) and (JMS), let \(f:X\to X\), let \(\varphi\in\Phi\) and let \(\delta,\eta\) be defined as in \((ii)\) of Theorem 3.6. If_
\[d(f(x),f(y))\leq\varphi(d(x,y))\]
_for any \(x,y\in X\) and_
\[\sup_{t\in[0,\eta]}\varphi(t)\leq\frac{\delta}{2},\]
_then \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges, in the topology \(\tau_{d}\), to \(y\)._
Proof.: For any \(p,q\in X\) we have
\[d(f(p),f(q))\leq\varphi(d(p,q))\leq d(p,q),\]
which implies that \(f\) is left sequentially \(d\)-continuous.
By Lemma 2.10 the function \(\phi:[0,+\infty)\to[0,+\infty)\) defined by \(\phi(0)=0\) and

\[\phi(t)=\sup_{s\in[0,t]}\varphi(s)\]

is a monotone non-decreasing comparison function such that \(\lim_{n\to+\infty}\phi^{n}(r)=0\) for each \(r>0\), which satisfies

\[\varphi(t)\leq\phi(t),\]

for any \(t>0\), and \(\phi(\eta)\leq\frac{\delta}{2}\).
Let \(x\in X\). Then
\[d(f^{n}(x),f^{m+n}(x))\leq\phi^{n}(d(x,f^{m}(x)))\text{ for any }m,n\in{\bf N}.\]
So
\[d(f^{n}(x),f^{n+1}(x))\leq\phi^{n}(d(x,f(x))),\]
which implies that
\[d(f^{n}(x),f^{n+1}(x))\to 0.\]
Then there exists \(k\in{\bf N}\) such that
\[d(f^{k}(x),f^{k+1}(x))\leq\min\{\frac{\delta}{2},\eta\}.\]
We shall prove that for all \(n\in{\bf N}\),
\[d(f^{k}(x),f^{k+n}(x))\leq\eta. \tag{4.1}\]
By definition of \(k\), we get that (4.1) is valid for \(n=1\). Now, assume that (4.1) is satisfied for some \(n\in{\bf N}\). From
\[d(f^{k}(x),f^{k+1}(x))\leq\frac{\delta}{2}\]

and

\[d(f^{k+1}(x),f^{k+n+1}(x))\leq\phi(d(f^{k}(x),f^{k+n}(x)))\leq\phi(\eta)\leq\frac{\delta}{2},\]
it follows that
\[d(f^{k}(x),f^{k+1}(x))+d(f^{k+1}(x),f^{k+n+1}(x))\leq\delta,\]
which by Theorem 3.6 implies that
\[d(f^{k}(x),f^{k+n+1}(x))\leq\eta.\]
So, by induction we get that (4.1) is satisfied for any \(n\geq 1\). Thus
\[d(f^{k+n}(x),f^{k+n+m}(x))\leq\phi^{n}(\eta),\text{ for any }m,n\in\mathbf{N}.\]
Hence \((f^{n}(x))\) is a left Cauchy sequence.
Then there exists \(y\in X\) such that \(\lim^{d}f^{n}(x)=y\). Since \(f\) is left sequentially \(d\)-continuous, we have \(\lim^{d}f^{n+1}(x)=f(y)\). Now we get that \(f(y)=y\), because \((X,d,\tau_{d})\) satisfies (W3).

Since \(\lim d(f^{n}(x),y)=0\), by Proposition 3.3 we have that \(f^{n}(x)\to y\) in the topology \(\tau_{d}\).
If \(y^{*}\) is another fixed point for \(f\), then for all \(n\) we have
\[d(y,y^{*})=d(f^{n}(y),f^{n}(y^{*}))\leq\phi^{n}(d(y,y^{*}))\to 0,\text{ as }n\to\infty.\]
**Theorem 4.4**.: _Let \((X,d,\tau_{d})\) be a \(d-CS\) space which satisfies the properties (W3) and (JMS), \(f:X\to X\) and \(\varphi\in\Phi\). If_
\[d(f(x),f(y))\leq\varphi(d(x,y)) \tag{4.2}\]
_for any \(x,y\in X\), then \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges in \(d\) and, by Proposition 3.3, also converges to the same limit point in the topology \(\tau_{d}\)._
Proof.: Let \(\delta,\eta\) be defined as in (ii) of Theorem 3.6. By Lemma 2.10 the function \(\phi:[0,+\infty)\to[0,+\infty)\) defined by \(\phi(0)=0\) and

\[\phi(t)=\sup_{s\in[0,t]}\varphi(s)\]

is a monotone non-decreasing comparison function such that \(\lim_{n\to+\infty}\phi^{n}(r)=0\) for each \(r>0\), which satisfies
\[\varphi(t)\leq\phi(t),\]
for any \(t>0\).
If \(\phi(\eta)\leq\delta/2\), then from Lemma 4.3, it follows that \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges, in the topology \(\tau_{d}\), to \(y\).
Now assume that \(\phi(\eta)>\dfrac{\delta}{2}\). Then there exists the least positive integer \(j>1\) such that \(\phi^{j}(\eta)\leq\delta/2\). Also, we have that
\[d(f^{j}(x),f^{j}(y))\leq\phi^{j}(d(x,y)),\]
for any \(x,y\in X\), and \(\phi^{j}\in\Phi\). By Lemma 4.3 we obtain that \(f^{j}\) has a unique fixed point, say \(z\in X\), and for each \(x\in X\) the sequence of Picard iterates defined by \(f^{j}\) at \(x\) converges, in the topology \(\tau_{d}\), to \(z\). From Lemma 2.4 it follows that \(f\) has a unique fixed point \(z\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges, in the topology \(\tau_{d}\), to \(z\).
**Theorem 4.5**.: _Let \((X,d,\tau_{d})\) be a \(d-CS\) space which satisfies the properties (W3) and (JMS), \(f:X\to X\) be left sequentially \(d\)-continuous and \(\varphi_{1},\varphi_{2},\varphi_{3}\in\Phi\). If_
\[d(f(x),f(y))\leq\max\{\varphi_{1}(d(x,y)),\varphi_{2}(d(x,f(x))),\varphi_{3}(d(y,f(y)))\} \tag{4.3}\]
_for any \(x,y\in X\), then \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges in \(d\). Also, the Picard iterates converge to the same limit point in the topology \(\tau_{d}\)._
Proof.: By Lemma 4.2 there exists a monotone non-decreasing \(\varphi\in\Phi\) such that
\[\varphi_{k}(x)\leq\varphi(x),\]
for each \(1\leq k\leq 3\) and \(x\geq 0\). We get that
\[d(f(x),f(y))\leq\varphi(\max\{d(x,y),d(x,f(x)),d(y,f(y))\}).\]
Let \(\delta,\eta\) be defined as in (ii) of Theorem 3.6. By Lemma 2.10 the function \(\phi:[0,+\infty)\to[0,+\infty)\) defined by \(\phi(0)=0\) and
\[\phi(t)=\sup_{s\in[0,t]}\varphi(s)\]
is a monotone non-decreasing comparison function, that is, \(\lim_{n\to+\infty}\phi^{n}(r)=0\) for every \(r>0\), which satisfies
\[\varphi(t)\leq\phi(t),\]
for any \(t>0\).
Define \(d^{*}:X\times X\to[0,+\infty)\) by \(d^{*}(x,y)=0\) if \(x=y\) and
\[d^{*}(x,y)=\max\{d(x,y),d(x,f(x)),d(y,f(y))\}\]
otherwise. Then \((X,d^{*},\tau_{d^{*}})\) is a \(d-C\) space. Also, we have
\[d(x,y)\leq d^{*}(x,y)\]
for any \(x,y\in X\).
So, if \((x_{n})\subseteq X\) is an arbitrary left Cauchy sequence in \((X,d^{*},\tau_{d^{*}})\) then it is a left Cauchy sequence in \((X,d,\tau_{d})\), which implies its convergence. Hence, \((X,d^{*},\tau_{d^{*}})\) is a \(d-CS\) space.
From \(\lim d^{*}(x_{n},x)=0\) and \(\lim d^{*}(x_{n},y)=0\) it follows that \(\lim d(x_{n},x)=0\) and \(\lim d(x_{n},y)=0\), which implies \(x=y\), because \((X,d,\tau_{d})\) has property (W3). So, \((X,d^{*},\tau_{d^{*}})\) has property (W3).
From \(d^{*}(x,z)+d^{*}(y,z)<\delta\) we get the following inequalities: \(d(x,z)+d(y,z)<\delta\), \(d(x,f(x))+d(y,z)<\delta\) and \(d(x,f(x))+d(y,f(y))<\delta\), which implies that
\[d^{*}(x,y)\leq\eta+2\delta=\eta^{*},\]
because \((X,d,\tau_{d})\) has property (JMS). So, \((X,d^{*},\tau_{d^{*}})\) has property (JMS).
Let \(x,y\in X\). From
\[d(f(x),f^{2}(x))\leq\phi(d(x,f(x))),d(f(y),f^{2}(y))\leq\phi(d(y,f(y)))\]
and
\[d(f(x),f(y))\leq\varphi(d^{*}(x,y)),\]
we obtain
\[d^{*}(f(x),f(y))\leq\varphi(d^{*}(x,y)).\]
Now the statement follows from Theorem 4.4.
The following theorem extends the previous results presented by M. Marjanovic [19].
**Theorem 4.6**.: _Let \((X,d,\tau_{d})\) be a \(d-CS\) space which satisfies the properties (W3) and (JMS), \(f:X\to X\) be left sequentially \(d\)-continuous and \(\varphi\in\Phi\). If_
\[d(f^{n+1}(x),f^{n+1}(y))\leq\varphi(\max_{0\leq i\leq n}\{d(f^{i}(x),f^{i}(y) )\}), \tag{4.4}\]
_for any \(x,y\in X\), then \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges in \(d\). Also, the Picard iterates converge to the same limit point in the topology \(\tau_{d}\)._
Proof.: By Lemma 3.9 the space \((X,d_{*},\tau_{d_{*}})\) is a \(d-CS\) space which satisfies the properties (W3) and (JMS). Further, we get that
\[d_{*}(f^{n+1}(x),f^{n+1}(y))\leq\varphi(d_{*}(x,y)).\]
By Theorem 4.4 it follows that \(f^{n+1}\) has a unique fixed point, say \(q\), which is the unique limit of all Picard sequences defined by \(f^{n+1}\). By Lemma 2.4 we obtain that \(q\) is the unique fixed point of \(f\) and the unique limit of all Picard sequences defined by \(f\).
## 5 Fixed point result on quasi-metric space
In this section we give a result for nonlinear contractions in quasi-metric spaces.
**Theorem 5.1**.: _Let \((X,d)\) be a complete quasi-metric space, \(f:X\to X\) be left sequentially \(d\)-continuous and \(\psi_{1},\psi_{2},\psi_{3}\in\Phi\). If_
\[d(f(x),f(y))\leq\max\{\psi_{1}(d(x,y)),\psi_{2}(d(x,f(x))),\psi_{3}(d(y,f(y)))\}\]
_for any \(x,y\in X\), then \(f\) has a unique fixed point \(y\in X\) and for each \(x\in X\) the sequence of Picard iterates defined by \(f\) at \(x\) converges in \(d\). Also, the Picard iterates converge to the same limit point in the topology \(\tau_{d}\)._
Proof.: Let \(\psi:[0,+\infty)\to[0,+\infty)\) be the mapping defined by the formula
\[\psi(t)=\max\{\psi_{1}(t),\psi_{2}(t),\psi_{3}(t)\}.\]
Then \(\psi\in\Phi\) and
\[d(f(x),f(y))\leq\max\{\psi(d(x,y)),\psi(d(x,f(x))),\psi(d(y,f(y)))\}.\]
Let \(x_{0}\in X\) be arbitrary and \((x_{n})\) the sequence of Picard iterates of \(f\) at the point \(x_{0}\), so that \(x_{n+1}=f(x_{n})\).
Now we shall prove that \(\lim d(f(x_{n-1}),f(x_{n}))=0\).
Let \(d(f(x_{n-1}),f(x_{n}))=a_{n}\), \(a_{1}=b_{1}\) and \(b_{n+1}=\psi(b_{n})\). From
\[d(f(x),f^{2}(x))\leq\max\{\psi(d(x,f(x))),\psi(d(f(x),f^{2}(x)))\}\]
it follows
\[d(f(x),f^{2}(x))\leq\psi(d(x,f(x))).\]
So
\[a_{n}=d(f(x_{n-1}),f(x_{n}))\leq\psi(d(f(x_{n-2}),f(x_{n-1})))\leq\cdots\leq\psi^{n-1}(d(f(x_{0}),f(x_{1})))=\psi^{n-1}(b_{1})=b_{n},\]
it follows that \(0\leq a_{n}\leq b_{n}\), because \(d\) is a nonnegative mapping. So \(\lim a_{n}=0\), because \(\lim b_{n}=\lim\psi^{n-1}(b_{1})=0\).
Further we shall prove that \(\lim d(f(x_{n}),f(x_{n-1}))=0\).
Let \(d(f(x_{n}),f(x_{n-1}))=c_{n}\), \(c_{1}=d_{1}\) and \(d_{n+1}=\psi(d_{n})\). From
\[d(f^{2}(x),f(x))\leq\psi(d(f(x),x))\]
we obtain
\[c_{n}=d(f(x_{n}),f(x_{n-1}))\leq\psi(d(f(x_{n-1}),f(x_{n-2})))\leq\cdots\leq\psi^{n-1}(d(f(x_{1}),f(x_{0})))=\psi^{n-1}(d_{1})=d_{n},\]
it follows that \(0\leq c_{n}\leq d_{n}\), because \(d\) is a nonnegative mapping. So \(\lim c_{n}=0\), because \(\lim d_{n}=\lim\psi^{n-1}(d_{1})=0\).
Now we shall prove that \((x_{n})\) is a left Cauchy sequence. Assume that \((x_{n})\) is not a left Cauchy sequence. Then there exist \(\varepsilon>0\) and sequences of positive integers \((m_{k})\) and \((n_{k})\), such that for every \(k\in{\bf N}\):
i) \(m_{k}>n_{k}\geq k\)
ii) \(h_{k}=d(f(x_{n_{k}}),f(x_{m_{k}}))\geq\varepsilon\), where \(m_{k}\) is the smallest positive integer which satisfies ii).
It follows that \(d(f(x_{n_{k}}),f(x_{m_{k}-1}))<\varepsilon\) for any \(k\). So
\[h_{k}=d(f(x_{n_{k}}),f(x_{m_{k}}))\leq d(f(x_{n_{k}}),f(x_{m_{k}-1}))+d(f(x_{m_{k}-1}),f(x_{m_{k}}))\leq\varepsilon+a_{m_{k}},\]
which, together with \(h_{k}\geq\varepsilon\) and \(\lim a_{m_{k}}=0\), implies that \(\lim h_{k}=\varepsilon\). Hence
\[h_{k} = d(f(x_{n_{k}}),f(x_{m_{k}}))\] \[\leq d(f(x_{n_{k}}),f(x_{n_{k}+1}))+d(f(x_{n_{k}+1}),f(x_{m_{k}+1}))\] \[+ d(f(x_{m_{k}+1}),f(x_{m_{k}}))\] \[\leq a_{n_{k}+1}+\max\{\psi(h_{k}),\psi(a_{m_{k}+1}),\psi(c_{m_{k}+1}) \}+c_{m_{k}+1}\] \[\leq a_{k}+\psi(h_{k})+c_{k}.\]
We get for some \(k\) that \(\varepsilon\leq h_{k}\leq\psi(h_{k})\), which is a contradiction. So \((x_{n})\) is a left Cauchy sequence. It is convergent because \((X,d)\) is complete. Let \(y\in X\) be its limit. Then \(\lim f(x_{n})=f(y)\), because \(f\) is left sequentially \(d\)-continuous; since \(f(x_{n})=x_{n+1}\to y\) as well, it follows that \(f(y)=y\).
Let \(y_{0}\neq x_{0}\) be arbitrary and \((y_{n})\) the sequence of Picard iterates of \(f\) at the point \(y_{0}\). Hence
\[d(f(x_{n}),f(y_{n}))\leq\max\{\psi(d(x_{n},y_{n})),\psi(d(x_{n},f(x_{n}))),\psi(d(y_{n},f(y_{n})))\}.\]
When \(n\to\infty\), we get that \(d(f(x_{n}),f(y_{n}))\to 0\), because \(\lim d(x_{n},f(x_{n}))=0\), \(\lim\psi^{n}(d(x_{0},y_{0}))=0\) and \(\lim d(y_{n},f(y_{n}))=0\). Hence, all sequences of Picard iterates have the same limit.
If there exist \(p,q\in X\) such that \(p=f(p)\) and \(q=f(q)\), then
\[d(p,q)\leq\max\{\psi(d(p,q)),\psi(d(p,p)),\psi(d(q,q))\}=\psi(d(p,q)).\]
So \(d(p,q)=0\), which implies \(p=q\).
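As a quick numerical sanity check of Theorem 5.1 (an illustrative sketch of ours, not part of the proof), one can run the Picard iteration on a toy complete quasi-metric space: take \(X=\mathbb{R}\) with the asymmetric distance \(d(x,y)=2\max(x-y,0)+\max(y-x,0)\), the map \(f(x)=x/2\), and \(\psi_{1}=\psi_{2}=\psi_{3}=\psi\) with \(\psi(t)=t/2\); all names in the snippet are our own.

```python
# Illustrative sketch (not from the paper): Picard iteration on a toy
# complete quasi-metric space (R, d), with d(x, y) = 2*(x - y)^+ + (y - x)^+.

def d(x: float, y: float) -> float:
    """Asymmetric distance on the reals; d(x, y) = 0 iff x == y."""
    return 2.0 * max(x - y, 0.0) + max(y - x, 0.0)

def f(x: float) -> float:
    """Contraction: d(f(x), f(y)) = d(x, y) / 2 <= psi(d(x, y))."""
    return x / 2.0

def psi(t: float) -> float:
    """Comparison function: psi^n(t) = t / 2**n -> 0 for every t > 0."""
    return t / 2.0

x = 5.0
for n in range(8):
    # a_{n+1} from the proof: the step length of the Picard sequence.
    assert d(f(x), f(f(x))) <= psi(d(x, f(x))) + 1e-12
    print(f"n={n}: x_n={x:.6f}, d(f(x_n), f^2(x_n))={d(f(x), f(f(x))):.6f}")
    x = f(x)
# The step lengths halve at every iteration, so (x_n) is a left Cauchy
# sequence converging to the unique fixed point 0 = f(0), as predicted.
```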
**Remark 5.2**.: _Note that Theorem 5.1 is an extension of some results of Pasicki, Theorem 2.5 in [22] and Theorem 3.1 in [23]._
**Acknowledgments:** The first author was supported by the Serbian Ministry of Science, Technological Development and Innovation - Grant number 451-03-47/2023-01/200105, 3.2.2023.
**Author's contributions**: All authors have read and agreed to the published version of the manuscript.
**Funding**: This research received no external funding.
**Conflicts of interest**: The authors declare no conflict of interest.
**Availability of data and materials**: Data sharing is not applicable to this article as no data set were generated or analyzed during the current study. |
2309.11737 | Choice-75: A Dataset on Decision Branching in Script Learning | Script learning studies how stereotypical events unfold, enabling machines to
reason about narratives with implicit information. Previous works mostly
consider a script as a linear sequence of events while ignoring the potential
branches that arise due to people's circumstantial choices. We hence propose
Choice-75, the first benchmark that challenges intelligent systems to make
decisions given descriptive scenarios, containing 75 scripts and more than 600
scenarios. We also present preliminary results with current large language
models (LLM). Although they demonstrate overall decent performance, there is
still notable headroom in hard scenarios. | Zhaoyi Joey Hou, Li Zhang, Chris Callison-Burch | 2023-09-21T02:23:44Z | http://arxiv.org/abs/2309.11737v2 | # Choice-75: A Dataset on Decision Branching in Script Learning
###### Abstract
Script learning studies how daily events unfold. Previous works tend to consider a script as a linear sequence of events while ignoring the potential branches that arise due to people's circumstantial choices. We hence propose Choice-75, the first benchmark that challenges intelligent systems to predict decisions given descriptive scenarios, containing 75 scripts and more than 600 scenarios. While large language models demonstrate overall decent performances, there is still notable room for improvement in many hard scenarios. 1
Footnote 1: Our data and code are at [https://github.com/JoeyHou/branching](https://github.com/JoeyHou/branching).
## 1 Introduction
Events are the fundamental building blocks of the world around us. To understand the world, one has to comprehend the ways events interconnect with each other. Reasoning about event-to-event relationships has long been a community effort from a wide range of perspectives, targeting temporal relations (Zhou et al., 2021; Zhang et al., 2020), hierarchical relations (Li et al., 2020; Zhou et al., 2022), script generation (Chambers and Jurafsky, 2008; Lyu et al., 2021), open-domain question answering (Yang et al., 2003; Zhang et al., 2023a), and so on. These tasks are challenging because event relations are often implicit and require commonsense to be uncovered.
As an important direction of event-centric reasoning, script learning studies how stereotypical events unfold, which provides us with a human-centered perspective on events. The notion of scripts dates back to Schank (1977); since then, researchers have explored various aspects and applications of script learning, including narratives (Chambers and Jurafsky, 2010), news events (Du et al., 2022), instructions (Zhou et al., 2022), and so on. These studies jointly demonstrate the promising nature of script learning in building better intelligent systems.
However, most of these previous works in script learning only consider scripts as linear developments of events. In the real world, scripts include many crossroads where the next event can unfold in multiple ways. In many of these cases, a human would decide the direction to which a script branches. There has yet been no benchmark that challenges an intelligent system to model such a decision-making process. Therefore, we define and study such a decision branching task, as follows: given a particular scenario, an intelligent system needs to identify the better of two given options. One such example is shown in Figure 1: to _purchase a plane ticket to see a desert abroad_, one could either _purchase a plane ticket to a major city and take train to the desert_ or _purchase a plane ticket to a small city but next to the desert_. Given a scenario that _the person finds no train route from the major city to desert at that time_, it would be obvious that the first option would not be feasible, so the second is preferred.
Figure 1: An example of Choice-75. Each goal-option pair has multiple scenarios. Difficulty levels (e.g., easy, hard) will be discussed in Section 2.1.
We propose the first dataset targeted at such decision branching in scripts, Choice-75, with 75 examples, each with one goal. Beyond that, we also collect more than 600 scenarios, with difficulty levels based on human judgment, and corresponding optimal choices. During dataset collection, we follow Liu et al. (2022) and apply the human-in-the-loop paradigm to generate challenging examples. We then experiment with state-of-the-art (SoTA) large language models (LLMs), including text-davinci-003 and gpt-3.5-turbo, which is the backbone of ChatGPT2, and find that the level of performance of LLMs aligns with the difficulty levels based on human judgment: while these SoTA models demonstrate decent performance, there is still notable headroom in the hard cases.
Footnote 2: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
## 2 Dataset
### Overview
We begin by defining the basic unit of our dataset, Choice-75. Every data point in Choice-75 has the following: a goal, two options (option-1 and option-2), a list of scenarios, and a list of ground-truth choices, all in plain text. In particular, a choice could be option-1, option-2, or either (if taking either option would make little difference under that scenario). For example, in Figure 1, under scenario #4, both options would have little impact in achieving the goal, making the ground-truth answer either.
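A minimal sketch of this data unit (field names follow the terminology above; the example values are taken from Figure 1, and the class itself is our own illustration rather than the released data format):

```python
from dataclasses import dataclass, field

@dataclass
class ChoiceExample:
    """One Choice-75 data point: a goal, two options, and per-scenario labels."""
    goal: str
    option_1: str
    option_2: str
    scenarios: list = field(default_factory=list)
    # One label per scenario: "option-1", "option-2", or "either".
    choices: list = field(default_factory=list)

example = ChoiceExample(
    goal="purchase a plane ticket to see a desert abroad",
    option_1="purchase a plane ticket to a major city and take train to the desert",
    option_2="purchase a plane ticket to a small city but next to the desert",
    scenarios=["finds no train route from the major city to desert at that time"],
    choices=["option-2"],
)
```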
We use proScript (Sakaguchi et al., 2021) as the starting point for dataset construction. It has 6.4k scripts that describe the sequences of actions for typical day-to-day activities, making it a perfect pool of goals for our task. We randomly sampled 75 steps from proScript as goals and manually wrote two feasible options to execute each step. The options are annotated by one graduate student with decent knowledge of event-centric reasoning and are later verified by another graduate student in the same field. In this way, we collect 75 (goal, option-1, option-2) tuples. We then add scenarios and the ground-truth choices to those tuples, which will be discussed in detail in Section 2.2 (manual scenario writing by annotators) and in Section 2.3 (human-in-the-loop scenario generation by machine).
After we finish collecting all the scenarios, one very important step we take is defining and annotating the difficulty level of each scenario, i.e., how complex it is for a human to reason about it and arrive at the correct option choice. The criterion we use is the number of "steps" one would need in reasoning. In this way, we can explore multi-hop reasoning scenarios as a subset of our task. We define four levels: _easy, medium, hard_, and _N/A_ (for those scenarios without an optimal choice). For example, in Figure 1, scenario #1 is _easy_ because it only requires one step of reasoning to land on the correct answer (i.e., _no train from the major city to desert => can only fly to the small city_). In contrast, scenario #2 requires one more step (i.e., _has a long-time friend living in the major city => it would be great to visit => travel through the major city is better_), and obviously, scenario #3 is even more complex, since traveling to the small city only implicitly implies a _connecting flight_. The same criterion applies to scenarios that are not plain text (e.g., scenario #5). More details of scenarios from each difficulty level can be found in Appendix C.
### Manual Scenario Annotation
The manually written scenarios are verb phrases, for example, scenarios #1 to #4 in Figure 1. In some cases, the scenario describes an event, e.g., "finds no train route from the major city to desert at that time" (scenario #1); in other cases, the scenario describes a state of a person, either concrete or abstract, e.g., "hates connecting flights" (scenario #3). Summary statistics about manual scenario generation can be found in Table 1.
### Human-in-the-Loop Generation
During the manual scenario generation, we realized the challenge of coming up with high-quality hard scenarios. Therefore, we investigate the human-in-the-loop data generation paradigm and create two additional sets of hard scenarios: machine-generated verb phrases (same format as the manually written ones) and user profiles. For both sets, we follow Liu et al. (2022) by these steps3: first, collect a series of challenging scenarios as exemplars; then over-generate similar scenarios by few-shot prompting an LLM; lastly, manually review and curate the generated scenarios to ensure their validity. For both types of hard scenarios, prompting methods are discussed below.
Footnote 3: We skipped the automatic filtering because the level of challenge is very hard to automatically measure.
**Verb Phrase** The first type of hard scenario has the same format as the manually written ones, verb phrases. For the over-generation step, instead of simply doing a few-shot generation, we do a two-step prompting to simulate multi-hop reasoning (Figure 2). We first prompt a text-davinci-003 model to generate a scenario that leads to one choice, and we save it as scenario-base; then we do another few-shot prompting to generate a new scenario that leads to the scenario-base and save it as scenario-hard (see Appendix A for prompts and more details). The scenario-hard then goes through manual review and curation; a sketch of the chaining is given below.
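A minimal sketch of this two-step chaining (the prompt templates and decoding parameters here are our own stand-ins, not the actual prompts from Appendix A; the call uses OpenAI's legacy completions endpoint, which served text-davinci-003 at the time):

```python
import openai  # openai<1.0, matching the 2023-era API

def complete(prompt: str) -> str:
    """One call to text-davinci-003 via the legacy completions endpoint."""
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, temperature=0.7, max_tokens=60
    )
    return response["choices"][0]["text"].strip()

def generate_hard_scenario(goal: str, option: str, few_shot: str) -> str:
    # Step 1: a scenario that directly implies the given option (scenario-base).
    base = complete(
        f"{few_shot}\nGoal: {goal}\nWrite a scenario under which "
        f"'{option}' is clearly the better choice:\n"
    )
    # Step 2: a new scenario that leads to scenario-base, adding one more hop.
    hard = complete(
        f"{few_shot}\nWrite a scenario that would lead to the situation "
        f"'{base}', without stating it directly:\n"
    )
    return hard  # goes through manual review and curation afterwards
```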
**User Profile** Another type of hard scenario is a user profile in the form of an unordered list, for example, scenario #5 in Figure 1. Our consideration of user profiles in addition to standard textual contexts is motivated empirically. First, many AI systems such as digital smart assistants need to be personalized so that they can predict the decision process of a particular user. Moreover, user profiles, compared to textual scenarios, may be closer to real-life situations where the traits of a user are mined from heterogeneous data sources (which we assume are already condensed into a profile) rather than from short texts. Such profiles inevitably include noise, making the task more challenging. For the example above, the only relevant information to predict the optimal choice (_Option 2_) is that Doe _enjoys visiting metropolis_.
In the over-generation step of user profile scenarios, we prompt a text-davinci-003 model to generate a user profile that prefers one choice over another (Figure 3). In the prompt, we specify some hints and requirements for the output. For example, we require the model to include preferences, financial situations, etc., and make occupations, hobbies, gender, etc. optional (see Appendix A for more details). These generated user profiles also go through human review and curation.
| **Group** | **Total** | **Easy** | **Medium** | **Hard** | **N/A** |
| --- | --- | --- | --- | --- | --- |
| Verb Phrase (Manual) | 272 | 72 (26%) | 90 (33%) | 42 (16%) | 68 (25%) |
| Verb Phrase (Machine) | 159 | 48 (30%) | 42 (27%) | 18 (11%) | 51 (32%) |
| User Profile | 219 | 53 (24%) | 76 (35%) | 17 (8%) | 73 (33%) |
| All | 650 | 151 (27%) | 172 (30%) | 63 (11%) | 178 (32%) |

Table 1: Counts of scenarios in Choice-75 by difficulty level. Percentages are relative to the group.
Figure 3: Hard scenario generation in user profile format. We prompt LLM with instructions about required/optional/to-avoid information.
Figure 2: Hard scenario generation in verb phrase format. We prompt LLM recursively to achieve the effect of multi-hop reasoning.
## 3 Method and Experiments
Out of the 75 goals in Choice-75, we randomly hold out 10 goals as demonstrations for in-context learning and the rest as the evaluation set.
We formulate the task of predicting the optimal choice as an in-context learning task: the goal, the two options, and one scenario are presented in the prompt; an LLM is then responsible for completing the prompt with the optimal choice (or either). The few-shot context consists of 9 demonstrations with the same format, covering the 3 different choices and 3 difficulty levels.
We include two models in our experiments: text-davinci-003 and gpt-3.5-turbo4. We set temperature to 0, max_tokens to 30, top_p to 1, presence_penalty to 0, and frequency_penalty to 0.
Footnote 4: Our last experiment was in 05/2023. Therefore, the closest variant of the turbo model is gpt-3.5-turbo-0613
For all the configurations above, we provide two different prompt formats: naive prompt and story prompt, shown in Table 2. More details about the prompt format can be found in Appendix B.
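As an illustration (a sketch of ours, not the authors' released code), an evaluation call with the naive prompt of Table 2 and the decoding parameters stated above might look as follows; for gpt-3.5-turbo the chat completions endpoint would be used instead:

```python
import openai  # openai<1.0, matching the 2023-era API

NAIVE_TEMPLATE = (  # paraphrased from Table 2
    "- Goal: {goal}\n- Option 1: {option_1}\n- Option 2: {option_2}\n"
    "- Scenario: {scenario}\n- Question: Given the Scenario, which option "
    "above is the better choice in order to achieve the Goal?\n"
)

def predict_choice(few_shot, goal, option_1, option_2, scenario):
    prompt = few_shot + NAIVE_TEMPLATE.format(
        goal=goal, option_1=option_1, option_2=option_2, scenario=scenario
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,      # decoding parameters as reported in Section 3
        max_tokens=30,
        top_p=1,
        presence_penalty=0,
        frequency_penalty=0,
    )
    return response["choices"][0]["text"].strip()
```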
## 4 Results and Analysis
### Difficulty Levels
The most outstanding result is the alignment between human judgment of difficulty and the models' performance. As shown in Table 3, there is an obvious gap between easy, medium, and hard scenarios across every setting. Although the models we test demonstrate decent performance at the easy and medium levels, hard scenarios and "either"-choice scenarios (i.e., _N/A_) remain challenging. This again demonstrates that LLMs struggle with multi-hop reasoning.
### Case Studies
We take one particular goal from Choice-75 (see Figure 1) and examine the performance of one model setup (gpt-3.5-turbo with _story prompt_). For scenario #3, the model fails to recognize that a small city usually requires a flight connection. For scenario #5, a user profile example, although the scenario explicitly describes this person as _"enjoy visiting metropolis"_, the model still gets it wrong. We observed similar errors in other goals, confirming the challenge of the long context window and the unrelated information introduced by the user profile format. We have also included more qualitative analysis in Appendix D.
## 5 Related Work
**Event-centric reasoning** and script learning (Schank, 1977) are a crucial domain of machine reasoning. Past efforts include procedure learning (Dalvi et al., 2019; Zhang et al., 2020; Zhou et al., 2022), entity tracking (Tandon et al., 2020; Zhang et al., 2023), script construction (Chambers and Jurafsky, 2008; Lyu et al., 2021; Sakaguchi et al.,
_Naive Prompt_
- Goal: {goal} - Option 1: {option 1} - Option 2: {option 2} - Scenario: {scenario}
- Question: Given the Scenario, which option above is the better choice in order to achieve the Goal?

_Story Prompt_
A person Doe needs to {goal}. Now there are two options for Doe: {option 1} (Option 1) or {option 2} (Option 2). Suppose Doe {scenario}.
- Question: Given the Scenario, which option above is the better choice in order to achieve the Goal?

Table 2: Illustration of the two types of prompts used.
| Group | Prompt | All (003) | All (Turbo) | Binary (003) | Binary (Turbo) | Easy (003) | Easy (Turbo) | Medium (003) | Medium (Turbo) | Hard (003) | Hard (Turbo) | N/A (003) | N/A (Turbo) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Verb Phrase (Manual) | naive | 0.60 | 0.63 | 0.81 | 0.82 | 0.91 | 0.92 | 0.83 | 0.80 | 0.58 | 0.67 | 0.05 | 0.14 |
| Verb Phrase (Manual) | story | 0.63 | **0.64** | **0.86** | 0.81 | **0.95** | 0.88 | **0.87** | 0.81 | **0.69** | **0.69** | 0.02 | **0.18** |
| Verb Phrase (Machine) | naive | **0.56** | **0.56** | 0.77 | **0.80** | 0.79 | 0.79 | 0.77 | **0.85** | 0.69 | **0.75** | **0.21** | 0.15 |
| Verb Phrase (Machine) | story | 0.55 | 0.55 | 0.79 | **0.80** | 0.79 | **0.82** | **0.85** | 0.81 | 0.69 | **0.75** | 0.15 | 0.13 |
| User Profile | naive | **0.61** | 0.59 | 0.72 | 0.69 | **0.78** | 0.73 | 0.73 | 0.69 | 0.47 | **0.60** | **0.40** | **0.40** |
| User Profile | story | 0.50 | 0.60 | 0.57 | **0.73** | 0.58 | 0.76 | 0.60 | **0.74** | 0.40 | **0.60** | 0.37 | 0.34 |
| Average | | 0.57 | **0.60** | 0.75 | **0.77** | 0.80 | **0.82** | 0.77 | **0.78** | 0.59 | **0.68** | 0.20 | **0.22** |

Table 3: Experiment results for all predictions by difficulty level, for text-davinci-003 (003) and gpt-3.5-turbo (Turbo). **Binary** refers to the overall performance on easy, medium, and hard scenarios (i.e., the scenarios _with_ an optimal choice).
2021), and so on. Most of the above works focus on singular chains of events and do not consider decision branches as we do.
**Human decision-making** has been studied under single-agent and multi-agent settings. Efforts in the former focus on specific domains, such as financial earnings calls (Keith and Stent, 2019), online review text (Wang et al., 2019), and fantasy text-adventure games (Qiu et al., 2022). In contrast, our methods and findings are more general. Efforts in the latter focus on dialogues and conversational AI (Bak and Oh, 2018; Karadzhov et al., 2022; Fernandez et al., 2008), with an emphasis on modeling the differences among characters, which is not our focus.
**Human-in-the-loop dataset creation** has been used for improving dataset quality and collection efficiency. Recent work has shown that LLMs can effectively generate data for various NLP tasks, including inference (Liu et al., 2022), structural data synthesis (Yuan et al., 2022), script construction (Zhang et al., 2023), hate speech detection (Tekiroglu et al., 2020), and so on. In our work, we follow the paradigm of Liu et al. (2022).
## 6 Conclusion
In conclusion, we propose a new machine reasoning task, Choice, and collect a corresponding dataset, Choice-75. In order to solve this task, models need to incorporate implicit commonsense knowledge into the decision-making process. We also conducted experiments with SoTA LLMs on our dataset and confirmed the alignment between human judgment and model performance. We hope this dataset can be a starting point for a more comprehensive study of LLMs' capability of making daily decisions in alignment with human beings.
### Limitations
The first and most obvious drawback of Choice-75 is its distribution. Since we build Choice-75 from the _steps_ in proScript (Sakaguchi et al., 2021), which focuses on daily procedures, the distributions of word choices, writing styles, and domains are inherently limited. Therefore, specific adaptation would be required if the data come from a different domain.
Secondly, the size of the dataset is also relatively small due to limited annotation resources available to us. This also brings potential biases from the annotator, although we try to address this issue by having another annotator verify the annotations. Such a bias in the dataset might negatively impact the models fine-tuned on our dataset in the future. That could potentially lead to inappropriate prediction results from those fine-tuned models if the end users are from a different cultural background.
In addition, in Choice-75, we make a lot of assumptions that are essentially oversimplified representations of real-world scenarios. For example, we assume each goal has two mutually exclusive choices, while in some cases there are many more choices (not _two_) and each choice overlaps with others (not _mutually exclusive_). There are lots of ways to expand and enrich this dataset, and we leave this as future work.
Last but not least, we do not conduct any prompt engineering due to a limited computation budget. We only experiment with two very basic prompt formats, a fixed number of few-shot samples, and a fixed set of GPT generation parameters. It would also be interesting for future work to study the performance of different language models and different prompt settings on Choice-75.
## Acknowledgements
We thank Nathanael Chambers for inspiring this work and for valuable discussions. This work would not be possible without the help of Hainiu Xu for his verification of the data. We also thank the help from Xiang Lorraine Li for her suggestions on revising this paper.
This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the Office of the Director of National Intelligence (ODNI) via the IARPA HIATUS Program (contract 2022-22072200005), the NSF (Award 1928631), and gifts from Roblox and Salesforce. Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, NSF, the U.S. Government, or of Roblox or Salesforce. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. |
2309.09178 | Does Reliable Electricity Mean Lesser Agricultural Labor Wages? Evidence
from Indian Villages | Using a panel of 1,171 villages in rural India that were surveyed in the
India Human Development Surveys, I perform a difference-in-differences analysis
to find that improvements in electricity reliability have a negative effect on
the increase in casual agricultural labor wage rates. Changes in men's wage
rates are found to be affected more adversely than women's, resulting in a
smaller widening of the gender wage gap. I find that better electricity
reliability reduces the time spent by women in fuel collection substantially
which could potentially increase labor supply. The demand for labor remains
unaffected by reliability, which could lead the surplus in labor supply to
cause wage rates to stunt. However, I show that electrical appliances such as
groundwater pumps considerably increase labor demand indicating that
governments could target increasing the adoption of electric pumps along with
bettering the quality of electricity to absorb the surplus labor into
agriculture. | Suryadeepto Nag | 2023-09-17T07:02:30Z | http://arxiv.org/abs/2309.09178v1 | # Does Reliable Electricity Mean Lesser Agricultural Labor Wages? Evidence from Indian Villages
###### Abstract
Using a panel of 1,171 villages in rural India that were surveyed in the India Human Development Surveys, I perform a difference-in-differences analysis to find that improvements in electricity reliability have a negative effect on the increase in casual agricultural labor wage rates. Changes in men's wage rates are found to be affected more adversely than women's, resulting in a smaller widening of the gender wage gap. I find that better electricity reliability reduces the time spent by women in fuel collection substantially which could potentially increase labor supply. The demand for labor remains unaffected by reliability, which could lead the surplus in labor supply to cause wage rates to stunt. However, I show that electrical appliances such as groundwater pumps considerably increase labor demand indicating that governments could target increasing the adoption of electric pumps along with bettering the quality of electricity to absorb the surplus labor into agriculture.
**JEL Classification**: J23, J30, O13, Q40
**Keywords**: Electricity Reliability, Labor Wages, South Asia
**Acknowledgements**: I thank David I. Stern for several helpful discussions and detailed comments.
## 1 Introduction
The question of whether advancements in electrification have effects on agriculture has been a longstanding one (Barnes and Binswanger, 1986). In the late twentieth century, several countries promoted developments in electricity infrastructure to increase the adoption of electrical farm equipment and agricultural productivity. However, recent studies (Kumar and Rauniyar, 2018; Nag and Stern, 2023) have cast doubt on any electricity-induced benefits for agricultural income. Does this imply electricity access has no consequence on agriculture, or is it simply not significantly bettering productivity and income? In this paper, I study a less-studied dimension of electricity and agriculture - the effect of electricity quality on agricultural labor wages.
Taking a cursory glance at village-level data from the India Human Development Surveys (IHDS) (Desai et al., 2005, 2012), we can observe a strange phenomenon (Table 1). Villages where the quality of electricity improved from 2004-5 to 2011-12 have lower agricultural labor wage rates in 2012 than villages where the quality of electricity worsened. Both types of villages have similar average wage rates in 2004-5, and there is a general trend of increasing wages. However, the increase is smaller in villages that observe an improvement in the hours of access, when compared to villages that don't (most of which observe a decline). The disparity is consistent in both men's and women's wages, although the difference is larger in men's wages. After controlling for inflation, I find that the increase in wage rates is close to 24% higher in villages where the reliability worsened or stayed at the same level than in villages where
| | 2004-2005: Increase in Reliability | 2004-2005: Non-increase in Reliability | 2011-2012: Increase in Reliability | 2011-2012: Non-increase in Reliability | Δ: Increase in Reliability | Δ: Non-increase in Reliability |
| --- | --- | --- | --- | --- | --- | --- |
| Women's daily wage rate (2012 Rs.) | 86.71 (40.2) | 85.98 (41.72) | 133.29 (57.2) | 138.93 (69.97) | 40.44 (46.99) | 47.31 (49.97) |
| Men's daily wage rate (2012 Rs.) | 116.5 (51.45) | 119.55 (52.21) | 167.2 (66.76) | 183.32 (83.07) | 50.52 (56.14) | 62.81 (61.62) |
| Average daily wage rate (2012 Rs.) | 104.93 (47.08) | 105.42 (46.46) | 150.5 (59.37) | 161.63 (73.87) | 44.8 (50.16) | 55.45 (54.64) |

Table 1: Casual agricultural labor wage rate statistics (means, with standard deviations in parentheses) for villages where reliability improves and villages where it does not. Reliability is measured as the average number of hours of electricity available per day. Includes 1281 Indian villages. The wages have been standardized to the 2012 Rupee by scaling the 2005 rates by the growth in the national consumer price index (reported by the World Bank) between 2005 and 2012. The data are unweighted. Source: IHDS I and IHDS II surveys.
the reliability improved. Although the observation may merely be a correlation, a causal link between electricity and agriculture, wherein better quality electricity drives down agricultural labor wages, may have implications for our understanding of the role of electricity interventions in agrarian economies.
For my study, I use data from two panels of Indian villages surveyed as part of the India Human Development Surveys (IHDS), covering a period from 2004-5 to 2011-12. Among lower and lower-middle-income countries, India has been among the forerunners in the expansion of energy access, and studying the causal effects of reliability in India has consequences for several countries of the Global South expected to ramp up their power generation and distribution to meet electrification targets in the coming years. To investigate if improved electricity access is causally driving down wages, I employ a difference-in-differences design, while controlling for several village-level variables, to suitably identify the effects of reliability on the wage rates, and eliminate biases due to village- and time-fixed effects. My analysis is lent further aid by the rich IHDS survey data sets, which have data on several variables both at the level of households and villages. The data, representative at the national level, includes several essential variables, apart from reliability, such as the time since when a village has electricity, and the fraction of households in the village that have access to electricity. The surveys also have data on various other characteristics such as the status of infrastructure in the village, proximity to banks and markets, the number of schools in the village, etc., making it easier to control for a large number of confounding variables.
The period of study (2004-5 to 2011-12) coincides with India's last-mile electrification efforts, with over 90% of villages already having been electrified prior to this period, and over 99% by the end of it. Since most villages had already been connected to the grid, rural India provides a good stage to study the more intricate dimensions of electricity access such as availability and disruptions. According to data from the World Bank, in 2005, India had the largest rural population in the world, with villages making up 71% of the national population. At this time, agriculture, forestry, and fisheries made up over 15% of the Indian GDP (over 100 billion 2021 USD in size), compared to the sector's 3.2% share of world GDP globally, making India an ideal site for investigating changes in agricultural practices and wages.
I find that improvements in electricity quality result in a lower increase in casual agricultural labor wage rates. The effect is especially pronounced in men's wage rates, which consequently results in a smaller widening of the gender wage gap in these villages. On analyzing the time spent in fuel collection, which could be a possible labor supply channel, I find that reliable electricity reduces the time burden of biomass collection, which could potentially increase labor supply. On the other hand, analyzing labor demand, I find that the demand for labor does not change with reliability. Since disguised unemployment and saturated labor demand are existing concerns in the Indian agricultural sector, households may not be able to reap the reliability-linked benefits to income that are found elsewhere (Dang and La, 2019; Pepino et al., 2021). Instead, a possible increase in labor market participation could potentially hurt wage rates. However, I find that electrical farm machinery such as groundwater pumps has a large positive effect on the demand for labor. Therefore, policymakers could focus on greater investments in electric pumps or other alternate avenues that could increase labor demand and absorb the surplus labor, to help reduce what may be an electricity-induced aggravation of disguised unemployment.
The paper is organized as follows: In the next section, I discuss some theoretical arguments and the state of the literature. In the third section, I discuss the institutional context and present the data. This is followed by the empirical strategy in the fourth section, and the results in the fifth. In the final section, I discuss the conclusions.
## 2 Theoretical Arguments and Evidence from the Literature
Although several studies have looked at the effect of electricity quality and outages on industrial performance (Allcott et al., 2016; Maruyama Rentschler et al., 2019; Fakih et al., 2020), few studies have investigated the economic benefits of reliability on well-being, in general (Nduhuura et al., 2021; Hussain et al., 2023), and lesser still on agriculture. Studies find mostly positive impacts of electricity quality on income including Chakraverty et al. (2014) in India, Dang and La (2019) in Vietnam, and Pepino et al. (2021) in the Philippines. Samad and Zhang (2016), however, find that electricity reliability reduces non-farm income in India (2% reduction for every additional hour of power).1 Given that the main source of non-farm income in rural India is agricultural labor, this would be consistent with a reliability-induced fall in wage rates that we see in Table 1.
Footnote 1: Samad and Zhang (2016) are primarily interested in studying the effects of electricity access, and reliability is used as more of a control variable. The authors use propensity-score-weighted-regressions to remove selection bias in which households receive access, and the estimates for the effect of reliability may be biased by the weights. They also include households not connected to the grid as a control, assuming zero reliability, but these households may not benefit the same way as households who observe no changes in their existing reliability.
There are several ways through which the quality of electricity could impact agricultural practices and production (Costantini and Gracceva, 2004), which could have an impact on the demand for labor. Studies have shown that electricity access increases the adoption of electric pumps (Barnes and Binswanger, 1986; Smith and Urpelainen, 2016). If irrigation pumps are
operated at fixed times in the day, irregularity of power could affect the usability of pumps, and agricultural households may be forced to employ conventional methods of manual irrigation or other machinery such as diesel pumps (Smith and Urpelainen, 2016). Agricultural mechanization may supplant human labor used in farms (Baur and Iles, 2023), and the replacement could reduce labor demand. However, the presence of the machines may also increase the demand for workers, whether to operate the machines or simply as a consequence of increased productivity or farming during dry seasons. In the Indian context, the latter is more likely. Reliability may also affect agriculture indirectly, by affecting decision-makers and laborers, which may alter the demand and supply of agricultural labor in villages. Better quality electricity could help households diversify their income through entrepreneurial options, as well as new employment opportunities that may have opened up because of better reliability. The demand for agricultural labor may also be affected by enhanced affluence, particularly from non-farm income increases, such as the effects observed by Chakraverty et al. (2014). This may reduce households' reliance on agriculture, which may, in turn, reduce the demand for agricultural labor.
In the context of labor supply, electricity access is also thought to have a positive impact on labor market participation (Dinkelman, 2011; Salmon and Tanguy, 2016; Rathi and Vermaak, 2018). Labor supply could increase via a reduced time burden of domestic chores, especially for women. Since reliable electricity is essential for disruption-free electric lighting and refrigeration, better quality electricity could relieve individuals of the time spent in fuel collection (Samad and Zhang, 2016; Njenga et al., 2021), and may enable them to participate in the labor market, which could increase the supply of labor. Whether there is a surplus of agricultural labor and disguised unemployment (Robinson, 1936) in India and other developing countries has been a topic of interest (Wellisz, 1968). Foster and Rosenzweig (2010) suggest that surplus labor already exists in Indian agriculture. Thus additional electricity-induced participation in casual agricultural labor markets is only likely to reduce the village-level wage rates if there isn't a proportional increase in the demand for labor. However, a reduction in the time burden of domestic chores may not necessarily encourage household members to engage in paid employment. In the case of agricultural households, this may result in members of the household supplying more labor to household farms, which could reduce the demand for hired labor.
The evidence on the relationship between electrification and agricultural labor has been mixed, although most studies look at the presence of electricity connections rather than the quality of electricity. In the context of casual labor, Van de Walle et al. (2017) study the
impact of electricity access on labor supply and wage rates in India using a long period from 1982-1999. The authors find a negative impact on the number of days of casual wage work supplied by men, from both household and village-level electrification, although they do not find such effects for women. However, using data from harvest wage rates, the authors do not find a statistically significant impact of electrification on wage rates, either for men or women. To study village-level electrification, though, Van de Walle et al. (2017) use only one dimension of electricity - the time since the village was first connected. In contrast, Emran and Shilpi (2018) use the fraction of connected households as a control variable in their study of the effects of agricultural productivity on hired labor and find a statistically significant negative effect associated with the fraction of households connected, implying that villages with better access to electricity saw agricultural households hiring less labor. Using solitary variables, as in the cases above, may lead to omitted variable biases and paint an incomplete picture. Therefore in my analysis, I complement the reliability variable with the fraction of households connected and also use the time since the village was first connected as a control to test for endogeneity in treatment assignment, to appropriately arrive at unbiased estimates of the effect of reliability.
## 3 Setting and Data
### Historical Context
At the time of India's independence from the British Empire, in 1947, a majority of India's population was neither educated nor literate, with its rural population particularly worse off. In the absence of industries, rural India was overwhelmingly agrarian. In 1951, agriculture and allied sectors contributed 51.9% of India's GDP (Wagh and Dongre, 2016). While that number has since fallen sharply to 17% in 2014-15, a majority of Indians continue to remain employed in agriculture. Therefore, it comes as little surprise that Government efforts in the second half of the 20th century toward electrifying villages were focused on connecting farms, rather than households.
Investments in bringing electric pumps to villages were further accelerated by the famine in Bihar in 1966-67, and the drought in Maharashtra in 1970-1973 (Dubhashi, 1992). In 1969, the Ministry of Power established the Rural Electrification Corporation to oversee village electrification, with the specific purpose of aiding State Electricity Boards in facilitating the adoption of electric groundwater pumps in villages to increase agricultural productivity. The droughts and famines of the 1960s and 1970s, together with the Green Revolution in the 1970s, led to an expeditious increase in the rate of village electrification in India, as can be
seen in Figure 1. Despite the growth in the number of villages connected, power consumption by the agricultural sector remained low, and consumption only crossed 10,000GWh in the late 1970s (Figure 2), with an annual consumption of 12,028GWh in 1979, at the end of the Fifth 5-year Plan.2 By the end of the Eighth Plan, in 1997, electricity consumption had grown sevenfold. The growth in agricultural power consumption and the overall installed capacity (Central Electricity Authority, 2021) may imply that even though many villages were connected to the grid in the mid-to-late twentieth century, the quality and quantity may have been subpar, or it may mean that it takes a longer time before households can make use of their electricity connections (Nag and Stern, 2023).
Footnote 2: Reported progress at the end of the Indian financial year (31\({}^{st}\) March) on the final year of the Plan.
Village electrification continued at a steady rate till the turn of the century, and by 2002, approximately 90% of villages3 had been connected (Central Electricity Authority, 2021), although a large number of households in these villages were yet to be connected as of 2004-5 (Desai et al., 2005). In fact, until the year 2003, power consumption by the agricultural sector exceeded domestic power consumption at the national level (Central Electricity Authority, 2021). It is only in 2005 that the government began prioritizing household connections and launched the Rajiv Gandhi Grameen Vidyutikaran Yojana (RGGVY) to connect the remaining unelectrified households. It is during this period that my study is set.
Figure 1: Time series of the total number of Indian villages with electricity connections in India since 1947. Source: Central Electricity Authority (2021)
### Study Period and Dataset
For my analysis, I use data from two waves of the India Human Development Survey (Desai et al., 2005, 2012). The first round of the survey was conducted in 2004-2005, and the second round was conducted in 2011-2012. This makes for a suitable period, as most villages had already been connected, allowing for a larger sample of villages where changes in reliability can be observed and studied. However, since a number of households would be connected in this period, my analysis would need to control for the fraction of households in a village that were connected to the grid. According to the Central Electricity Authority (2021), transmission and distribution losses were considerably higher than average in this period (31.25% in 2004-5 which reduced to 23.65% by 2011-12). This would have an impact on the reliability of electricity and induce variation in the sample. This period is also roughly 15 years after the shock of economic liberalization in India, and the benefits to economic growth had already set in, distinct from earlier work that investigated wage rates before and after the economic liberalization of 1991 (Van de Walle et al., 2017).
The IHDS-I survey covered 41,554 households, and IHDS-II covered 42,152 households in total from 384 districts, 1,503 villages, and 971 urban blocks. Since I am primarily interested in village-level quantities like labor wage rates, I restrict my preliminary analysis to the village questionnaires and data. In all, I construct a panel of 1406 villages from the two survey rounds. I further use household-level data to study labor demand and supply in an attempt to explain the phenomena observed in the village-level analysis. Although there are differences in some of
Figure 2: Time series of the total electric power consumption by the agricultural sector in India since 1947. Source: Central Electricity Authority (2021)
the questions asked in each survey, most of the variables of interest are present in both rounds.
The IHDS surveys make for an excellent data set to study the impact of electrical reliability on agricultural labor wages. The survey has detailed data on several insightful dimensions of village-level electricity. The survey tells me whether a village has electricity, what fraction of households in the village have electricity, when the village was first connected to the grid, and the average number of hours in a day that a village receives power. Similarly, the surveys also have data on the casual agricultural wage rates for sowing and harvesting, for both men and women, and for both major cropping seasons in India - Kharif (summer/monsoon) and Rabi (winter). Since the surveys also have household-level data, I can further investigate the labor demand and supply channels contributing to the trends observed in agricultural wages, at a more microscopic level, by studying changes in agricultural households.
### Descriptive Statistics
Since I work with observational data, rather than an experiment, whether a village experienced improvements or declines in reliability need not have been determined by randomization. Therefore, it is important to check whether the assignment of villages to different treatment doses, i.e., different levels of improvement or decline in electricity reliability, was influenced by other factors.
The ideal way to check whether the treatment is "as good as random" is to see if there are parallel trends in the treatment groups in the pre-treatment periods. However, in the absence of multiple pre-treatment periods, the best I can do is make the observation that the pre-treatment levels of wages are essentially indistinguishable across groups (Table 1).
Since the correlations between pre-treatment characteristics and the change in reliability are weak, I assume that the treatment is random. This is not an unreasonable assumption as changes in reliability, are hardly conscious decisions made by policymakers based on village
characteristics, and are mostly a result of transmission and distribution losses. A more formal analysis of whether the assignment to treatment is random or not is presented in Appendix A.
In 2004-5, the average hours of electricity available to villages was 11.76 hours with a standard deviation of 7.80 hours, which increased to 13.38 hours in 2011-12, with a standard deviation of 6.74 hours. The correlation between household-level reliability and village-level reliability is 0.5491 and 0.6628 in 2004-05 and 2011-12, respectively, both coefficients being highly statistically significant. While there is broad agreement between household and village responses on reliability, there are large variations as well, with households often even reporting higher quality than that received at the village level, which may indicate that the household-level data on electricity quality may not be as trustworthy.
The IHDS surveys have data on casual agricultural wage rates in villages for both major cropping seasons in India - the Kharif season, where crops are sown towards the end of the summer, around June, and harvested at the end of the monsoon around October and November, and the Rabi season, in which crops are sown at the start of winter, around mid-late November, and harvested in spring, in April. Rice and maize are the main Kharif crops in India, while wheat is the main Rabi crop. I construct the wage rates for each category by averaging the wage rates for sowing and harvesting, when available4. From the data (Table 3), I can observe that
| Village characteristic (means) | 2004-05: Positive Treatment | 2004-05: Negative Treatment | 2011-12: Positive Treatment | 2011-12: Negative Treatment | Correlation of Δ Reliability with 2005 level |
| --- | --- | --- | --- | --- | --- |
| Percentage of households with electricity access (%) | 69.98 | 71.50 | 80.36 | 79.90 | 0.00 |
| Presence of metalled road† | 0.62 | 0.72 | 0.86 | 0.90 | -0.00* |
| Distance to the nearest bank branch/credit cooperative (km) | 5.27 | 4.00 | 5.44 | 4.54 | 0.10* |
| Distance to the closest market (km) | 2.37 | 2.28 | 2.43 | 2.78 | 0.00 |
| Presence of NGO/development organisation† | 0.81 | 0.81 | 0.81 | 0.84 | -0.07* |
| Presence of primary healthcare centre† | 0.81 | 0.80 | 0.83 | 0.83 | -0.07* |
| Number of government primary schools | 0.81 | 0.81 | 0.81 | 0.81 | -0.07* |
| Number of government middle schools | 0.82 | 0.82 | 0.83 | 0.83 | -0.09* |
| Number of government secondary schools | 0.83 | 0.84 | 0.84 | 0.83 | -0.09* |
| Number of government higher secondary schools | 0.85 | 0.84 | 0.83 | 0.82 | -0.05 |
| Number of private higher secondary schools | 0.81 | 0.83 | 0.84 | 0.84 | -0.05 |

† Dummy variable which takes 1 for "yes" and 0 for "no".
*Significant at the 5% level, **Significant at the 1% level, ***Significant at the 0.1% level

Table 2: Descriptive statistics for treatment and control villages, 2004-2005 and 2011-2012. "Positive Treatment" villages refer to those villages which see an improvement in the average number of hours of electricity received in a day, and "Negative Treatment" villages refer to those which do not (these include control villages where there is no change). Correlation refers to Pearson's correlation coefficient, with the two-sided alternate hypothesis. Includes 1254 villages. The data are unweighted. Source: IHDS I and IHDS II surveys.
there is little seasonal variation in the wage rates. However, the wage rates are usually higher for men than for women, with men earning nearly 35% more than women in 2004-5. Even though the wage gap shrinks slightly by 2011-12, there is still a considerable disparity. Figures 3-6 show the distribution of changes in real wage rates for agricultural labor for women (Figure 3), men (Figure 4), the gender wage gap (Figure 5), and reliability (Figure 6). Typically, in states where reliability reduces, the gender wage gap increases, while in states where reliability increases, the gender wage gap reduces (Rajasthan, Punjab, and Northeast Indian states being the only examples), with the correlation between changes in state-wise reliability and the gender wage gap being -0.33. It is noteworthy that the reliability-associated differences in the growth of wages discussed earlier in Section 1 exist in seasonal wages as well. The difference, however, is smaller for women's wage rates than for men's wage rates, suggesting that gender may play a role in the dynamics of the effect on wage rates.
Figure 3: Average state-wise changes in women’s casual agricultural labor wage rates (2012 Rs.) between 2004-5 and 2011-12. Averages are unweighted. Source: Desai et al. (2005)
Figure 4: Average state-wise changes in men’s casual agricultural labor wage rates (2012 Rs.) between 2004-5 and 2011-12. Averages are unweighted. Source: Desai et al. (2005)
Figure 5: Average state-wise changes in the gender wage gap (men’s rate - women’s rate) (2012 Rs.) between 2004-5 and 2011-12. Averages are unweighted. Source: Desai et al. (2005)
## 4 Empirical Strategy
### Difference-in-differences specification
I begin with a two-way fixed effects specification for village \(j\):
\[y_{jt}=\alpha_{j}+\beta R_{jt}+\gamma X_{jt}+\mu_{t}+\epsilon_{jt},\quad t=2005,2012 \tag{1}\]
where the outcome variable \(y_{jt}\) can be either the log or level of the casual agricultural labor wage rate of the village, \(R_{jt}\) is the electricity reliability, \(X_{jt}\) is the vector of control variables, \(\alpha_{j}\) is the village-specific fixed effect, \(\mu_{t}\) is the time-specific fixed effect, and \(\epsilon_{jt}\) is the idiosyncratic error term. The control variables that I use are the fraction of households in the village with electricity access, whether there are metalled roads, NGOs or development organizations, primary healthcare centers in the village, the distances to the closest bank or credit cooperative, and market, the number of government and private primary, middle, secondary and higher secondary schools, and whether the village has less than 1000 inhabitants or more than 5000 inhabitants. I am interested in the coefficient \(\beta\) which quantifies the effect of reliability. Taking
Figure 6: Average state-wise changes in reliability (hours per day) between 2004-5 and 2011-12. Averages are unweighted. Source: Desai et al. (2005)
the first difference of equation 1 in time,
\[\Delta y_{jt}=\Delta\mu_{t}+\beta\Delta R_{jt}+\gamma\Delta X_{jt}+\Delta\epsilon_ {jt},\quad t=2012 \tag{2}\]
Since I have data on wage rates, reliability, and control variables, that allow me to generate a cross-section of differences as required for the above model, I can estimate the coefficients using an ordinary least squares (OLS) regression, with \(\Delta\mu\) as the intercept.
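To make this concrete, the differenced specification can be estimated with any standard OLS routine. The snippet below is a minimal sketch in Python using statsmodels: the file name and column names are hypothetical stand-ins for the IHDS-derived variables, and standard errors are clustered at the district level, as in the result tables that follow.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-section of 2012-minus-2005 village differences; the file
# and column names (d_log_wage, d_reliability, ...) are illustrative.
df = pd.read_csv("village_differences.csv")

# Equation 2: the intercept plays the role of the common time effect, and a
# couple of representative controls stand in for the full vector Delta X.
model = smf.ols(
    "d_log_wage ~ d_reliability + d_frac_electrified + d_metalled_road",
    data=df)

# Robust standard errors clustered at the district level.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(result.summary())
```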
At this stage, it is important to note that reliability is constrained to be between 0 and 24 hours a day. Therefore, even if the assignment is random, only villages with access to high-quality electricity in the pre-treatment period would be able to experience a large negative treatment, and similarly, only villages with poor reliability in the pre-treatment period will be able to experience large benefits. This association poses a problem for identification, as it will be difficult to determine whether the effects are attributable to the changes or rather to a lagged effect arising from pre-treatment levels. Is an increase in reliability reducing wages, or are wages falling because the village historically has low reliability? One simple way to deal with this problem is to include the pre-treatment level of reliability in the regression as well, so that the regression does not suffer from omitted variable bias, and the effects can be independently attributed to the change in reliability and the pre-treatment level of reliability. Incorporating
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & & \multicolumn{6}{c}{Means (standard deviation)} \\ \cline{3-8} & & \multicolumn{2}{c}{2004-2005} & \multicolumn{2}{c}{2011-2012} & \multicolumn{2}{c}{\(\Delta\)} \\ \cline{3-8} & & Increase in & Non-increase in & Increase in & Non-increase in & Increase in & Non-increase in \\ & & Reliability & Reliability & Reliability & Reliability & Reliability & Reliability \\ \hline Women’s daily wage rate (2012 Rs.) & Summer-monsoon & 86.73 & 85.94 & 134.2 & 138.9 & 41.23 & 46.53 \\ & & (39.8) & (42.51) & (59.26) & (70.89) & (48.73) & (50.46) \\ & Winter & 84.56 & 85.63 & 132.93 & 138.86 & 40.02 & 48.01 \\ & & (40.91) & (42.399) & (56.43) & (68.46) & (48.19) & (52.45) \\ Men’s daily wage rate (2012 Rs.) & Summer-monsoon & 116.99 & 119.66 & 168.26 & 183.48 & 51.24 & 61.93 \\ & & (51.51) & (52.81) & (68.21) & (83.76) & (57.66) & (63.38) \\ & Winter & 115.95 & 117.95 & 166.76 & 184.65 & 49.48 & 63.51 \\ & & (53.01) & (51.8) & (66.01) & (83.15) & (56.72) & (61.76) \\ Average daily wage rate (2012 Rs.) & Summer-monsoon & 105.37 & 105.46 & 151.52 & 161.73 & 45.53 & 54.75 \\ & & (46.86) & (65.7) & (61.01) & (74.74) & (51.71) & (66.00) \\ & Winter & 105.23 & 104.74 & 150.13 & 162.96 & 42.8 & 55.58 \\ & & (49.01) & (46.66) & (58.52) & (73.85) & (51.1) & (54.81) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Casual agricultural labor wage rate statistics for villages where reliability improves and villages where it does not, segregated by gender and cropping seasons. Reliability is measured as the average number of hours of electricity available per day. Includes 1281 Indian villages. The wages have been standardized to the 2012 Rupee by multiplying the 2005 rates by the national consumer price index (reported by the World Bank). Averages are unweighted. The data are unweighted. Source: IHDS I, and IHDS II surveys.
a lagged effect of the pre-treatment level (\(R_{jt-1}\)), the model stands as follows:
\[\Delta y_{jt}=\Delta\mu_{t}+\beta_{1}\Delta R_{jt}+\beta_{2}R_{jt-1}+\gamma \Delta X_{jt}+\Delta\epsilon_{jt},\quad t=2012 \tag{3}\]
Including the pre-treatment level in the difference equation is the same as including a partial sum of reliability in the original two-way fixed effects specification. Thus, the correct form of equation 1 is
\[y_{jt}=\alpha_{j}+\beta_{1}R_{jt}+\beta_{2}\sum_{s=t_{0}}^{t-1}R_{js}+\gamma X _{jt}+\mu_{t}+\epsilon_{jt},\quad t=2005,2012 \tag{4}\]
Clearly, the fraction of households connected, which is a control variable, has the same problem, as its values range from 0 to 1, and thus I also incorporate the pre-treatment level of the fraction as a control variable. The coefficient \(\beta_{1}\) in Equation 3 would give an unbiased estimate of the effect of changes in reliability.
However, there is little reason to believe that the effects of reliability would be symmetrical, i.e., the effects of a positive change in reliability need not quantitatively be the opposite of the effects of a negative change in reliability. Therefore, I consider two separate natural experiments, one to study the effect of improvements in reliability, and one to study the effect of a decline in reliability. The advantage of splitting the sample into subsamples that include only positive or only negative treatments is that I require changes in reliability - the treatment - to be randomly assigned only within each sample. For instance, there could be a selection bias in determining which villages receive positive and negative treatment, but as long as the magnitude of the treatment is random, the results will hold. In Appendix A, I investigate whether the treatment is genuinely random. I find that for villages where reliability increases, the treatment can be assumed to be as good as random, and the results of the difference-in-differences estimates will be unbiased. However, for villages where reliability reduces, there may be some endogeneity in the assignment to treatment and the results may not be as robust. Therefore, by splitting the sample, as opposed to including two treatment variables in a single regression, I am able to isolate a sample of villages and a type of treatment that is as good as random.
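As a rough sketch of how this splitting and the augmented specification of equation 3 could be implemented (continuing with the hypothetical DataFrame and column names from the earlier snippet; the pre-treatment columns are likewise assumptions):

```python
# Two natural experiments: villages where reliability increased or stayed the
# same, and villages where it decreased or stayed the same.
positive = df[df["d_reliability"] >= 0]
negative = df[df["d_reliability"] <= 0]

# Equation 3: add pre-treatment levels of reliability and of the fraction of
# households electrified, separating effects of changes from effects of levels.
spec = ("d_log_wage ~ d_reliability + reliability_2005"
        " + d_frac_electrified + frac_electrified_2005")
for name, sample in [("positive treatment", positive),
                     ("negative treatment", negative)]:
    res = smf.ols(spec, data=sample).fit(
        cov_type="cluster", cov_kwds={"groups": sample["district"]})
    print(name, res.params["d_reliability"], res.bse["d_reliability"])
```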
### Mechanisms: Labor Demand and Supply
Since there are no village-level variables for demand and supply, I use household-level data to study the demand and supply of labor, although I use village-level reliability as the treatment, since households often report considerably different levels of electricity reliability and the household
data may not be as accurate.5 Moreover, the quality of electricity available to houses may not be the same as the quality of electricity available to farms (which will be useful in studying labor demand), and in the absence of data on electricity reliability in farms, the village's reliability is the best proxy.
Footnote 5: Since all the electricity comes from a common distributor in the village, the variation in quality is unlikely to be as much as I find in the household data. While there may be losses that affect individual households, sometimes households report levels of electricity reliability that exceed the levels received by the village, which indicates that household reporting may be an unreliable source of information for electricity quality. Furthermore, even load shedding is typically carried out for entire villages and not individual households.
There is no variable in the survey that can be directly used as a measure of labor supply. Thus, I investigate a channel through which labor supply may be affected, as this is easier to test from the data. I postulate that the supply of labor would be affected by electricity quality primarily through a reduced time burden of domestic chores. In particular, the tasks that would be especially affected by electricity disruptions are lighting and refrigeration. 4045 households in the sample (21.2%) claimed to own refrigerators in 2011-12. Long disruptions would hamper refrigeration, and the presence of good-quality electricity could, through refrigeration, reduce the time and fuel required for cooking. Since the survey does not have questions related to the time spent cooking, I am constrained to restrict my analysis to fuel. About 75% of households use firewood for either cooking or lighting. Over 40% of households also use dung for cooking. As a large number of households collect these fuels from either owned or community land, good-quality electricity could significantly reduce the time spent by members in collecting biomass. Thus, as a supply channel, I study the effect of electricity reliability on the time spent by men and women on fuel collection. I use a difference-in-differences specification, similar to equation 3. For a household \(i\) in village \(j\), I model the time spent in fuel collection \(F_{ijt}\) by
\[\Delta F_{ijt}=\Delta\mu_{t}+\beta_{1}\Delta R_{jt}+\beta_{2}R_{jt-1}+\gamma \Delta X_{ijt}+\Delta\epsilon_{ijt},\quad t=2012 \tag{5}\]
where most of the coefficients represent the effects of the same quantities as before (now on time spent in fuel collection), except \(\gamma\), which now includes coefficients for some household-level controls along with the village-level controls listed above. The household-level controls are the number of adult men and women, the presence of a source of drinking water in the house, the presence of flush toilets, and the presence of separate kitchens. Obviously, village-level electricity quality cannot have a direct effect on households not connected to the grid. Hence, I only consider those households that had electricity in both 2004-5 and 2011-12. Ideally, had there been data on how much labor individuals in a household were willing to supply, I could have studied the effect of time saved in fuel collection on such a variable. But in the absence of such a variable,
a proper analysis of labor supply is beyond the scope of this work.
Unlike supply, demand can be studied by considering the person-days of labor hired by agricultural households to work on their farms. Of course, I need to control for the labor wage rates as well, to avoid an omitted variable bias. Not controlling for labor wage rates could be particularly problematic, as reliability is correlated with the wage rates, and we know from theory that wage rates themselves can affect the demand for labor. In addition to the control variables listed above, I also include the change in the ownership of electric pumps (\(\Delta P_{ijt}\)). I use the logarithm of the person-days of hired farm labor, \(D_{ijt}\), as the outcome variable and formulate another difference-in-differences equation
\[\Delta D_{ijt}=\Delta\mu_{t}+\beta_{1}\Delta R_{jt}+\beta_{2}R_{jt-1}+\delta_{m}\Delta M_{jt}+\delta_{w}\Delta W_{jt}+\phi\Delta P_{ijt}+\gamma\Delta X_{ijt}+\Delta\epsilon_{ijt},\quad t=2012 \tag{6}\]
\(\Delta M_{jt}\) is the change in the log of men's casual agricultural wage rate in village \(j\), and \(\Delta W_{jt}\) is the corresponding quantity for women's wages. The coefficients \(\delta_{m}\) and \(\delta_{w}\) are the elasticities of labor demand with respect to men's and women's wages, respectively, and tell us the fractional change in demand for a fractional change in wages. Since pump ownership could have an impact on demand, it is also important to check whether reliability has an effect on pump ownership, and is not affecting demand through pump ownership. I do so using another difference-in-differences regression:
\[\Delta P_{ijt}=\Delta\mu_{t}+\beta_{1}\Delta R_{jt}+\beta_{2}R_{jt-1}+\gamma \Delta X_{ijt}+\Delta\epsilon_{ijt},\quad t=2012 \tag{7}\]
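The demand-side regressions follow the same template as the earlier sketches. A minimal sketch of equations 6 and 7 is shown below, where `hh` is a hypothetical DataFrame of household-level differences and its file and column names are again assumptions rather than the actual pipeline.

```python
# Hypothetical household-level differences (only households electrified in both
# waves, as discussed above); column names are illustrative.
hh = pd.read_csv("household_differences.csv")

# Equation 6: household-level labor demand, with changes in log wages and in
# pump ownership as controls.
demand_spec = ("d_log_hired_days ~ d_reliability + reliability_2005"
               " + d_log_wage_men + d_log_wage_women + d_pumps_owned")
res_demand = smf.ols(demand_spec, data=hh).fit(
    cov_type="cluster", cov_kwds={"groups": hh["district"]})

# Equation 7: check that reliability does not itself drive pump ownership.
res_pumps = smf.ols(
    "d_pumps_owned ~ d_reliability + reliability_2005", data=hh).fit(
    cov_type="cluster", cov_kwds={"groups": hh["district"]})
print(res_demand.summary())
print(res_pumps.summary())
```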
## 5 Results
### Effect of electricity quality on wage rates
Table 4 presents the results of the difference-in-differences regressions for the impact of a change in electricity reliability (both positive and negative) on the change in men's and women's casual agricultural labor wage rates. Although I present the results for both positive and negative treatments, only the positive treatments can be assumed to be as good as random, and the estimates of the effects of negative treatment may not be as robust. First, I find that reliability has a negative effect on both logs and levels of casual agricultural labor wages, for both men and women, although the effect on women's wage rates is considerably smaller, and not statistically significant. For every additional hour's increase in electricity reliability in a village, men's casual agricultural wage rates reduce by nearly Rs. 2, or 1.34% in relative terms. The effect on both levels and logs is statistically significant at the 1% level. On the other hand,
positive treatments have small and statistically insignificant impacts on women's wage rates.
A decrease in quality, however, has consistently larger effects on both men's and women's wages. For every additional hour's reduction in electricity availability, men's wage rates rise by over Rs. 2 (significant at the 1% level), although the effect on the log of wage rates is small (0.61%) and not significant. Similarly, women's wage rates increase by Rs. 1.2 (significant at the 5% level), although the effect on log wages is not statistically significant. Nevertheless, this implies that the effects of changes in reliability are asymmetrical, particularly for women's wage rates, and wage rates are more sensitive to reductions in reliability than to increases. But these estimates are not as robust as those for improvements in reliability, as the assignment to treatment may be biased by initial characteristics.
Interestingly, even the pre-treatment level of reliability has a statistically significant negative effect on men's wages: a Rs. 1.4 or a 1.04% reduction for every hour's increase, or a Rs. 1.8 or 0.91% increase for every hour's reduction (all significant at the 1% level). From this, I can conclude that the effect of the overall level, not the change alone, is negative. However, for women's wages, the effect is again statistically insignificant in all cases except on the levels of the wage rate in the negative treatment case, where wages increase by Rs. 1.4, significant at the 5% level. An obvious consequence of the different effects on men's and women's wages is that better reliability leads to a smaller wage gap. From the estimated coefficients, it is clear that wages increase over time for both men and women, after controlling for various factors, although villages where reliability improves would experience a smaller widening of the gender
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{8}{c}{Coefficients (standard errors)} \\ \cline{2-9} & \multicolumn{4}{c}{Positive Treatment} & \multicolumn{4}{c}{Negative Treatment} \\ \cline{2-9} & \multicolumn{2}{c}{Women’s Wage Rates} & \multicolumn{2}{c}{Men’s Wage Rates} & \multicolumn{2}{c}{Women’s Wage Rates} & \multicolumn{2}{c}{Men’s Wage Rates} \\ & \multicolumn{2}{c}{(2012 Rs.)} & \multicolumn{2}{c}{(2012 Rs.)} & \multicolumn{2}{c}{(2012 Rs.)} & \multicolumn{2}{c}{(2012 Rs.)} \\ & \multicolumn{2}{c}{\(N=658\)} & \multicolumn{2}{c}{\(N=720\)} & \multicolumn{2}{c}{\(N=580\)} & \multicolumn{2}{c}{\(N=637\)} \\ \cline{2-9} Independent Variable & Levels & Logs & Levels & Logs & Levels & Logs & Levels & Logs \\ \hline Intercept & 29.2885*** & -0.2267*** & 51.7882*** & -0.0721 & 13.2521 & -0.386*** & 29.7162*** & -0.2218*** \\ & (6.8389) & (0.0642) & (8.1564) & (0.0591) & (9.2220) & (0.0663) & (9.5753) & (0.0547) \\ \(\Delta\) Reliability (hours) & -0.7196 & -0.0060 & -1.9521*** & -0.0134*** & -1.1836** & -0.0065 & -2.0438*** & -0.0061 \\ & (0.5177) & (0.0044) & (0.6213) & (0.0042) & (0.5919) & (0.0045) & (0.7467) & (0.0049) \\ Pre-treatment Reliability (hours) & -0.6470 & -0.0047 & -1.4216*** & -0.0104*** & -1.1436*** & -0.0050 & -1.7940*** & -0.0091*** \\ & (0.3962) & (0.0032) & (0.5042) & (0.0032) & (0.4914) & (0.0033) & (0.5267) & (0.0029) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The impact of reliability on men’s and women’s casual agricultural labor wage rates. Positive treatment refers to the set of villages where reliability increased or stayed the same. Negative treatment refers to the set of villages where reliability decreased or stayed the same. Robust standard errors clustered at the district level. *Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level
wage gap when compared to a village where reliability declines.
The effect of reliability on wage rates, however, is difficult to explain intuitively and will require studying the effects of reliability on labor demand and supply.
### Time spent in fuel collection: A potential labor supply channel
In order to study the possible effect of electricity reliability on the supply of labor, I estimate the impact of changes in reliability on the time spent by men and women on fuel collection. Table 5 presents the results.6 Reliability has a negative effect on the time spent on fuel collection. In villages where reliability improved, every additional hour of power available reduces the time spent by women on fuel collection by 12.6 minutes per week, which is statistically significant at the 1% level, while men's fuel collection time reduces by 6.1 minutes per week, significant at the 10% level. This can amount to a very large reduction of fuel collection time if the quality changes sizeably.
Footnote 6: I include only villages where reliability increases, as the treatment in these villages can be assumed to be random. See Appendix A.
Clearly, women's fuel collection times are affected more, as fuel collection, like other domestic chores, is typically carried out by women. Time relieved from household chores could substantially increase the supply of labor and has been hypothesized to be a channel for electricity-facilitated increases in labor participation in past studies as well (Dinkelman, 2011). However, it may be more complicated, especially in the Indian context, and in other overwhelmingly agrarian rural economies. In general, time saved could increase women's labor market participation, which would increase the supply of labor and may have a detrimental effect on wages, or the
\begin{table}
\begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{Coefficients (standard errors)} \\ \cline{2-3} & \(\Delta\) Women’s Fuel Collection Time & \(\Delta\) Men’s Fuel Collection Time \\ & (minutes per week) & (minutes per week) \\ & \(N=2667\) & \(N=2513\) \\ \hline Intercept & 63.9049 & -91.7331* \\ & (79.3594) & (53.9066) \\ \(\Delta\) Reliability (hours) & -12.6077*** & -6.1298* \\ & (4.2664) & (3.4914) \\ Pre-treatment Reliability (hours) & 0.6603 & -0.3436 \\ & (3.0550) & (2.7551) \\ \hline \hline \end{tabular}
\end{table}
Table 5: The impact of reliability on men’s and women’s fuel collection times. Includes villages where reliability did not reduce. Robust standard errors clustered at the district level.
*Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level
time could be utilized in leisure, in which case the labor market would be virtually unaffected.7 A third possibility is that women take up more domestic chores that used to be performed by men, freeing up men's time burden of domestic work, which could increase men's labor market participation and would also increase labor supply. This may explain why men's wage rates fall more than women's wage rates. Alternatively, it could be that women are hired more in the labor market because farmers want to hire cheaper labor, but this possibility only works on the assumption that men's and women's labor is thought to be fungible, which is unlikely to be the case. Another alternative is that women could provide labor in household-owned businesses, or, more likely in the case of rural agricultural households in India, women could supply labor to farms owned by their households, a common phenomenon on Indian farms. While this would not exactly increase labor supply in a strict sense, since it is not paid work, it could reduce the demand for hired labor. Thus, it is also important that I study whether reliability, or even fuel collection time, has an impact on labor demand.
Footnote 7: Unfortunately, there is no variable that aptly quantifies leisure time, and I cannot test this empirically.
### Labor demand effects of electricity reliability
To estimate the effects of electricity quality on labor demand, I study how changes in reliability affected the number of person-days of agricultural labor hired by farms. Table 6 presents the estimated effects on the levels and logs of the number of days of hired work. Neither the changes in reliability nor the pre-treatment level of reliability have statistically significant impacts on the number of days of labor hired by farms. I also do not find any statistically significant effects of changes in wage rates, in either levels or logs, on demand. This suggests that labor demand is own-wage inelastic, for both men and women.
An increase in the number of pumps, however, has a large, statistically significant positive impact on the demand for labor. Although the distribution of pumps may not be random, thus making the effect sizes difficult to interpret, the positive effect of pump ownership on log changes in the number of hired days is significant at the 1% level. Pump ownership increases the number of person-days of hired labor by 57 days, or 30% in relative terms, which is close to two months for one laborer. This effect does not disappear even after controlling for baseline levels of household affluence using the 2004-5 levels of expenditure. The positive effect signifies that electrical equipment such as irrigation pumps does not supplant labor, and instead drives up the demand for labor. This increase in demand could be because pumps may require labor to operate, or because pumps could increase agricultural productivity, incentivizing higher investments in agriculture. It could also arise from pumps facilitating agriculture during the dry seasons,
which is most likely. Thus, the possibility that electricity quality reduces labor wages through the mechanization of agriculture may be ruled out. To ensure that reliability does not affect labor demand through increased pump ownership, I test the effect of changes in reliability on pump ownership. Table 7 presents the results, and clearly, there is no statistically significant increase (or decrease) in pump ownership due to increases in reliability.
## 6 Conclusion
I use village and household-level data from rural India to study how agricultural labor wages responded to improvements in electricity quality between 2004-5 and 2011-12. Using the India Human Development Survey, which includes a panel of 1406 villages, I employ a difference-in-differences design to eliminate village and time fixed effects. The major finding is that villages where electricity quality increases substantially experience a smaller increase in casual agricultural labor wage rates than villages where there are smaller or no improvements in electricity reliability. The effect is larger and statistically significant for men's wage rates, while the effect on women's wage rates is statistically insignificant, although still negative. As a result, villages with large improvements in electricity quality see a smaller widening of the gender wage gap,
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{Coefficients (standard errors)} \\ \hline \multicolumn{3}{c}{\(\Delta\) Number of Person-days of Hired Farm Labor} \\ \hline \(N=593\) & Levels & Logs \\ \hline Intercept & -39.3914 & -0.9263*** \\ & (26.9972) & (0.2527) \\ \(\Delta\) Reliability (hours) & -0.0521 & 0.0104 \\ & (1.0093) & (0.0160) \\ Pre-treatment Reliability (hours) & 0.9098 & 0.0055 \\ & (0.8845) & (0.0127) \\ \(\Delta\) Women’s agricultural labor Wage Rate (2012 Rs.) & 0.2541* & \\ & (0.1374) & \\ \(\Delta\) Men’s agricultural labor Wage Rate (2012 Rs.) & -0.1100 & \\ & (0.1236) & \\ \(\Delta\) Log Women’s agricultural labor Wage Rate (2012 Rs.) & & 0.5450** \\ & & (0.2423) \\ \(\Delta\) Log Men’s agricultural labor Wage Rate (2012 Rs.) & & -0.4286 \\ & & (0.2850) \\ \(\Delta\) Number of Electric Pumps Owned & 56.9618*** & 0.3035** \\ & (15.1355) & (0.0827) \\ \hline \hline \end{tabular}
\end{table}
Table 6: The impact of reliability on the person-days of hired farm labor. Includes villages where reliability did not reduce. Robust standard errors clustered at the district level. *Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level
both in absolute and relative terms. This effect of electricity quality on wages also helps explain the negative effect of reliability on non-farm income found by Samad and Zhang (2016).
Since changes in electricity reliability could affect wage rates through labor demand and supply, I also investigate such possibilities. In the absence of appropriate variables for supply, I study a channel that could potentially affect supply instead: the effect of reliability on the time spent by men and women in fuel collection, hypothesizing that a smaller time burden of household chores could encourage greater participation in labor markets. I find that the effect of reliability on women's fuel collection times is statistically significant (and on men's times to a lesser extent). Women's fuel collection times are reduced more for every additional hour of electricity availability, as they already spend a greater time in fuel collection in comparison to their male counterparts. Despite the smaller time saved by men, the effect on men's wages may be larger because the time saved by women may not necessarily be spent on employment opportunities, and women may simply displace men in other domestic chores that are not affected by better-quality electricity. An unrelated implication of the negative effects on the time spent in collecting primitive fuels is that better-quality electricity may facilitate households' ascent on the fuel ladder (van der Kroon et al., 2013).
Since a smaller time burden need not necessarily initiate wage work, and households may also work on household farms, thereby driving down demand, it is also important to investigate how the demand for hired agricultural labor responds to improvements in electricity quality. I find no effect of electricity reliability on the number of person-days of hired work, implying that while supply increases, the demand does not change with electricity quality. Since demand
\begin{table}
\begin{tabular}{c c} \hline \hline & Coefficients (standard errors) \\ \cline{2-3} \(N=4410\) & \(\Delta\) Number of Electric Groundwater Pumps Owned \\ \hline Intercept & 0.0985 \\ & (0.0655) \\ \(\Delta\) Reliability (hours) & -0.0050 \\ & (0.0038) \\ Pre-treatment Reliability (hours) & 0.0003 \\ & (0.0026) \\ \hline \hline \end{tabular}
\end{table}
Table 7: The impact of reliability on the ownership of electric pumps. Includes villages where reliability did not reduce. Robust standard errors clustered at the district level.
*Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level
does not increase, surplus labor aggravates the levels of disguised unemployment and drives down wages. Thus, better quality electricity indirectly causes an overall negative effect on the agricultural labor market and labor wage rates.
What does, however, increase the demand for labor is the adoption of electric groundwater pumps, as these would require more hired labor to operate, and can lead to an increase in the number of cropping seasons driving up labor demand during the off-season. This has serious policy implications for governments in the Global South. Interventions such as household electricity connections or improvements in the quality of electricity supplied that could increase labor participation may be detrimental to wage rates due to the already saturated labor markets with persistent disguised unemployment, but the situation may be alleviated if improvements in electricity quality are also accompanied by other changes that could help absorb the surplus labor. Since the labor supply effects of electricity access and quality in rural areas are well-known, governments should focus on simultaneously enabling farms to make good use of that electricity through pumps and other infrastructure or invest in alternate avenues to proportionately increase the demand for labor.
|
2310.20489 | Northward Propagating Versus Non-propagating BSISO over South Asia:
Horizontal Advection Driven Moisture Mode Within a Vertically Sheared
Background | The Boreal Summer Intraseasonal Oscillation (BSISO) is a pronounced mode of
tropical variability. Here, we identify two types of BSISO events, one which
propagates northward over South Asia (SA) from the equatorial Indian Ocean
(EIO), and the other which doesn't. Contrasting their behaviour shows that
northward propagation occurs in multiple stages after convection is initiated
over the EIO. First, convection moves into the southern Arabian Sea (AS) due to
moistening of the free troposphere via horizontal BSISO anomalous winds acting
on the background moisture distribution, and forms a northwest-southeast
(NW-SE) oriented convection band. Subsequently, in the presence of an easterly
vertical shear of monsoon winds and meridional gradient of anomalous vertical
velocity, a NW-SE oriented tilting term is generated that results in a tilted
gyre north of the existing convective anomaly and south-easterly BSISO winds
over the South Asian landmass. In the second stage, these winds tap the ambient
north-westward moisture gradient and help move convection further north over
land. Moreover, background winds advect anomalous moisture to initiate
convection over the Bay of Bengal. For non-propagating events, though a Rossby
gyre results as a response to nascent EIO convection, it is smaller, thus BSISO
advection of moisture is weaker and does not initiate convection over the
southern AS. In turn, the meridional gradient of anomalous vertical velocity is
weak, and the background vertical shear does not generate sufficient tilting
over the northern AS. Thus, the convective wind response stalls, and
large-scale convection does not propagate north of 15N. Thus, free-tropospheric
moisture advection and vortex tilting due to the background vertical shear work
together for robust northward propagation of the BSISO. | Sambrita Ghatak, Jai Sukhatme | 2023-10-31T14:29:51Z | http://arxiv.org/abs/2310.20489v2 | # Northward Propagating versus Non-propagating Boreal Summer Intraseasonal Oscillations
###### Abstract
The Boreal Summer Intraseasonal Oscillation (BSISO) is a pronounced mode of tropical intraseasonal convective variability during the boreal summer. One of the most prominent features of the BSISO is the northward movement of convection in the South Asian monsoon region. Using long-term observational and reanalysis data, we identify two types of BSISO events, one which propagates northward over South Asia from the equatorial Indian Ocean, and the other which doesn't. By investigating the difference between these two types of events, we identify the critical mechanisms involved in northward propagation.
A moisture budget reveals that for propagating cases, when organized convection first appears over the equatorial Indian Ocean, easterlies on the northern flank of the Rossby wave response to enhanced convection (cyclonic), as well as those on the southern flank of the Rossby wave response (anticyclonic) to the suppressed convection further north, act on the climatological moisture distribution and rapidly moisten the atmosphere over the southern Arabian Sea. This results in the characteristic northwest-southeast-oriented convection observed in the BSISO. Now, as this tilted belt of enhanced convection is present south of the previous cycle of suppressed convection associated with subsidence, in the presence of background easterly vertical shear of the monsoon winds, a latitudinally tilted vortex tilting term is generated due to the meridional gradient in vertical velocity. The generation of positive vorticity anomalies over the Arabian Sea, more than over the Bay of Bengal, leads to a tilted gyre north of the convective anomaly. As a result, anomalous winds over the northern Indian landmass, particularly north of 20N, become south-easterly. These winds tap into the north-westward moisture gradient that is present over much of the northern Indian landmass and help move the convection further north over India. Moreover, the south-westerly background monsoon winds advect anomalous moisture, thus initiating convection over the Bay of Bengal.
For non-propagating cases, though a well-formed Rossby gyre results as a response to the nascent convection in the equatorial Indian Ocean, easterlies over the Arabian Sea are much weaker and are unable to moisten the atmosphere sufficiently to initiate strong convection over the southern Arabian Sea. In the absence of strong vertical velocity due to a lack of convection, the meridional gradient of vertical velocity is weak, and the background vertical shear does not generate sufficient tilting over the northern Arabian Sea. In all, the convective wind response stalls, and the large-scale convection does not
propagate north of 15N. Taken together, this work shows that the northward propagating BSISO over South Asia is a moisture mode acting under the influence of background vertical shear, where vortex tilting as well as horizontal advection work hand in hand, and moistening over the Arabian Sea due to the strong easterly wind anomalies acting on the mean moisture gradient is critical for the BSISO to propagate over the Indian landmass.
## 1 Introduction
Intraseasonal oscillations (ISOs) in the tropical atmosphere exhibit pronounced seasonality (Adames et al., 2016; Jiang et al., 2018). While the Madden-Julian Oscillation (MJO) is the dominant ISO signal during boreal winter, the Boreal Summer Intraseasonal Oscillation (BSISO) is the most significant intraseasonal signal during northern hemisphere summer, particularly in the Indo-Pacific sector (Jiang et al., 2018; Chen and Wang, 2021). Like the MJO in boreal winter, the BSISO is thought to be one of the most important sources of sub-seasonal variability and is known to influence various weather systems on different scales (Chen and Wang, 2021). Though both the MJO and BSISO most often develop in the Indian Ocean and have similar timescales (Madden and Julian, 1971, 1972), with both having a time period within 30-60 days, they differ markedly in their spatial patterns. While the MJO is largely symmetric about the equator and predominantly characterized by equatorial eastward propagation, the spatial structure and propagation characteristics of the BSISO are more complicated (Chen and Wang, 2021; Wang and Sobel, 2022). Similar to the MJO, the BSISO also has an equatorial eastward-moving component (though much weaker) that moves from the Indian Ocean to the western Pacific Ocean, but unlike the MJO, the most prominent feature of the BSISO is its northward propagation from the equatorial Indian Ocean over the South Asian monsoon region, and northwestward propagation from the equator over the western Pacific (Wang and Sobel, 2022). Traditionally, the MJO and BSISO have been seen as separate low-frequency ISO modes with similar time scales, but the conceptual boundary between the two is not very clear (Wang and Sobel, 2022), and a few recent studies do not see the BSISO and MJO as separate phenomena (Jiang et al., 2018; Wang and Sobel, 2022). These studies treat the BSISO as a "northern summer incarnation of the MJO" (Wang and Sobel, 2022). Thus, along with the MJO, understanding the BSISO can be thought of as one of the fundamental questions in tropical atmospheric dynamics.
The BSISO has a profound influence on global weather systems and extremes, such as floods and droughts (Mooley and Parthasarathy, 1983), tropical cyclones (Kikuchi and Wang, 2010), monsoon low-pressure systems (Goswami et al., 2003). Particularly in South Asia, the BSISO heavily impacts the active and break cycles of the Indian monsoon (Pillai and Sahai, 2014). By impacting monsoon onset, active-break phases, low-pressure systems, and depressions, it dictates the overall pattern of monsoon rainfall (Goswami and Xavier, 2005). As South Asia is heavily dependent on monsoon rains, and this region also faces a myriad of disasters during the rainy season, understanding and successfully predicting BSISO is of great social and economic concern and remains a significant challenge (Neena et al., 2017). Because of its impact, there have been many studies in the last few decades to understand the BSISO, including theoretical, modeling, and observational perspectives. While progress has been made, the system remains elusive, and particularly the mechanism behind its striking northward movement has not been clearly understood yet (Wang and Sobel, 2022).
Over the South Asian Monsoon Region (SAMR), the northward propagating BSISO was first observed by Yasunari (1979) and Sikka and Gadgil (1980). Broadly, there are two schools of theories based on simplified models that attempt to produce northward propagating equatorial modes (Jiang et al., 2018; Wang and Li, 2020). One avenue is to understand the BSISO as a modified equatorial Rossby wave that interacts with the monsoonal background flow (Wang and Li, 2020). There are variations within this school of thought, but the basic understanding of northward propagation of convection is via moisture convergence in the boundary layer (Jiang et al., 2004; Bellon and Sobel, 2008). On the other hand, recently, based on the "moisture mode" theory (Sobel et al., 2001; Sobel and Maloney, 2013; Sukhatme, 2014), it has been shown that prognostic moisture is essential to produce northward propagating modes, and the perturbation winds of Rossby response act on the mean gradient of
moisture to give rise to northward movement (Adames et al., 2016; Jiang et al., 2018; Chen and Wang, 2021). While successful to some extent, these theories come with their own set of caveats when it comes to capturing the full set of features associated with the northward propagation of the BSISO.
In this paper, inspired by the work on MJO (Kim et al., 2014), we have identified two types of BSISO events, one where the convection moves northward to South Asia from the Equatorial Indian Ocean (EIO), and one where convection does not propagate northward in spite of a strong start at the EIO. We investigate the differences between the propagation mechanisms of these two categories and identify the critical mechanisms behind the northward propagation. Specifically, we employ a "moisture mode" framework, where moisture dictates the convection, as well as a vorticity budget analysis to understand the coupling with circulation.
## 2 Data and Methodology
Daily data from the ERA5 reanalysis project (Hersbach et al., 2020) serve as the main data set for this study. Specifically, we have used 25 years of horizontal winds, vertical velocity, and specific humidity data at 17 pressure levels (1000 to 200 hPa) with an interval of 50 hPa. The horizontal resolution of the data used in calculations is 2.5\({}^{\circ}\). Our analysis spans the boreal summer, i.e., May through October (MJJASO), from 1985 to 2009. These data are used to calculate the derived fields presented in this paper. Some fields, such as relative vorticity and the various terms of the vorticity and moisture budgets, are computed using the Windspharm package (Dawson, 2016). Daily, 2.5\({}^{\circ}\) horizontal resolution outgoing longwave radiation (OLR) data from the National Oceanic and Atmospheric Administration (NOAA) satellites serve as a proxy for moist tropical convection (Liebmann and Smith, 1996).
To isolate the BSISO signal, we use a 25-80 day band-pass filter, following Lawrence and Webster (2002). As there is a prominent 10-20 day mode in the same region (Chatterjee and Goswami, 2004), we use a lower cutoff greater than 20 days, though slightly changing the filter cutoff doesn't affect our results. We use the Lanczos band-pass filtering method (Duchon, 1979), and prior to filtering, the annual cycle of the time series in question is removed by subtracting the mean and the first three Fourier harmonics. This annual cycle is referred to as the background signal in this paper.
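For reference, the Lanczos weights of Duchon (1979) can be constructed in a few lines; the numpy sketch below builds a 25-80 day band-pass filter as the difference of two low-pass filters. The 201-point window length is an illustrative choice, not necessarily the one used in this study.

```python
import numpy as np

def lanczos_lowpass_weights(window, cutoff):
    """Duchon (1979) low-pass Lanczos weights; cutoff in cycles per day."""
    order = (window - 1) // 2
    k = np.arange(1, order + 1)
    sigma = np.sin(np.pi * k / order) * order / (np.pi * k)  # sigma smoothing factor
    w = (np.sin(2.0 * np.pi * cutoff * k) / (np.pi * k)) * sigma
    return np.concatenate([w[::-1], [2.0 * cutoff], w])

# 25-80 day band-pass: low-pass at 1/25 cpd minus low-pass at 1/80 cpd.
window = 201  # illustrative; should span several times the longest retained period
bp = (lanczos_lowpass_weights(window, 1.0 / 25.0)
      - lanczos_lowpass_weights(window, 1.0 / 80.0))

# Apply by convolution to a daily anomaly series (annual cycle already removed),
# e.g. filtered = np.convolve(anomaly, bp, mode="same")
```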
To distinguish between propagating and non-propagating BSISO events, we use two reference boxes, one over the EIO (70\({}^{\circ}\)-90\({}^{\circ}\)E, 0\({}^{\circ}\)-5\({}^{\circ}\)N), and another over the Indian landmass (70\({}^{\circ}\)-90\({}^{\circ}\)E, 17.5\({}^{\circ}\)-22.5\({}^{\circ}\)N). The standard deviation of box-averaged OLR in the EIO is \(\sim\) 18 W m\({}^{-2}\) and over the land is \(\sim\) 12 W m\({}^{-2}\). We define a BSISO event to be propagating if the lowest value of the box-averaged 25-80 day filtered OLR anomaly is below -18 W m\({}^{-2}\) in the EIO box, and, after attaining the lowest value in the EIO, it attains its lowest value over the land box within the next 20 days. The lowest value in the land box must be below -13.5 W m\({}^{-2}\), which is substantially lower than one standard deviation of this region, and three-fourths of the standard deviation of the EIO box. These criteria allow us to isolate cases that propagate northward with a substantially strong convective signal. The time interval from the EIO box minimum to the land box minimum for all of our propagating cases is 10-20 days. Using these criteria, we obtain a total of 25 propagating cases from 25 years. Note that Day 0 is defined to be when the box-averaged OLR anomaly attains its minimum in the EIO.
Similarly, to isolate non-propagating events, we use the same criteria over the EIO box, but constrain the lowest value in the land box to be above -6 W m\({}^{-2}\), i.e., one-third of the standard deviation of the EIO box, and half of the standard deviation of the land box. Thus, we are able to isolate cases that started with almost equal strength in the EIO but couldn't propagate into the Indian land region with a substantial convective signal. This results in 14 such cases from 25 years of data. Our criteria for isolating propagating and non-propagating cases are quite strict, but they identify very distinct cases, which is helpful for comparing their propagation characteristics. We confirm that the results are insensitive to slight changes in the box size, location, and threshold values.
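These selection criteria translate almost directly into code. The following sketch assumes a numpy array `olr` of 25-80 day filtered OLR anomalies with dimensions (time, lat, lon), and approximates "attains its lowest value" with a simple local-minimum check over a 21-day window, which is an illustrative simplification of the procedure described above.

```python
import numpy as np

def classify_bsiso_events(olr, lat, lon):
    """Sketch of the event-selection criteria; olr is (time, lat, lon)."""
    def box_mean(latmin, latmax, lonmin, lonmax):
        ii = (lat >= latmin) & (lat <= latmax)
        jj = (lon >= lonmin) & (lon <= lonmax)
        return olr[:, ii][:, :, jj].mean(axis=(1, 2))

    eio = box_mean(0.0, 5.0, 70.0, 90.0)       # equatorial Indian Ocean box
    land = box_mean(17.5, 22.5, 70.0, 90.0)    # Indian landmass box

    events = []
    for t in range(olr.shape[0] - 20):
        # Day 0 of a candidate event: a local EIO minimum below -18 W m^-2
        if eio[t] < -18.0 and eio[t] == eio[max(0, t - 10):t + 11].min():
            land_min = land[t:t + 21].min()    # land-box minimum within 20 days
            if land_min < -13.5:
                events.append((t, "propagating"))
            elif land_min > -6.0:
                events.append((t, "non-propagating"))
    return events
```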
After isolating the strong propagating and non-propagating cases, we construct composites. To understand the moist processes associated with the movement of convection, and the evolution of circulation, we perform moisture and vorticity budget analyses, respectively. The terms in each budget are calculated first for the individual cases and then averaged to make the composites. Detailed descriptions of the budgets are given in their respective sections. For all the constructed composites, we have performed significance testing, and we only show signals that are statistically significant at the 95% confidence level.
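Schematically, the compositing and significance masking could look as follows, continuing from the event-selection sketch above (`olr` and `events`); the point-wise one-sample t-test is one common choice for this kind of mask, assumed here rather than taken from the text.

```python
import numpy as np
from scipy import stats

# Stack the OLR anomaly maps of the propagating events at a chosen lag
# relative to each event's Day 0.
lag = 0
stack = np.stack([olr[t0 + lag] for t0, kind in events if kind == "propagating"])

composite = stack.mean(axis=0)
# Point-wise one-sample t-test against zero; keep only grid points that are
# statistically significant at the 95% confidence level.
tstat, pval = stats.ttest_1samp(stack, popmean=0.0, axis=0)
composite_masked = np.where(pval < 0.05, composite, np.nan)
```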
## 3 Horizontal Structure of the two types of BSISO events
In this section, we present the horizontal composite characteristics of the propagating and non-propagating BSISO events. We begin with the propagating composite, which is shown in Figure 1; specifically, we show OLR and 850 hPa horizontal wind anomalies with an interval of four days. On Day -12, we see a pair of anticyclonic gyres, one in each hemisphere. These two gyres are not symmetric: the southern one is more zonally oriented, while the Northern Hemisphere anticyclonic gyre has a clear NW-SE tilt. The OLR anomaly associated with the circulation is positive (indicating suppressed convection) and it also has a similar NW-SE tilt. The strongest OLR anomaly is seen over the Equatorial Indian Ocean (EIO), particularly to the eastern side, and over the Arabian Sea (AS). The signals visible in our domain of interest can be understood as a pair of Rossby gyres that straddle the equator to the west of anomalous convective activity, i.e., they are part of a Gill-type response (Jiang et al., 2018; Chen and Wang, 2021). These are associated with easterly wind anomalies along the equator, and westerly wind anomalies away from the equator. To the east, there exists a Kelvin wave response (not shown), but here we focus only on the Rossby gyre, as our interest is in the northward propagating BSISO over India, and this region is dominated by the Rossby part of the circulation. An immediate question one can ask is: why is the Northern Hemisphere Rossby response stronger and tilted from NW to SE? This will turn out to be a central question as we proceed in this paper, and it will be answered later, particularly in the vorticity budget analysis.
On Day -8 (Figure 1), the Northern hemisphere anticyclonic gyre becomes compact and moves slightly northward, while in the EIO, strong easterlies continue to prevail. The positive OLR anomalies move further north and engulf the entire AS, the Bay of Bengal (BOB), and a significant part of the Indian landmass. Meanwhile, over the EIO, a hint of enhanced convection (negative OLR anomaly) appears. The enhanced convection gets stronger by Day -4, and it crosses 10N into the AS; simultaneously, the existing positive OLR anomalies in the AS recede. Interestingly, over the BOB, the positive anomalies (suppressed convection) don't recede as much, and thus the area of suppressed convection gets tilted. In the wind anomalies, we see a new cyclonic Rossby-type gyre in the EIO, while the anticyclonic circulation continues over the land region; thus, between 5-20N, we notice very strong anomalous easterlies. On Day 0, the Rossby gyre in the EIO gets stronger and very well-marked with the strengthened convection anomaly; further, a weak Rossby-type circulation is also visible south of the equator, resulting in strong easterlies between 10-20N, and westerlies near the equator. This can again be understood as a modified Gill-type response, but now with enhanced convective heating. Interestingly, similar to Day -4, convection moves north into the central AS, but it doesn't enter the Bay of Bengal. Thus, the convection that started with a zonally oriented structure in the EIO on Day -8 progresses into the AS by Day 0, and a clear NW-SE tilted structure gets established.
In Figure 1, on Day 4, convection enters deep into the northern AS, and it also moves slightly north over the BOB. The most interesting feature of this day is in the circulation pattern: the cyclonic Rossby gyre that formed over the EIO in response to the enhanced convection now gets abruptly tilted from NW to SE and moves north. In other words, the cyclonic vortex quickly "jumps" north into the AS between Day 0 and Day 4, but in contrast, it moves slowly in the BOB sector. On Day 8, anomalous convection appears over peninsular India, and also over the north-west of India. The tilted structure of the vortex becomes more prominent and moves further north, and the south-easterlies over the land become more prominent and stronger. Over the EIO, we now see strong westerlies associated with this cyclonic vortex, and the convection dies down, so essentially, the whole
convective belt has moved north from the EIO. On Day 12, anomalous convection covers almost the entire Indian landmass. In the EIO, the next cycle of the BSISO starts as positive OLR anomalies appear over the region. Indeed, this gives the period of this mode as approximately 40 days.
Shifting our focus to the composite of the non-propagating cases (Figure 2), on Day -12, as in Figure 1, we see an anticyclonic Rossby gyre associated with suppressed convection north of the equator and its southern hemisphere counterpart. But note that the southern gyre is rather weak; indeed, the suppressed convection is much weaker compared to the propagating cases, and it is limited to 20N. On Day -4, the area of enhanced convection gets bigger and stronger, and a small Rossby gyre comes into being as a response to this heating. Interestingly, the area of suppressed convection north of the nascent enhanced convection doesn't weaken and remains almost stationary. On Day 0, convection strengthens, and the Rossby gyre is more prominent, with westerlies over the EIO and easterlies between 10-20N. The difference with the Rossby gyre of the propagating cases is in its extent; specifically, the easterlies associated with the gyre of the propagating composite extended west of 70E, while for the non-propagating cases, they are mostly confined to the east of 70E. On Day 4, enhanced convection remains stationary (though slightly weaker), along with the Rossby gyre, while the area of suppressed convection gets much weaker. In stark contrast to the propagating cases, in spite of having a strong convective signal in the EIO, the BSISO doesn't propagate northward but starts to weaken. Subsequently, the convective signal disappears, and a new area of suppressed convection appears over the EIO.
## 4 Moisture budget
Tropical convection is known to be tied to column-integrated moisture and the environmental moisture distribution on various timescales (Bretherton et al., 2004). Many studies regarding the MJO/BSISO
Figure 1: Composite of \(25-80\) day filtered OLR (W m\({}^{-2}\); shading) and 850 hPa wind anomalies (quivers) for the boreal summer (MJJASO) from Day \(-12\) to Day 20 for the propagating cases. OLR and wind vectors shown are statistically significant at 95% confidence level.
Figure 3: Composite of 25-80 day filtered column-integrated specific humidity (scaled by the latent heat of vaporization L) (\(10^{6}\)J s\({}^{-2}\); shading) and 850 hPa wind anomalies (quivers) for the boreal summer (MJJASO) from Day -12 to Day 20 for the propagating cases. Wind vectors shown are statistically significant at 95% confidence level.
Figure 2: Same as Figure 1, but for non-propagating cases.
have shown coherence between OLR or precipitation anomalies and column-integrated moisture (specific humidity) anomalies, with moisture anomalies sometimes leading precipitation (Kiranmayi and Maloney, 2011; Adames and Wallace, 2015; Kim et al., 2014; Jiang et al., 2018). Thus, a column-integrated moisture budget has been used to understand the processes involved in BSISO (Adames et al., 2016; Chen and Wang, 2021) and MJO (Adames and Wallace, 2015; Adames et al., 2016) dynamics. Moist static energy (MSE)/moist entropy (ME) budgets have also been employed (Sobel et al., 2014; Jiang et al., 2018; Wang and Li, 2020), as they include additional energy fluxes that may affect convection, and MSE/ME are nearly conserved variables, though many of these studies concluded that MSE/ME anomalies are in fact dominated by moisture anomalies. Following this line of work, the “moisture mode” framework has emerged as a promising avenue for understanding certain large-scale moist tropical systems, which at the broadest level means that these modes of variability are dictated by moisture anomalies and that these modes would not exist in any mathematical model that does not contain a prognostic equation for moisture (Sobel et al., 2001, 2014; Sobel and Maloney, 2013; Sukhatme, 2014; Kim et al., 2014).
We begin our moisture budget analysis of the BSISO by examining OLR and column-integrated specific humidity. Specifically, Figures 3 and 4 show the 25-80 day filtered column-integrated (1000 to 200 hPa) specific humidity anomalies for propagating and non-propagating BSISO cases, respectively. Comparing with Figure 1 and Figure 2, we clearly see that BSISO-related specific humidity and OLR (convection) anomalies are collocated. Specifically, large negative OLR anomalies associated with strong convection are accompanied by large positive anomalous column-integrated specific humidity and vice-versa. Thus, an understanding of how the column-integrated moisture evolves should provide insight into the evolution of convection. The relevant equation reads,
\[[\frac{\partial q^{\prime}}{\partial t}]=-[(\mathbf{V}.\nabla_{h}q)]^{\prime}-[( \omega\frac{\partial q}{\partial p})]^{\prime}-P^{\prime}+E^{\prime}+R, \tag{1}\]
where \(q\) is the specific humidity, \(\mathbf{V}=u\mathbf{i}+v\mathbf{j}\) is the horizontal wind, \(\nabla_{h}=\mathbf{i}(\frac{\partial}{\partial x})+\mathbf{j}(\frac{\partial} {\partial y})\) is the horizontal gradient operator, \(P\) is precipitation, \(E\) is evaporation, and \(\omega\) is the vertical velocity in pressure coordinates. Here, prime denotes a 25-80 day anomaly. \(R\) is the residual in the budget (Adames and
Figure 4: Same as Figure 3, but for non-propagating cases.
Figure 5: Contours of the composite mean of 25-80 day anomaly terms in Equation 1 (scaled by the latent heat of vaporization, \(L\)) and their combinations (for column process) for propagating cases on (a) Day -8 and (b) Day -4. Units of terms are W m\({}^{-2}\). The 850 hPa wind anomalies are overlaid for reference. Wind vectors shown are statistically significant at 95% confidence level.
Figure 6: Same as Figure 5, but on (a) Day 4 and (b) Day 8
Wallace, 2015). The square bracket represents mass-weighted vertical integrals, calculated from 1000 to 200 hPa. The last three terms of the R.H.S are usually bundled together as,
\[-[Q_{2}]^{\prime}/L=-P^{\prime}+E^{\prime}+R, \tag{2}\]
which is called the column-integrated "apparent moisture sink" (Adames and Wallace, 2015). The last four terms in Equation 1 are together called "column-processes" (Chikira, 2014). Further,
\[C^{\prime}=-[(\omega\frac{\partial q}{\partial p})]^{\prime}-P^{\prime}+E^{ \prime}+R=-[(\omega\frac{\partial q}{\partial p})]^{\prime}-[Q_{2}]^{\prime}/L. \tag{3}\]
Hence, this term can be calculated directly by subtracting horizontal advection from moisture tendency. As precipitation and evaporation are not defined at pressure levels, the moisture budget equation at a single pressure level is often written as,
\[\frac{\partial q^{\prime}}{\partial t}=-(\mathbf{V}.\nabla_{h}q)^{\prime}-( \omega\frac{\partial q}{\partial p})^{\prime}-Q_{2}^{\prime}/L, \tag{4}\]
which we have used for our vertical structure investigation. Moreover, the vertical moisture advection term can be broken down into,
\[(\omega\frac{\partial q}{\partial p})^{\prime}=\frac{\partial(\omega q)^{ \prime}}{\partial p}+(q\nabla.\mathbf{V})^{\prime}, \tag{5}\]
where the first and second terms are the vertical and horizontal convergence of moisture flux, respectively. Note that, as many previous studies regarding the MJO/BSISO use MSE/ME budgets, in this paper we multiply all the budget terms by \(L\), the latent heat of vaporization, as this makes it easier to compare our results to those studies.
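As an illustration of these definitions, the column-integrated terms can be assembled as below. This is a sketch only: the pressure-level ordering, the direct use of filtered fields (rather than computing each term from full fields and filtering afterwards), and the placeholder random arrays standing in for the ERA5 fields and their Windspharm-computed horizontal gradients are all simplifying assumptions.

```python
import numpy as np

G = 9.81    # m s^-2
LV = 2.5e6  # J kg^-1, latent heat of vaporization

plev = np.arange(200.0, 1001.0, 50.0) * 100.0  # Pa, 200 to 1000 hPa (17 levels)

def column_integral(x, plev_pa):
    """Mass-weighted vertical integral [x] = (1/g) * int x dp, 1000 to 200 hPa.

    Assumes the last axis of x matches plev_pa, ordered by increasing pressure.
    """
    return np.trapz(x, plev_pa, axis=-1) / G

# Placeholder filtered fields of shape (time, lat, lon, lev); in practice these
# would be 25-80 day filtered ERA5 fields, with the horizontal moisture
# gradients (dqdx, dqdy) computed on the sphere (e.g., with Windspharm).
nt, ny, nx, nz = 10, 73, 144, plev.size
rng = np.random.default_rng(0)
q, u, v = (rng.standard_normal((nt, ny, nx, nz)) for _ in range(3))
dqdx, dqdy = (rng.standard_normal((nt, ny, nx, nz)) for _ in range(2))

Lq = LV * column_integral(q, plev)                       # column moisture, J m^-2
tendency = np.gradient(Lq, 86400.0, axis=0)              # W m^-2 for daily data
hadv = -LV * column_integral(u * dqdx + v * dqdy, plev)  # horizontal advection
column_process = tendency - hadv                         # C' as in equation 3
```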
The composite moisture budgets for the propagating BSISO cases on Day -8 and Day -4, and on Day 4 and Day 8, i.e., as the BSISO develops and propagates, are shown in Figures 5 and 6, respectively. One can see that the first hint of northward movement of the moisture anomaly (as well as convection, as in Figure 1), from the Equatorial Indian Ocean (EIO) to the southern Arabian Sea (AS), happens between Day -8 and Day -4. This northward movement over the AS continues, and on Day 0, we see that the positive moisture anomaly has almost engulfed the AS up to 20N. Strikingly, from Day -8 to Day
Figure 7: Same as Figure 5, but only for Day 0 of the composite of non-propagating cases.
0, there is almost no northward movement of convection over the Bay of Bengal, and this results in the characteristic NW-SE tilted convection band associated with the developing BSISO.
To understand the reason behind this preferential northward propagation over the AS at this stage, we focus on Figure 5, where we have plotted the terms of moisture budget (Equation 1) and the combination of the terms that contribute to "column-process" (Equation 3) for Day -8 and Day -4. Note that residual is included in \(C^{\prime}\). In Figure 5(a), we see that the moisture tendency on Day -8 has entered the southern AS, as expected from the discussion above. The pattern of moisture budget terms indicates that horizontal advection is the main contributor to the moisture tendency in the AS sector, though horizontal advection is not the dominant term in the budget, as shown previously for the BSISO (Adames et al., 2016; Jiang et al., 2018; Wang and Li, 2020; Chen and Wang, 2021). As expected, precipitation and vertical advection are the dominant terms in the budget, but they cancel each other to a large extent. The large precipitation and vertical advection signal is due to the negative moisture anomaly and anomalous subsidence associated with the gyre of the previous cycle of suppressed BSISO convection north of the newly born convective anomaly over the EIO. Evaporation, though small in magnitude, opposes the moistening, because the easterly wind anomalies associated with the BSISO act against the climatological south-westerly monsoonal mean flow and slow down the overall flow in the AS as well as BOB. This indicates that the wind-induced surface heat exchange (WISHE) mechanism is not applicable to the northward propagation of BSISO. Amongst "column-process" terms, vertical advection dominates and we get a net negative value over AS and BOB, with a minor positive contribution over the EIO. Both "column-process" and horizontal advection moisten the EIO, while the "column-process" dries the AS and BOB. In BOB, the magnitude of horizontal advection is close to the opposing "column-process", and in all, we see a drying tendency in the north and weak moistening in the southern BOB. But over AS, horizontal advection is considerably stronger, and it wins over the drying "column-process", and the net result is a moistening of the atmosphere over the AS. This is the primary cause for the preferential movement of moist convection over the AS at this stage in its development, and results in the NW-SE tilted structure of the BSISO.
The moisture budget for Day -4 (Figure 5b) tells a very similar story, though the "column-process" is a little weaker and horizontal advection a little stronger over the AS, so now we have stronger moistening over the AS and the BSISO moves further north. Over the BOB, though the "column-process" is strong, the horizontal advection also becomes slightly larger, and this causes some moistening, particularly in the southern sector of the BOB. These findings, at this stage of the BSISO, are consistent with the MSE budget analysis of Jiang et al. (2018). In peninsular India, we see a slightly different moistening process. Unlike the AS and BOB, the horizontal advection on Day -8 and Day -4 dries the region. Yet, we find overall moistening, as reflected in the tendency term; this is due to the "column-process", more specifically, to vertical advection. A similar pattern was also identified by Jiang et al. (2018), who speculated that this is probably due to topographic influence.
Next, we focus on the moistening over the Indian landmass, particularly beyond 20N. To the best of our knowledge, previous studies regarding northward propagating BSISO do not explain the moisture dynamics beyond 20N, where the BSISO influences the Indian monsoon active and break cycle. As seen in Figure 3, though the positive moisture anomaly enters the Indian landmass beyond 20N on Day 4, the most striking moistening happens between Day 4 and Day 12. To understand the moistening process on Day 4 and Day 8, we examine the moisture budget in Figure 6. On both days, we see a very strong moisture tendency covering almost all the Indian landmass. Similar to Day -8 and Day -4, horizontal advection is the main contributor to the positive moisture tendency. On both days, we see "column-process" (again, mainly due to vertical advection, and to some small extent, evaporation) induces drying at the North-West corner in the Northern AS and desert region of India, but that is much weaker than the large horizontal advection. On both days, the BOB also shows a large positive moisture tendency, and it is also due to horizontal moisture advection. Interestingly, on Day 8, we see slight moistening by horizontal advection over peninsular India, but the drying associated with "column-process" is larger and results in a net weak drying tendency in this region.
For the non-propagating cases, the composite in Figure 4 shows that from Day -8 to Day 0, the
anomalous moisture distribution remains almost unchanged, except for some strengthening of the positive anomaly over the EIO. On Day 4, we see a sign of moistening over the AS, but the positive moisture anomaly fails to penetrate into the AS. The situation remains almost unchanged on Day 8, though by Day 12, we see the sign of a very weak positive moisture anomaly over the AS. Indeed, when compared to the propagating cases, this moistening is negligible. Further, over the EIO, positive moisture anomalies start to weaken from Day 0 onwards and almost vanish by Day 8. This is also evident in the OLR signal in Figure 2. As we see the first well-organized and strong circulation on Day 0, we focus on the moisture budget on Day 0 for the non-propagating BSISO composites to find out why the convection fails to propagate into the AS.
The moisture budget on Day 0 for the non-propagating cases is shown in Figure 7. As in the propagating BSISO cases, the tendency is dominated by the horizontal advection, and the "column-process" (mostly vertical advection) acts against it to reduce the amount of moistening. But here, the tendency is much weaker than in the propagating cases, particularly over the Southern AS. This weak tendency is caused by the weak horizontal advection term (in some areas near 10S, it is even slightly negative), particularly in the southern region of the Arabian Sea. This weak moistening is the reason behind the inability of the convection to move into the AS. Interestingly, the moistening is slightly stronger in the Northern AS, which is also reflected in the fact that from Day 0 to Day 4, the negative anomaly in the Northern AS vanishes, but the positive anomaly fails to penetrate into the Southern AS. Over the Indian peninsula, advection is negative and the "column-process" (dominated by vertical advection) is positive, and we see a net weak positive tendency. Finally, over the EIO, horizontal advection induces drying, and that eventually kills the convection in that region.
### Vertical structure
Having understood the column-integrated moisture budget, we now examine the vertical structure of important variables associated with the BSISO. From the discussion above, the critical difference between the propagating and non-propagating cases stems from their ability/inability to penetrate into the AS; thus, here we focus on the AS sector (60-72.5E). Figure 8 shows various terms of the moisture budget and a few other important variables on Day -8 of the propagating BSISO cases. We decomposed vertical advection into two parts including horizontal moisture convergence, as shown in Equation (5), as we want to categorically focus on the boundary layer moisture convergence, which is thought to be critical for northward propagation in many theories (Jiang et al., 2004; Bellon and Sobel, 2008), and used as a cornerstone in many modeling studies (Yang et al., 2019) and model validations (Neena et al., 2017). Moreover, the vertical structure enables us to examine whether the BSISO has a pronounced tilt, which is a characteristic of the MJO (Adames and Wallace, 2015) and is implied by certain theories concerning the propagation of the BSISO (Jiang et al., 2004).
As seen in Figure 8, on Day -8, the positive moisture anomaly is located over the EIO, south of 10N, and a negative moisture anomaly is present to the north over the AS. Both anomalies reach up to 400 hPa, above which we don't see any major moisture signal. One should note that the strongest moisture anomaly signal is not in the boundary layer, but just above it in the free troposphere. As expected, the positive moisture tendency is in front of the positive moisture anomaly, and interestingly, it is also strongest just above the boundary layer, in the free troposphere between 850-600 hPa. As noted in the previous section, the main contributor to the tendency is horizontal advection, while the "column-process" acts against it to reduce the moistening. The main contributor to the "column-process" is vertical advection, which is stronger than the opposing apparent moisture sink. Interestingly, horizontal advection and the "column-process" are equally strong in the boundary layer and cancel each other, while in the free troposphere, horizontal advection is stronger, so the net tendency is positive. Vertical velocity strongly aligns with vertical advection, indicating that the vertical advection is determined by the anomalous subsidence over the AS and ascent over the EIO. Now, to examine the role of horizontal moisture convergence, we look at the decomposition of vertical advection. In the boundary layer, in front of the convection (positive moisture anomaly), we see very strong boundary layer moisture divergence. So, boundary layer convergence cannot play a significant role in northward propagation, as is assumed in many of the theoretical and modeling studies cited above. Above the boundary layer, there is a small positive contribution of horizontal moisture convergence, but it is opposed by strong vertical moisture flux convergence. Thus, unlike the MJO, a tilt is clearly absent here, and contrary to expectations from Jiang et al. (2004), the moisture anomaly as well as the vertical velocity have a more or less upright structure in the BSISO.
### Process of moistening
To identify the specific processes responsible for the anomalous horizontal advection that causes the northward movement of convection, we decompose it into several terms consisting of BSISO-scale and background state wind and moisture components,
\[(\mathbf{V}.\nabla_{h}q)^{\prime}\approx(\mathbf{V}^{\prime}.\nabla_{h}\bar{q})^{\prime}+(\bar{\mathbf{V}}.\nabla_{h}q^{\prime})^{\prime}+(\mathbf{V}^{\prime}.\nabla_{h}q^{\prime})^{\prime}, \tag{6}\]
where prime means the BSISO-scale perturbation (25-80 day filtered anomaly), and bar refers to the seasonal background (mean and first 3 harmonics). Though this background is not constant, it is slowly evolving. Of course, there are contributions from other timescales, but they are much smaller than the terms shown, so Equation (6) is a very good first-order approximation. In fact, even the last term in the RHS is much smaller than the first two terms, and we have observed that these two terms together capture most of the BSISO advection anomaly. Physically, the first term in the RHS should be understood as the background moisture advection by the anomalous BSISO wind, and the second term should be understood as the anomalous BSISO moisture advection by the mean monsoon background wind.
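A minimal sketch of this decomposition (again our illustration; the field names and the harmonic construction of the background are assumptions consistent with the definition above) might look as follows:

```python
import numpy as np

def seasonal_background(clim, n_harmonics=3):
    """Slowly evolving background: climatological mean plus the first
    few annual harmonics, computed along the time axis (axis 0) of a
    daily climatology for one calendar year."""
    coeffs = np.fft.rfft(clim, axis=0)
    keep = np.zeros_like(coeffs)
    keep[:n_harmonics + 1] = coeffs[:n_harmonics + 1]  # mean + harmonics
    return np.fft.irfft(keep, n=clim.shape[0], axis=0)

def advection_terms(u_a, v_a, q_a, u_b, v_b, q_b, dx, dy):
    """Leading terms of Equation 6 at a single level: *_a are 25-80 day
    anomalies, *_b the seasonal background; returns the moistening
    contributions (note the leading minus signs from the budget)."""
    ddx = lambda s: np.gradient(s, dx, axis=-1)
    ddy = lambda s: np.gradient(s, dy, axis=-2)
    # anomalous BSISO wind acting on the background moisture gradient
    eddy_wind = -(u_a * ddx(q_b) + v_a * ddy(q_b))
    # background monsoon wind acting on the anomalous moisture gradient
    mean_wind = -(u_b * ddx(q_a) + v_b * ddy(q_a))
    # eddy-eddy term, small in practice
    eddy_eddy = -(u_a * ddx(q_a) + v_a * ddy(q_a))
    return eddy_wind, mean_wind, eddy_eddy
```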
In the context of moistening in the lower troposphere, we focus on one level (namely 700 hPa). We have chosen this level as this is where both moisture tendency and moisture advection are strong, as seen in Figure 8. We also confirmed that the column-integrated version of this decomposition paints a very similar picture. In Figure 9 we study the AS sector (the region where northward propagation of convection advances rapidly) on Day -8 for the propagating cases. For moistening over the Indian landmass to the north of 20N, i.e., the second stage of northward movement of the BSISO, we focus on Day 8 in Figure 10. Finally, to understand the reason for weak advection in the AS in the non-propagating cases, we present Day 0 of the non-propagating composites in Figure 11.
From Figure 9, it is clear that on Day -8 of the propagating-cases composite, the background moisture advection by the BSISO winds plays a dominant role in the total horizontal advection over the AS sector; thus this term is primarily responsible for the northward movement of convection over the AS. The second term, that is, the anomalous BSISO moisture advection by the background monsoon winds, has a small contribution to the west of 70E, but it has a negative contribution along India's west coast and peninsular region. Overall, the former is much larger along the west coast, so in total, the entire region over the AS has a positive signal. In the BOB, both terms have weak contributions, and over the peninsular region advection of the background moisture by BSISO winds dominates, so we have a net negative anomaly when the terms are added together. Further north, as seen in Figure 10, the moistening north of 20N is again dominated by the background moisture advection by the BSISO wind anomalies. Advection of anomalous BSISO moisture by background monsoon winds tries to dry the northwestern desert region north of 20N, but it can't overcome the strong moistening by eddy advection of background moisture. The story is different in the peninsular Indian region and the BOB; here, background wind advection of BSISO moisture anomalies induces strong moistening and dominates the total moisture advection.
Finally, for the non-propagating BSISO cases, as seen in Figure 11, we have net negative moisture advection in the southern AS sector, from the west coast of India to 65E, which as discussed before is the reason behind the failure of the non-propagating cases to effectively penetrate into the AS. In the Northern AS, moisture advection is very weak and close to zero. In fact, the advection of background
moisture by anomalous BSISO winds is much weaker and limited (close to the coast, east of 70E) compared to the propagating cases, as seen in Figure 9. Further, the advection of BSISO moisture anomalies is slightly stronger than the propagating cases and essentially offsets any moistening by the advection of background moisture. In fact, this induces a small net negative moisture advection in the Southern AS. So, overall, comparing Figures 9 and 11, the primary cause behind the failure of non-propagating cases to penetrate into the AS is weaker and limited advection of background moisture by BSISO winds. This is particularly true west of 70E where we see almost no moistening by this term, while for the propagating cases, there are considerable positive contributions all over the Southern AS.
To demonstrate how the BSISO wind anomalies advect background moisture and monsoon winds advect BSISO moisture anomalies in their respective places of dominance, we plot 700 hPa background moisture anomaly along with 700 hPa BSISO wind anomalies as well as 700 hPa BSISO moisture anomalies with background wind in Figures 12, 13 and 14 on Day -8, Day 8 for the propagating cases and Day 0 for the non-propagating case.
In the first stage, i.e., when convection enters the AS, on Day -8 for the propagating cases (Figure 12), we see very strong easterlies from the equatorial region up to 15N, covering the whole AS and peninsular India. These are associated with the off-equatorial suppressed BSISO convection and can be thought of as a part of the Rossby component of a Gill-type response. These easterlies act upon the sharp zonally oriented gradient of background moisture over the AS and moisten the southern AS region. Interestingly, over the BOB, the gradient of background moisture is more meridionally oriented, but the wind anomalies are north-easterly, so the advection of moisture is much weaker; thus the BSISO moisture anomaly (convection) moves much quicker over the AS than the BOB. This, as mentioned, establishes the characteristic NW-SE tilted convection band that is evident from Day 0 to Day 8. On the other hand, the negative BSISO moisture anomaly associated with the suppressed convection is stronger over the AS than peninsular India, so background westerly monsoon winds cause dry advection near the coast and over peninsular India. On Day -4 (not shown), we have equally strong easterlies between 5N and 20N, as, along with the suppressed convection of the BSISO, the emerging convection over the EIO induces a new modified Gill-type response with an equatorial Rossby signal in the Indian Ocean. This signal persists up to Day 0 and continues to moisten the AS.
In the second stage, i.e., northward movement over the Indian landmass, on Day 8 (Figure 13), we see that the new Rossby gyre associated with the enhanced convection has a clear tilt from North-West to South-East (which is first visible on Day 4), and a well-formed vortex can be seen over the AS. Associated with this gyre, the wind anomalies over India are south-easterly and aligned with the background moisture gradient (the background moisture decreases from the head BOB-Bangladesh-Myanmar region towards the desert region in the North-West of India). This wind taps into the moisture gradient and advects moisture from the BOB region to India north of 20N. Further, on Day 8, the background monsoon wind, which is south-easterly in the BOB, acts upon the anomalous moisture gradient in the same direction and moistens the Bay; thus the moisture anomaly and convection move further northward towards the North Bay of Bengal. Similarly, westerly monsoon winds advect anomalous moisture from the AS towards peninsular India. Note that, by Day 8, as the Rossby gyre is tilted and moves northward, the anomalous winds over the EIO as well as the southern AS become westerly, and these act against the background moisture gradient to dry the region. As a result, the whole tilted band of anomalous moisture (convection) moves further north.
Overall, for northward propagation, various moistening processes play a role at different stages of BSISO and at different locations. First, when the BSISO convection signal starts over EIO, anomalous easterlies advect background moisture into the AS and push convection northward into the AS. Then, the Rossby gyre associated with the BSISO tilts from the North-West to the South-East, and the anomalous south-easterly winds tap the existing background moisture gradient to moisten the lower troposphere above landmass beyond 20N, and thus the convection moves northward into the Indian land region. During this stage, when the BSISO convection has moved northward deep into the AS (but not so much in the BOB), the background monsoon winds advect the moisture into the BOB,
thus the convection moves northward towards the northern BOB.
Given these observations, we try to settle the debate in the literature about the process of moistening regarding the BSISO in South Asia. As noted by Kikuchi (2021), studies have claimed different terms to be dominant, and there is no consensus about which processes play the most significant role. As suggested by Kikuchi (2021), these discrepancies arose because studies averaged the terms over different regions, whereas, as we have seen, differing processes are in action in various regions and stages of the BSISO. While Jiang et al. (2018) showed the dominant term to be the background moisture advection by BSISO winds, Wang and Li (2020) claimed that background winds advecting BSISO moisture anomalies are more important. As seen in the previous discussion, \((\mathbf{V}^{\prime}.\nabla_{h}\bar{q})^{\prime}\) is dominant over the AS, but \((\bar{\mathbf{V}}.\nabla_{h}q^{\prime})^{\prime}\) is dominant over the BOB, and also, it picks up slightly later than the moistening of the AS. As Jiang et al. (2018) took a large box comprising both the AS and BOB, their result is dominated by the process that moistens the AS. Also, by focusing on a particular day, they missed different processes of moistening that are important during different stages of the BSISO. On the other hand, Wang and Li (2020) averaged across the BOB and claimed the dominance of the background wind advection term. To the best of our knowledge, the only study that showed the importance of both terms is Adames et al. (2016), though they suggested that the BSISO moisture anomaly advection by background flow only causes eastward movement, and that it is dominant near the western north Pacific region.
Equally important is our finding that the moistening proceeds in stages, and in fact, the moistening over peninsular India is quite distinct. It was assumed that the moistening over the AS happens due to advection by the BSISO easterlies, and that land also gets moistened by the same process (Adames et al., 2016), but that is not the case. In fact, the anomalous moisture advection is negative at this phase over peninsular India, which gets moistened due to the stronger "column-process". Moreover, to the best of our knowledge, no study in the context of the BSISO has identified the moistening process over the land region north of 20N, though we see that convection reaches as far north as 30N. Some studies (Prasanna and Annamalai, 2012; Pillai and Sahai, 2014) in the context of active/break cycles of the monsoon identified anomalous moisture advection over the land region, but they didn't pinpoint how the anomalous moistening process occurs, except for the drying (break) case, where they speculated that dry air advection from the desert region might play a crucial role. As discussed earlier, we have clearly demonstrated that there exists a strong north-westward background moisture gradient running from the moisture-rich north BOB towards the desert region of northwest India, and before the enhanced (suppressed) convection phase of the BSISO reaches there, strong anomalous South-Easterlies (North-Westerlies) moisten (dry) the landmass by advection. These South-Easterlies (North-Westerlies) are associated with the modified Rossby response to the enhanced (suppressed) convection, which is tilted toward the Northern AS in the northwest. As previous studies didn't focus on the moistening beyond 20N, they also didn't ask the question as to why and how the Rossby response gets tilted to generate the South-Easterlies (North-Westerlies) which result in moistening (drying) over that region. In the next section, we will try to solve this puzzle.
To understand why horizontal moisture advection is weak for non-propagating cases, we examine Figure 14. Here too, we clearly see the Rossby gyre associated with the Gill-type response and prominent easterlies, but it is comparatively weaker than the propagating cases, and limited to the East of 70E. Even near 70E, the winds turn at the edge of the gyre, so they are more north-easterly than easterly. These wind anomalies can't successfully tap the moisture gradient present in the AS, and fail to moisten the region. Moreover, due to the lack of strong moistening before Day 0, the negative moisture anomaly of the dry cycle over AS is quite strong, and the background westerlies act upon that to give rise to strong negative advection near the coast. Over the EIO, background westerlies work against the moisture gradient associated with the nascent convection to dry the region.
Thus, in contrast to the suggestion by Jiang et al. (2018), strong easterlies over India don't guarantee robust northward propagation. In fact, we see the critical condition is the extent of these easterlies, which need to extend far beyond 70E to amply moisten the AS. Why is the strong moistening of the AS critical for the BSISO to reach further north into the South Asian land region? This question takes us back to the previous question of the sudden tilt in the Rossby response (as seen between Day 0 and Day 8), as we have seen that the South-easterlies associated with the tilted Rossby response are the reason behind the moistening of most of the land region. So, the question boils down to why we don't get the tilted Rossby structure in the non-propagating BSISO.
## 5 Vorticity budget
Having examined the moistening processes, we now focus on the BSISO circulation. During the initial stage of the propagating cases, when the BSISO appears over the EIO, easterlies associated with the previous dry cycle and the new enhanced convection moisten the AS, and thus the convection moves into the AS. Once the convection enters deep into the AS (but not that far into the BOB), the slanted structure of convection gets established, and the BOB moistening by background south-westerlies begins. But the question remains: how does the Rossby gyre (that was initially established as a part of the modified Gill-type response associated with equatorial convection) tilt to the North-West to generate the South-Easterlies over the land region, which in turn moisten the vast expanse of land in India? The other relevant question is, for the non-propagating cases, why does the Rossby gyre not show a North-West tilt, and thus not moisten the land region over India? To understand this, here we appeal to vorticity budget analysis.

Figure 8: Pressure-latitude profile of anomalous moisture budget terms in Equation 4 (scaled by the latent heat of vaporization \(L\)), components of the anomalous vertical advection decomposition as shown in Equation 5 (scaled by the latent heat of vaporization \(L\)), and anomalies of a few other important variables averaged over the target region of the Arabian Sea (60\({}^{\circ}\)-72.5\({}^{\circ}\)E) on Day -8 for propagating cases. The upper panel shows (from left): specific humidity, moisture tendency, the sum of vertical moisture advection and apparent moisture sink, and horizontal moisture advection. The middle panel shows (from left): vorticity, horizontal moisture convergence, vertical moisture flux convergence, and vertical moisture advection. The lower panel shows (from left): apparent moisture sink, divergence, and vertical velocity in pressure coordinates. The moisture tendency contour is overlaid on the moisture budget terms. The unit for specific humidity (moisture) is J kg\({}^{-1}\) and for all other terms is W kg\({}^{-1}\).

Figure 9: Anomalous horizontal advection term and its linearly decomposed primary contributor terms and their combination (all scaled by \(L\)) at 700 hPa, as shown in Equation 6, for Day -8 of the composite of propagating cases. Units of terms are W kg\({}^{-1}\). The 700 hPa wind anomalies are overlaid for reference. Wind vectors shown are statistically significant at the 95% confidence level.

Figure 10: Same as Figure 9, but for Day 8.

Figure 11: Same as Figure 9, but for Day 0 of the non-propagating cases.

Figure 12: Background specific humidity (g kg\({}^{-1}\)) and 25-80 day filtered wind anomalies at 700 hPa on Day -8 of the propagating composite. Wind vectors shown are statistically significant at the 95% confidence level.

Figure 13: Same as Figure 12, but for Day 8.

Figure 14: Same as Figure 12, but for Day 0 of the non-propagating composite.

The column-integrated version of the relevant equation reads (Wang and Chen, 2017),
\[\langle\frac{\partial\zeta}{\partial t}\rangle^{\prime}=\langle-\omega\frac{\partial\zeta}{\partial p}\rangle^{\prime}+\langle-\mathbf{V}.\nabla_{h}\zeta\rangle^{\prime}+\langle-v\frac{\partial f}{\partial y}\rangle^{\prime}+\langle-(\zeta+f)D\rangle^{\prime}+\langle T\rangle^{\prime}+\text{residual}, \tag{7}\]

where \(\zeta=(\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y})\) and \(D=(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y})\) are the relative vorticity and divergence, respectively. \(\mathbf{V}=u\mathbf{i}+v\mathbf{j}\) is the horizontal wind, \(\nabla_{h}=\mathbf{i}(\frac{\partial}{\partial x})+\mathbf{j}(\frac{\partial}{\partial y})\) is the horizontal gradient operator, \(f\) is the Coriolis parameter, and \(\omega\) is the vertical velocity in pressure coordinates. Prime denotes a 25-80 day anomaly as defined earlier. \(T\) is given by \((\frac{\partial\omega}{\partial y})(\frac{\partial u}{\partial p})-(\frac{\partial\omega}{\partial x})(\frac{\partial v}{\partial p})\), which is the tilting term. In this analysis, we separately show the first and second terms of the tilting term, as there are conflicting views regarding which term is important in the BSISO. Here, we call the first term \(T_{1}\) and the second term \(T_{2}\), so \(T^{\prime}\) reads \((T_{1}^{\prime}-T_{2}^{\prime})\). \([-(\zeta+f)D]\) represents the stretching term, \(\frac{\partial\zeta}{\partial t}\) is the local tendency of the relative vorticity, \((-\mathbf{V}.\nabla_{h}\zeta)\) and \((-\omega\frac{\partial\zeta}{\partial p})\) represent the horizontal and vertical advection of relative vorticity, respectively, and \((-v\frac{\partial f}{\partial y})\) is the vorticity generation due to the \(\beta\) effect.
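For concreteness, the sketch below (our illustration; names and the uniform-spacing assumption are ours) evaluates the pointwise budget terms, keeping the two parts of the tilting term separate as in the analysis that follows:

```python
import numpy as np

def vorticity_budget_terms(u, v, omega, f, dx, dy, dp):
    """Selected terms of the vorticity budget (Equation 7) on a
    (lev, lat, lon) grid with uniform spacings dx, dy (m) and dp (Pa);
    f is the Coriolis parameter on the (lat, lon) grid. Filtering and
    vertical integration are applied afterward, as in the text."""
    zeta = np.gradient(v, dx, axis=2) - np.gradient(u, dy, axis=1)
    div = np.gradient(u, dx, axis=2) + np.gradient(v, dy, axis=1)

    stretching = -(zeta + f) * div               # -(zeta + f) D
    beta_term = -v * np.gradient(f, dy, axis=0)  # -v df/dy

    # tilting term T = T1 - T2
    t1 = np.gradient(omega, dy, axis=1) * np.gradient(u, dp, axis=0)
    t2 = np.gradient(omega, dx, axis=2) * np.gradient(v, dp, axis=0)
    return zeta, stretching, beta_term, t1, t2
```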
Figure 15 shows the terms comprising the lower-tropospheric vorticity budget and their combinations on Day 0 of the propagating cases. We chose Day 0 because, from Day 0 to Day 4, we see the abrupt tilting of the Rossby gyre towards the North-West (associated with which is a well-formed vortex over the AS), which was zonally oriented on Day 0 (Figure 1). In other words, positive vorticity anomalies traveled northward much faster in the AS than in the BOB. In Figure 15, we see the vortex of positive vorticity (associated with the Rossby response to the equatorial convection) up to 15N, which is zonally oriented as expected. The tendency has a clear NW-SE tilt, which is also expected as discussed above. Clearly, the prime contributor to this NW-SE tilted tendency is \(T_{1}^{\prime}\) (the component of the tilting term associated with the meridional gradient of vertical velocity). The tendency as well as \(T_{1}^{\prime}\) are particularly strong over the AS, compared to the BOB, which causes the generation of the cyclonic vortex over the AS on Day 4 and Day 8. Interestingly, quite similar to the moisture budget, \(T_{1}^{\prime}\) is the main contributor to the tendency, but it is not the largest term. The largest terms in this budget are the stretching and the horizontal advection, and they mostly cancel each other. We have added the stretching, horizontal advection, and \(\beta\) term (advection of planetary vorticity); their combination is close to zero over the AS, where the tendency is the strongest. The vertical advection term (not shown) is negligible over the AS and BOB but has a small positive contribution over the land. As seen, \(-T_{2}^{\prime}\) is also negligible, but we show it here as it was suspected to be important in BSISO propagation (Dixit and Srinivasan, 2011; Karmakar et al., 2022). As found by most of the vorticity budget studies, the residual (not shown) is non-negligible, but it is weak and negative over the region of strongest positive tendency over the AS and BOB, so it doesn't jeopardize our understanding. Above all, the contribution from all the terms except \(T_{1}^{\prime}\) is shown in the bottom right panel of Figure 15 (it includes the terms not explicitly shown in the figure), and it mostly yields negative values over the region of positive tendency; but \(T_{1}^{\prime}\) is much larger than the negative contribution from all other terms, thus resulting in a positive vorticity tendency. In essence, we can conclude that \(T_{1}^{\prime}\) is the term that dictates the NW-SE slanting of the Rossby gyre as seen from Day 0 of the propagating composite. Near the equator, horizontal advection contributes to a negative tendency, so the whole gyre moves north, of course with the characteristic tilt explained above. A similar process also holds on Day 4 (not shown), which tilts the gyre even more and generates stronger South-easterlies over the land region. The importance of the tilting term for propagation has been previously noted by a couple of recent studies (Li et al., 2021; Karmakar et al., 2022).
To understand why the Rossby gyre doesn't tilt and move north into the AS for the non-propagating cases, we focus on the Day 4 vorticity budget shown in Figure 16. We chose this day because, on Day 4, a clear Rossby signal is evident over the EIO and Southern India, but it fails to propagate and weakens by Day 8 (Figure 2). As seen in Figure 16, \(T_{1}^{\prime}\) is almost absent over the AS, except for a small positive patch below 15N, and the tendency term also reflects a similar pattern. Cancellation between stretching and horizontal advection is also present as in the propagating cases, and when added with the \(\beta\) term, their cumulative contribution over the AS is close to zero. Comparing with the propagating cases, we can conclude that the vortex doesn't propagate into the AS, and fails to generate the NW-SE tilted Rossby gyre, due to a non-existent \(T_{1}^{\prime}\) term in the Northern AS. Over the Indian landmass, we see a contribution from \(T_{1}^{\prime}\), but it is not strong enough to counteract the opposing effects of other terms, though it does generate a small positive tendency; thus the gyre moves slightly north over land (Figure 16), but it can't generate the South-easterly wind as the gyre doesn't get tilted. Near the equator, horizontal advection primarily contributes to the negative tendency, weakening the cyclonic vortex.
### Process of tilting
Finally, the question boils down to why a strong tilting term (here, \(T_{1}^{\prime}\)) gets generated over the Northern AS region for the propagating cases once the Rossby signal associated with the nascent convection in the EIO region gets firmly established, while this doesn't happen for the non-propagating cases. To understand this, similar to Equation 6, we break \(T_{1}^{\prime}\) into components comprising background and BSISO-related anomaly fields. Here, we only show \((\frac{\partial\omega^{\prime}}{\partial y})(\frac{\partial\bar{u}}{\partial p})\), which is the dominant term controlling \(T_{1}^{\prime}\), as seen in Figure 17. Physically, this term comprises the meridional gradient of anomalous vertical velocity and the vertical shear of the background zonal wind.
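Written out in full (our sketch of the decomposition the text refers to, with the outer primes on the products suppressed for readability), the linearization parallels Equation 6,

\[T_{1}^{\prime}\ \approx\ \frac{\partial\omega^{\prime}}{\partial y}\frac{\partial\bar{u}}{\partial p}+\frac{\partial\bar{\omega}}{\partial y}\frac{\partial u^{\prime}}{\partial p}+\frac{\partial\omega^{\prime}}{\partial y}\frac{\partial u^{\prime}}{\partial p},\]

with the first term on the right being the dominant contribution, as noted above.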
To investigate the \(T_{1}^{\prime}\) term, we again focus on a single level (700 hPa), which is a representative level for the lower free troposphere, and was also the level chosen for understanding the moistening process. In Figures 17(a) and 17(b), we have shown \(T_{1}^{\prime}\), its components, and other relevant fields at 700 hPa. Looking at Figure 17, we see most of \(T_{1}^{\prime}\) is captured by \((\frac{\partial\omega^{\prime}}{\partial y})(\frac{\partial\bar{u}}{\partial p})\) for both the propagating and non-propagating cases, so we separately look at the vertical shear of the background zonal wind and the meridional gradient of anomalous vertical velocity to understand the processes involved. The vertical wind shear is very similar for both cases, so it is not the reason behind the difference between the propagating and non-propagating cases. Clearly, the difference arises from \((\frac{\partial\omega^{\prime}}{\partial y})\), which takes the shape of the \((\frac{\partial\omega^{\prime}}{\partial y})(\frac{\partial\bar{u}}{\partial p})\) and \(T_{1}^{\prime}\) terms for both categories. Just like \(T_{1}^{\prime}\), this term is positive north of the convection, and very strong over the Northern AS for the propagating cases, while for the non-propagating cases, it is limited to the land and the BOB, and essentially non-existent over the AS. The vertical shear of the background zonal wind is also strongest over the Northern AS, so it amplifies \(T_{1}^{\prime}\) over the region even more. \((\frac{\partial\omega^{\prime}}{\partial y})\) is also strong over the BOB, but weaker background shear makes the tilting comparatively weak for both cases.
To explain why \((\frac{\partial\omega^{\prime}}{\partial y})\) is strong over the northern AS for propagating cases, but not so for the non-propagating cases, we shift our focus to the orientation of \(\omega^{\prime}\). For the propagating cases, by Day 0, anomalous negative values associated with ascending motion engulf all the EIO as well as the Southern AS, while for the non-propagating cases, even on Day 4, this is limited to the EIO, and to some extent peninsular India, with much weaker magnitude. The distribution of \(\omega^{\prime}\) is highly collocated with the distribution of column-integrated positive moisture anomaly, as high values of anomalous column-moisture cause convection that manifests in ascending motion. Though \(\omega^{\prime}\) and column-integrated moisture anomaly for propagating cases are highly coherent over the EIO and Southern AS, in the Northern AS, moisture has a slight lead, which might help in preconditioning the region for convection.
To the north of the area of ascending motion, we find an area of positive vertical velocity anomaly (descending motion) for both categories. This area has a similar orientation for both categories except over the AS. For both categories, it extends far south over the BOB, almost up to 10N, but over the land, it is confined within 15-20N, so it also has a similar tilt. Over the AS, for the propagating cases, this is confined near 20N, but for the non-propagating cases, weak anomalous descending motion is visible up to the southern AS. The descending motion for both categories is associated with the suppressed BSISO convection and accompanied by negative or near-zero moisture anomalies. Thus, a strong gradient of vertical velocity anomaly exists over the Northern AS region for the propagating cases, while it doesn't for the non-propagating cases.
Figure 15: 600-850 hPa (lower free-troposphere) integrated vorticity anomaly and dominant vorticity budget terms as shown in Equation 7, and their combinations, on Day 0 of the propagating composite. The unit of the column-integrated vorticity is kg m\({}^{-2}\) s\({}^{-1}\) and of the budget terms is kg m\({}^{-2}\) s\({}^{-2}\).

Figure 16: Same as Figure 15, but for Day 4 of the non-propagating composite.

Figure 17: (a) Various important terms associated with anomalous tilting on Day 0 at 700 hPa for propagating cases. Clockwise (from top left): dominant part of the tilting term anomaly \(T_{1}^{\prime}\), its dominant linearized component, anomalous vertical velocity, vertical shear of background zonal wind, meridional gradient of anomalous vertical velocity at 700 hPa, and column-integrated specific humidity anomaly, for the propagating composite on Day 0. Units are s\({}^{-2}\), s\({}^{-2}\), Pa s\({}^{-1}\), m Pa\({}^{-1}\), Pa (m s)\({}^{-1}\), and J s\({}^{-2}\), respectively. (b) Same as (a), but for Day 4 of the non-propagating composite.
## 6 Conclusions
In this study, we have investigated the mechanism behind the northward propagation of the BSISO over South Asia using the "moisture mode" framework. We have identified two types of BSISO events: one propagates northward from the EIO to the South Asian landmass, while the other type doesn't. Comparing their propagation dynamics, we identified the critical mechanisms behind northward propagation. We confirm that both types of BSISO convection anomalies are generally collocated with the column-integrated moisture anomalies. A moisture budget analysis was performed to understand the evolution of anomalous moisture, which dictates the evolution of anomalous convection. Our results suggest that, for propagating cases, easterlies on the southern flank of the anticyclonic Rossby gyre associated with the previous cycle of suppressed convection, as well as the easterlies on the northern flank of the cyclonic Rossby gyre associated with the new area of enhanced convection over the EIO, engulf most of the AS region. These easterlies act along the background gradient of moisture and moisten the Southern AS region by advection. Over the BOB, the background moisture gradient is mostly meridional while the BSISO wind is north-easterly, thus the moistening due to advection is weaker. The "column-process" acts against this moistening, but the advection is stronger. At this stage, as we have stronger moistening over the AS, the convection quickly enters the AS from the EIO, but it takes more time to enter the BOB. Thus, we get the initial NW-SE tilted band of convection.
At this stage, as a tilted belt of strong convection is present behind the area of suppressed convection associated with subsidence, a tilted belt of meridional gradient of anomalous vertical velocity comes into being, from the Northern AS in the north-west to the Southern BOB in the south-east. The vertical easterly shear in the background monsoon wind acts upon this gradient to generate an NW-SE slanted vortex tilting term, which dominates the vorticity tendency and thus drives the vorticity anomaly. Thus, the cyclonic Rossby gyre associated with the enhanced BSISO convection gets a clear NW-SE tilt while moving northward from the EIO. So, the characteristic tilted vortex comes into being. On the northern flank of the tilted Rossby gyre, anomalous south-easterlies advect the background moisture from the moisture-rich head BOB region in the southeast towards the dry desert region in the northwest. Thus, again the anomalous BSISO wind taps the background moisture gradient and moistens the vast expanse of land north of 20N, and the convection jumps into the aforementioned region. On the southern flank of this Rossby gyre, westerlies advect dry air into the EIO, thus the convection dies down, and the entire band of convection appears to jump from the equatorial region to the off-equatorial region, above 10N. Over the BOB, the moistening process is different. Once the tilted structure of convection comes into being, the background south-easterly monsoon wind moistens the BOB and helps the convection to take place.
For the non-propagating BSISO cases, while the nascent convection starts to gain strength over the EIO, easterlies over the Arabian Sea are much weaker, thus they can't properly moisten the region over the southern AS to initiate strong convection. In the absence of strong convection behind the zone of subsidence associated with suppressed convection, the meridional gradient of vertical velocity is almost absent. Hence, in spite of the presence of strong easterly vertical shear of background zonal wind, a strong vortex tilting term doesn't get generated. As a result, the convection stalls over the EIO, and the westerlies in the southern flank of the Rossby gyre of the enhanced convective signal eventually kill the convection.
Overall, we can claim that the northward propagation of the BSISO over South Asia is a moisture mode acting under the influence of the background vertical shear of the zonal monsoon wind. It is a classic case of convectively coupled dynamics, where moisture and circulation influence each other to dictate the propagation of convection. While the vertical shear of the background zonal wind and the zonal gradient of background moisture over the AS are necessary conditions for northward propagation, they are not sufficient. The critical difference between the propagating and non-propagating cases arises from the strength and extent of the easterlies over the southern AS when new convection starts in the EIO, as moistening over the southern AS by these easterlies is critical for the propagation.
While previous studies highlighted the important role of vertical shear (Jiang et al., 2004; Li et al., 2021; Karmakar et al., 2022) as well as moisture advection (Adames et al., 2016; Jiang et al., 2018; Chen and Wang, 2021; Wang and Li, 2020), with uncertainty as to which process of moisture advection is dominant, these two mechanisms were thought of as either somewhat contradictory (Yang et al., 2019) or independent (Li et al., 2021; Wang and Sobel, 2022). Moreover, in the 'vertical shear mechanism' (Jiang et al., 2004; Li et al., 2021; Karmakar et al., 2022), the moistening process was dictated by boundary layer moisture convergence, while the studies that followed the 'moisture mode' framework (Adames et al., 2016; Jiang et al., 2018; Wang and Sobel, 2022), and understood the propagation through moisture advection, paid less attention to the vertical shear of the background wind and its role in tilting the Rossby gyre as a part of their mechanism. In this paper, we showed that though the vertical shear of the background wind is essential, boundary layer moisture convergence doesn't play a role in moistening the area north of existing convection to facilitate northward propagation. In fact, the strong moistening happens above the boundary layer, in the free troposphere, and it is dictated by moisture advection. On the other hand, moisture advection alone, without the role played by vertical shear, can generate only a very limited northward propagation, up to the AS, but it can't explain the moistening process beyond 20N. It is also unable to explain the observed tilt in the Rossby gyre. Here, we claim that vortex tilting and the moisture advection process can't be looked at separately; in fact, they work hand in hand to facilitate the northward propagation of the BSISO.
|
2309.13134 | An Alternative Approach to Computing $β(2k+1)$ | This paper presents a new approach to evaluating the special values of the
Dirichlet beta function, $\beta(2k+1)$, where $k$ is any nonnegative integer.
Our approach relies on some properties of the Euler numbers and polynomials,
and uses basic calculus and telescoping series. By a similar procedure, we also
yield an integral representation of $\beta(2k)$. The idea of our proof adapts
from a previous study by Ciaurri et al., where the authors introduced a new
proof of Euler's formula for $\zeta(2k)$. | Naomi Tanabe, Nawapan Wattanawanichkul | 2023-09-22T18:42:39Z | http://arxiv.org/abs/2309.13134v1 | # An alternative approach to computing \(\beta(2k+1)\)
###### Abstract.
This paper presents a new approach to evaluating the special values of the Dirichlet beta function, \(\beta(2k+1)\), where \(k\) is any nonnegative integer. Our approach relies on some properties of the Euler numbers and polynomials, and uses basic calculus and telescoping series. By a similar procedure, we also yield an integral representation of \(\beta(2k)\). The idea of our proof adapts from a previous study by Ciaurri et al., where the authors introduced a new proof of Euler's formula for \(\zeta(2k)\).
## 1. Introduction
It is well known that the value of the Riemann \(\zeta\)-function at a positive even integer \(2k\) can be expressed as
\[\zeta(2k)\ =\ \sum_{n=1}^{\infty}\frac{1}{n^{2k}}\ =\ \frac{(-1)^{k-1}2^{2k-1} \pi^{2k}}{(2k)!}B_{2k}, \tag{1.1}\]
where \(B_{k}\) is the \(k\)-th Bernoulli number. One of the classical proofs of this formula is attributed to Euler, which involves considering the expansion of \(\pi z\cot(\pi z)\) in two different ways. However, over time, numerous other proofs have been developed utilizing a variety of techniques and approaches, including notable examples such as [1], [3]-[9], [11], and [13]-[15]. The multitude of proofs reflects the fundamental importance of this formula and the richness of the mathematical concepts it connects.
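For instance, taking \(k=1\) in Equation (1.1) with \(B_{2}=1/6\) recovers the classical evaluation

\[\zeta(2)\ =\ \frac{(-1)^{0}2^{1}\pi^{2}}{2!}\cdot\frac{1}{6}\ =\ \frac{\pi^{2}}{6}.\]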
On the other hand, the Riemann \(\zeta\)-function has been generalized in many ways, including the Dirichlet \(L\)-functions. In a similar manner to Equation (1.1), formulas for the special values of the Dirichlet \(L\)-functions have been established as follows (for details see, for example, [10, Section 7-2, Corollary 2.10]).
**Theorem 1** ([10]).: _Let \(\chi\) be a primitive character of conductor \(N\) and \(k\) be a positive integer satisfying \(\chi(-1)=(-1)^{k}\). Then we have_
\[L(k,\chi)=(-1)^{k-1}\frac{\tau(\chi)}{2}\left(\frac{2\pi i}{N}\right)^{k}\frac {B_{k,\overline{\chi}}}{k!}, \tag{1.2}\]
_where \(B_{k,\overline{\chi}}\) is the generalized Bernoulli number associated with the conjugate of the character \(\chi\), and \(\tau(\chi)\) is the Gauss sum of the character defined as_
\[\tau(\chi)\ =\ \sum_{a=1}^{N}\chi(a)e^{\frac{2\pi ia}{N}}.\]
These formulas play a critical role in number theory, particularly in the study of primes in arithmetic progressions, and have many connections with various other areas of mathematics.

In this paper, we study the \(L\)-function associated with the primitive Dirichlet character \(\chi_{4}\) modulo \(4\), known as the Dirichlet beta function,

\[\beta(s)\ =\ L(s,\chi_{4})\ =\ \sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)^{s}},\]

and present an alternative approach to evaluating its special values \(\beta(2k+1)\) for nonnegative integers \(k\), relying only on basic calculus, telescoping series, and some properties of the Euler numbers and polynomials. More precisely, we prove the following theorem in Section 3, and, by a similar procedure, obtain an integral representation of \(\beta(2k)\) (Theorem 3) in Section 4.

**Theorem 2**.: _For any nonnegative integer \(k\), we have_

\[\beta(2k+1)\ =\ \frac{(-1)^{k+1}\pi^{2k+1}}{(2k+1)!\,2^{2k+1}}B_{2k+1,\chi_{4}},\]

_where \(B_{2k+1,\chi_{4}}\) is the generalized Bernoulli number associated with the primitive character modulo \(4\)._

## 2. Preliminaries

We first recall the Euler numbers and polynomials and collect the properties needed in our proofs.

**Definition 1**.: The \(k\)_-th Euler number \(E_{k}\)_ is defined by the generating function

\[\sum_{k=0}^{\infty}E_{k}\frac{t^{k}}{k!}\ =\ \frac{2e^{t}}{e^{2t}+1}. \tag{2.1}\]
By expanding the right-hand side of the equation above, the first few Euler numbers can be observed as
\[E_{0}=1,\ E_{1}=0,\ E_{2}=-1,\ E_{3}=0,\ E_{4}=5,\ \ldots.\]
**Definition 2**.: The \(k\)_-th Euler polynomial \(E_{k}(x)\)_ is defined by the generating function
\[\sum_{k=0}^{\infty}E_{k}(x)\frac{t^{k}}{k!}\ =\ \frac{2e^{xt}}{e^{t}+1},\ \text{ where}\ |t|\leq\pi,x\in\mathbb{R}. \tag{2.2}\]
Again, by expanding the right-hand side of the above equation, we see that the first few Euler polynomials are
\[E_{0}(x)=1,\ E_{1}(x)=x-\frac{1}{2},\ E_{2}(x)=x^{2}-x,\ E_{3}(x)=x^{3}-\frac{3}{2}x ^{2}+\frac{1}{4},\ \ldots.\]
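These objects are available in computer algebra systems; the short check below (our illustration, using SymPy's built-in `euler`, which implements both the Euler numbers and the Euler polynomials) reproduces the values above and spot-checks identities (1.1) and (1.2) of Proposition 1 below.

```python
from sympy import euler, symbols, Rational, simplify

x = symbols('x')

# first few Euler numbers (Definition 1) and polynomials (Definition 2)
print([euler(k) for k in range(5)])  # [1, 0, -1, 0, 5]
print(euler(3, x))                   # x**3 - 3*x**2/2 + 1/4

# spot-check identities (1.1) and (1.2) of Proposition 1
for k in range(8):
    assert simplify(euler(k, 1 - x) - (-1) ** k * euler(k, x)) == 0
    assert euler(k) == 2 ** k * euler(k, Rational(1, 2))
```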
Using Definitions 1 and 2, we observe the following noteworthy proposition.
**Proposition 1**.: _For the Euler polynomials and \(k\in\mathbb{Z}_{\geq 0}\), the followings are true:_
1. \(E_{k}(1-x)=(-1)^{k}E_{k}(x)\) _and, in particular,_ \(E_{2k+1}\left(1/2\right)=0,\)__
2. \(E_{k}=2^{k}E_{k}\left(1/2\right),\)__
3. \(E_{k}(x+1)+E_{k}(x)=2x^{k},\)__
4. \(E_{2k}(1)=E_{2k}(0)=0\) _for_ \(k\geq 1\),__
5. \(E_{0}^{\prime}(x)=0\) _and_ \(E_{k}^{\prime}(x)=kE_{k-1}(x)\) _when_ \(k\geq 1,\)__
6. \(E_{2k}\left(1/2\right)=-\dfrac{B_{2k+1,\chi_{4}}}{(2k+1)2^{2k-1}}\)_, where_ \(B_{2k+1,\chi_{4}}\) _is the generalized Bernoulli number associated with the primitive character modulo 4._
Proof.: To prove the first statement, we substitute \(x\) with \(1-x\) in Equation (2.2) and get
\[\sum_{k=0}^{\infty}E_{k}(1-x)\frac{t^{k}}{k!}\ =\ \frac{2e^{(1-x)t}}{e^{t}+1}\ =\ \frac{2e^{-xt}}{e^{-t}+1}\ =\ \sum_{k=0}^{\infty}(-1)^{k}E_{k}(x)\frac{t^{k}}{k!}.\]
Comparing the coefficients of the \(t^{k}\) term on both sides of the equation gives us the desired result. The second part in (1.1) then follows by evaluating the equation at \(x=1/2\) when \(k\) is an odd integer. The second and third statements are obtained similarly, by evaluating Equation (2.2) at \(x=1/2\) and at \(x+1\), respectively.
The statement (1.4) follows from substituting \(x=0\) into the equations in (1.1) and (1.3), which yields \(E_{2k}(1)=E_{2k}(0)\) and \(E_{2k}(1)+E_{2k}(0)=0\), respectively.
To verify (1.5), we differentiate Equation (2.2) with respect to \(x\);
\[\sum_{k=0}^{\infty}E_{k}^{\prime}(x)\frac{t^{k}}{k!}\ =\ \frac{d}{dx}\frac{2e^{xt}}{e^{t}+1}\ =\ t\cdot\frac{2e^{xt}}{e^{t}+1}.\]
The right-hand side of the equation is then the product of \(t\) and the generating function of the the Euler polynomials. Therefore, we see that
\[\sum_{k=0}^{\infty}E_{k}^{\prime}(x)\frac{t^{k}}{k!}\ =\ \sum_{k=0}^{\infty}E_{k}(x) \frac{t^{k+1}}{k!}\ =\ \sum_{k=1}^{\infty}E_{k-1}(x)\frac{t^{k}}{(k-1)!}, \tag{2.3}\]
which means \(E_{k}^{\prime}(x)=kE_{k-1}(x)\) when \(k\geq 1\). Since the constant term of the right-hand side of Equation (2.3) is \(0\), we conclude that \(E_{0}^{\prime}(x)=0\).
Lastly, for (1.6), we recall the relation between the \(k\)-th generalized Bernoulli number \(B_{k,\chi}\) associated with \(\chi\) and the \(k\)-th Bernoulli polynomial \(B_{k}(x)\) given by
\[B_{k,\chi}\ =\ N^{k-1}\sum_{a=1}^{N}\chi(a)B_{k}\left(\frac{a}{N}\right).\]
Here, \(N\) is the conductor of the character \(\chi\). See, for example, [2, Section 4.3]. In particular, when \(\chi=\chi_{4}\),
\[B_{k,\chi_{4}}\ =\ 4^{k-1}\left(B_{k}\left(\frac{1}{4}\right)-B_{k}\left(\frac{3}{ 4}\right)\right). \tag{2.4}\]
Likewise, the Euler polynomials can be related to the Bernoulli polynomials as
\[E_{k-1}(x)\ =\ \frac{2^{k}}{k}\left(B_{k}\left(\frac{x+1}{2}\right)-B_{k}\left( \frac{x}{2}\right)\right),\]
(see [12] for details). In particular, when \(x=1/2\), we have
\[E_{k-1}\left(\frac{1}{2}\right)\ =\ \frac{2^{k}}{k}\left(B_{k}\left(\frac{3}{4} \right)-B_{k}\left(\frac{1}{4}\right)\right). \tag{2.5}\]
Comparing Equations (2.4) and (2.5) gives us
\[E_{k-1}\left(\frac{1}{2}\right)\ =\ \frac{2^{k}}{k}\cdot\frac{-B_{k,\chi_{4}}}{4 ^{k-1}}\ =\ -\frac{B_{k,\chi_{4}}}{2^{k-2}k}.\]
The proof is completed by replacing \(k\) with \(2k+1\).
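As a quick consistency check (our illustration, not part of the proof), the identity in (1.6) can be verified symbolically through Equations (2.4) and (2.5):

```python
from sympy import bernoulli, euler, Rational, simplify

for k in range(5):
    n = 2 * k + 1
    # B_{n, chi_4} via Equation (2.4)
    b_chi4 = 4 ** (n - 1) * (bernoulli(n, Rational(1, 4))
                             - bernoulli(n, Rational(3, 4)))
    lhs = euler(2 * k, Rational(1, 2))               # E_{2k}(1/2)
    rhs = -b_chi4 / (n * Rational(2) ** (2 * k - 1))
    assert simplify(lhs - rhs) == 0
```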
## 3. Computing \(\beta(2k+1)\)
In this section, we prove the formula for \(\beta(2k+1)\) as stated in Theorem 2.
Proof of Theorem 2.: We will use the following auxiliary integral
\[I(k,m)\ =\ \int_{0}^{1/2}E_{2k}(t)\sin((2m+1)\pi t)\ dt, \tag{3.1}\]
for integers \(k,m\geq 0\). For clarity, we split the proof into three main steps.
**1) Summing auxiliary functions.** First, we find the recurrence relation among the auxiliary functions \(I(k,m)\) and derive the closed form solution. We begin with the simplest case when \(k=0\). Using the fact that \(E_{0}(t)=1\) for any real \(t\), we have that
\[I(0,m)\ =\ \int_{0}^{1/2}\sin((2m+1)\pi t)\,dt\ =\ \frac{1}{(2m+1)\pi}. \tag{3.2}\]
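In detail, the antiderivative gives

\[I(0,m)\ =\ \left[-\frac{\cos((2m+1)\pi t)}{(2m+1)\pi}\right]_{t=0}^{t=1/2}\ =\ \frac{1-\cos\frac{(2m+1)\pi}{2}}{(2m+1)\pi}\ =\ \frac{1}{(2m+1)\pi},\]

since \(\cos\frac{(2m+1)\pi}{2}=0\) for every integer \(m\).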
For \(k\geq 1\), we integrate Equation (3.1) by parts and obtain
\[I(k,m) =-\left[E_{2k}(t)\frac{\cos((2m+1)\pi t)}{(2m+1)\pi}\right]_{t=0 }^{t=1/2}+\int_{0}^{1/2}E_{2k}^{\prime}(t)\frac{\cos((2m+1)\pi t)}{(2m+1)\pi}\,dt\] \[=\frac{1}{(2m+1)\pi}\int_{0}^{1/2}E_{2k}^{\prime}(t)\cos((2m+1) \pi t)\,dt,\]
where the last equality follows from (1.4).
Now, applying (1.5) and integrating by parts again, we get
\[I(k,m) =-\frac{2k(2k-1)}{(2m+1)^{2}\pi^{2}}\int_{0}^{1/2}E_{2k-2}(t)\sin ((2m+1)\pi t)\,dt\] \[=\frac{-2k(2k-1)}{(2m+1)^{2}\pi^{2}}I(k-1,m).\]
Applying this recurrence relation repeatedly, together with the value of \(I(0,m)\) from Equation (3.2), we obtain the closed form of our auxiliary functions
\[I(k,m) =\frac{-2k(2k-1)}{(2m+1)^{2}\pi^{2}}\cdot\frac{-(2k-2)(2k-3)}{(2m+1 )^{2}\pi^{2}}\cdots\frac{-2\cdot 1}{(2m+1)^{2}\pi^{2}}\cdot\frac{1}{(2m+1)\pi}\] \[=\frac{(-1)^{k}(2k)!}{(2m+1)^{2k+1}\pi^{2k+1}},\]
for any nonnegative integers \(k\) and \(m\). Multiplying each \(I(k,m)\) by \((-1)^{m}\) and summing up over nonnegative integers \(m\) relate \(I(k,m)\) to \(\beta(2k+1)\) as
\[\sum_{m=0}^{\infty}(-1)^{m}I(k,m)\ =\ \sum_{m=0}^{\infty}\frac{(-1)^{m}(-1)^{k} (2k)!}{(2m+1)^{2k+1}\pi^{2k+1}}\ =\ \frac{(-1)^{k}(2k)!}{\pi^{2k+1}}\beta(2k+1). \tag{3.3}\]
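This closed form is easy to sanity-check numerically; the sketch below (our illustration, not part of the proof) evaluates the integral in Equation (3.1) with SciPy quadrature and compares it against the formula:

```python
import math
from scipy.integrate import quad
from sympy import euler, symbols, lambdify

x = symbols('x')

def I(k, m):
    """Numerical value of the auxiliary integral in Equation (3.1)."""
    E2k = lambdify(x, euler(2 * k, x))
    value, _ = quad(lambda t: E2k(t) * math.sin((2 * m + 1) * math.pi * t),
                    0, 0.5)
    return value

# closed form (-1)^k (2k)! / ((2m+1) pi)^(2k+1)
for k in range(3):
    for m in range(4):
        closed = (-1) ** k * math.factorial(2 * k) \
                 / ((2 * m + 1) * math.pi) ** (2 * k + 1)
        assert abs(I(k, m) - closed) < 1e-7
```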
**2) Modifying auxiliary functions.** We now modify our auxiliary functions \(I(k,m)\) as follows:
\[I^{*}(k,m)\ :=\ \int_{0}^{1/2}E_{2k}^{*}(t)\sin((2m+1)\pi t)\,dt,\]
where
\[E_{2k}^{*}(t)\ :=\ E_{2k}(t)-\frac{E_{2k}}{2^{2k}}\sin(\pi t).\]
This can also be written as
\[I^{*}(k,m) =\int_{0}^{1/2}\left(E_{2k}(t)-\frac{E_{2k}}{2^{2k}}\sin(\pi t) \right)\sin((2m+1)\pi t)\,dt\] \[=I(k,m)-\int_{0}^{1/2}\frac{E_{2k}}{2^{2k}}\sin(\pi t)\sin((2m+1) \pi t)\,dt,\]
and therefore,
\[I(k,m)\ =\ I^{*}(k,m)+\frac{E_{2k}}{2^{2k}}\int_{0}^{1/2}\sin(\pi t)\sin((2m+1) \pi t)\,dt. \tag{3.4}\]
Furthermore, applying the following trigonometric identity
\[\sin(\alpha)\sin(\beta)\ =\ \frac{\cos(\alpha-\beta)-\cos(\alpha+\beta)}{2}\]
to the integrand in Equation (3.4) yields that
\[\frac{E_{2k}}{2^{2k}}\int_{0}^{1/2}\sin(\pi t)\sin((2m+1)\pi t) \,dt =\frac{E_{2k}}{2^{2k}}\int_{0}^{1/2}\frac{\cos(2m\pi t)-\cos((2m+2) \pi t)}{2}\,dt\] \[=\begin{cases}\frac{E_{2k}}{2^{2k+2}}&\text{ if }m=0,\\ 0&\text{ otherwise.}\end{cases}\]
We also note that (1.2) and (1.6) give
\[\frac{E_{2k}}{2^{2k+2}}\ =\ -\frac{B_{2k+1,\chi_{4}}}{(2k+1)2^{2k+1}}.\]
Hence, Equation (3.4) can be written as
\[I(k,m)\ =\ \begin{cases}I^{*}(k,m)-\dfrac{B_{2k+1,\chi_{4}}}{(2k+1)2^{2k+1}},& \text{if }m=0,\\ I^{*}(k,m),&\text{if }m\geq 1.\end{cases}\]
Thus,
\[\sum_{m=0}^{\infty}(-1)^{m}I(k,m) =\left(I^{*}(k,0)-\dfrac{B_{2k+1,\chi_{4}}}{(2k+1)2^{2k+1}} \right)+\sum_{m=1}^{\infty}(-1)^{m}I^{*}(k,m)\] \[=\sum_{m=0}^{\infty}(-1)^{m}I^{*}(k,m)-\dfrac{B_{2k+1,\chi_{4}}}{(2k+ 1)2^{2k+1}}. \tag{3.5}\]
Comparing this with Equation (3.3), it remains to simplify Equation (3.5) to obtain the desired result. Indeed, we will show that \(\sum_{m=0}^{\infty}(-1)^{m}I^{*}(k,m)=0\) in the next step.
**3) Computing telescoping series.** We now show that the infinite series \(\sum_{m=0}^{\infty}(-1)^{m}I^{*}(k,m)\), which is defined as \(\lim_{N\to\infty}\sum_{m=0}^{N}(-1)^{m}I^{*}(k,m)\), converges to \(0\) by using trigonometric identities and telescoping sums. Consider
\[\lim_{N\to\infty}\sum_{m=0}^{N}(-1)^{m}I^{*}(k,m)\] \[=\lim_{N\to\infty}\left(I^{*}(k,0)-I^{*}(k,1)+\cdots+(-1)^{N-1}I ^{*}(k,N-1)+(-1)^{N}I^{*}(k,N)\right)\] \[=\lim_{N\to\infty}\int_{0}^{1/2}\left(E_{2k}^{*}(t)\sin(\pi t)-E_ {2k}^{*}(t)\sin(3\pi t)+\cdots\right.\] \[\qquad+(-1)^{N-1}E_{2k}^{*}(t)\sin((2N-1)\pi t)+(-1)^{N}E_{2k}^{ *}(t)\sin((2N+1)\pi t)\right)dt.\]
Applying the following trigonometric identity

\[\sin((2m+1)x)\ =\ \dfrac{\cos((2m-1)x)-\cos((2m+3)x)}{2\sin(2x)},\]

which follows from the product-to-sum formula \(\cos A-\cos B=2\sin\frac{A+B}{2}\sin\frac{B-A}{2}\) with \(A=(2m-1)x\) and \(B=(2m+3)x\),
we obtain the telescoping series
\[\lim_{N\to\infty}\int_{0}^{1/2}\left(E_{2k}^{*}(t)\cdot\dfrac{ \cos(-\pi t)-\cos(3\pi t)}{2\sin(2\pi t)}-E_{2k}^{*}(t)\cdot\dfrac{\cos(\pi t )-\cos(5\pi t)}{2\sin(2\pi t)}\right.\] \[\qquad+E_{2k}^{*}(t)\cdot\dfrac{\cos(3\pi t)-\cos(7\pi t)}{2\sin( 2\pi t)}-E_{2k}^{*}(t)\cdot\dfrac{\cos(5\pi t)-\cos(9\pi t)}{2\sin(2\pi t)}+\cdots\] \[\qquad+(-1)^{N}E_{2k}^{*}(t)\cdot\dfrac{\cos((2N-1)\pi t)-\cos(( 2N+3)\pi t)}{2\sin(2\pi t)}\Big{)}\,dt. \tag{3.6}\]
To cancel out repetitive terms in Equation (3.6), we need to extend the function
\[f(t)\ =\ \dfrac{E_{2k}^{*}(t)}{\sin(2\pi t)},\qquad\text{ for }t\in(0,1/2),\]
to \(t=0\) and \(1/2\).
When \(t=0\), we note that \(E_{2k}^{*}(0)=E_{2k}(0)-\frac{E_{2k}}{2^{2k}}\cdot\sin(0)=0\) by (1.4). We then evaluate the limit of \(f(t)\) when \(t\) approaches \(0\) using L'Hopital's rule as follows
\[\lim_{t\to 0}\frac{E_{2k}(t)-\frac{E_{2k}}{2^{2k}}\sin(\pi t)}{\sin(2\pi t)}\ =\ \frac{2kE_{2k-1}(0)-\frac{E_{2k}}{2^{2k}}\pi}{2\pi},\]
which is some constant.
As for \(t=1/2\), notice that \(E_{2k}^{*}(1/2)=E_{2k}(1/2)-\frac{E_{2k}}{2^{2k}}\sin(\pi/2)=0\) by (1.2). Then the limit of \(f(t)\) as \(t\) approaches \(1/2\) can be evaluated as
\[\lim_{t\to 1/2}\frac{E_{2k}(t)-\frac{E_{2k}}{2^{2k}}\sin(\pi t)}{\sin(2\pi t)}\ =\ \frac{2k\cdot E_{2k-1}(1/2)-\frac{E_{2k}}{2^{2k}}\pi\cos(\pi/2)}{2\pi\cos(\pi)},\]
which equals \(0\) by using (1.1). Thus, \(f(t)\) is well-defined on \([0,1/2]\), and, hence, most of the terms in Equation (3.6) get cancelled. Moreover, since the first two terms \(f(t)\cos(-\pi t)\) and \(f(t)\cos(\pi t)\) are equal, we are left with
\[\sum_{m=0}^{\infty}(-1)^{m}I^{*}(k,m)\] \[=\lim_{N\to\infty}(-1)^{N-1}\int_{0}^{1/2}\frac{E_{2k}^{*}(t)}{2 \sin(2\pi t)}\left(\cos((2N+1)\pi t)-\cos((2N+3)\pi t)\right)\,dt\] \[=\lim_{N\to\infty}(-1)^{N-1}\int_{0}^{1/2}\frac{E_{2k}^{*}(t)}{2 \sin(2\pi t)}\left(-2\sin((2N+2)\pi t)\sin(-\pi t)\right)\,dt\] \[=\lim_{N\to\infty}(-1)^{N-1}\int_{0}^{1/2}\frac{E_{2k}^{*}(t)}{2 \cos(\pi t)}\left(\sin((2N+2)\pi t)\right)\,dt. \tag{3.7}\]
To proceed further, we justify that the function \(\frac{E_{2k}^{*}(t)}{2\cos(\pi t)}\sin((2N+2)\pi t)\) is differentiable on \([0,1/2]\) with continuous derivative. Similar to the case of \(f(t)\), we extend the function
\[g(t)\ =\ \frac{E_{2k}^{*}(t)}{\cos(\pi t)},\ \ \ \ \mbox{for}\ t\in[0,1/2),\]
to \(t=1/2\), which can be achieved by applying (1.1) and (1.5):
\[\lim_{t\to 1/2}\frac{E_{2k}(t)-\frac{E_{2k}}{2^{2k}}\sin(\pi t)}{2\cos(\pi t)}\ =\ 0.\]
Therefore, \(g(t)\) is differentiable with continuous derivative on \([0,1/2]\).
We now consider the integral on the right-hand side of the last equation of (3.7). Writing \((2N+2)\pi=R\) and integrating by parts give
\[\int_{0}^{1/2}g(t)\sin(Rt)\,dt\ =\ -\frac{\cos(R/2)}{R}g(1/2)+\frac{1}{R}g(0)+ \int_{0}^{1/2}g^{\prime}(t)\frac{\cos(Rt)}{R}\,dt.\]
The boundedness of \(g(0)\), \(g(1/2)\) and \(g^{\prime}(t)\) shows that each term in the above sum approaches zero as \(R\) approaches infinity, and therefore
\[\lim_{N\to\infty}\sum_{m=0}^{N}(-1)^{m}I^{*}(k,m)=0.\]
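This convergence can also be illustrated numerically. Below is a small sketch for \(k=1\) (not part of the proof), where \(E_{2}(t)=t^{2}-t\) and \(E_{2}=-1\), so that \(E_{2}^{*}(t)=t^{2}-t+\sin(\pi t)/4\); the alternating partial sums shrink toward zero as \(N\) grows:

```python
# Partial sums of sum_m (-1)^m I*(1, m); they tend to 0 as claimed.
import numpy as np

def I_star(m, nodes=2000):
    # Gauss-Legendre quadrature of int_0^{1/2} E*_2(t) sin((2m+1) pi t) dt.
    x, w = np.polynomial.legendre.leggauss(nodes)
    t = 0.25 * (x + 1.0)                     # map [-1, 1] onto [0, 1/2]
    f = (t**2 - t + np.sin(np.pi * t) / 4.0) * np.sin((2 * m + 1) * np.pi * t)
    return 0.25 * float(np.dot(w, f))

for N in (5, 50, 500):
    S = sum((-1)**m * I_star(m) for m in range(N + 1))
    print(N, S)   # magnitudes decay, roughly like 1/N^2
```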
Thus, Equation (3.5) is simplified as
\[\sum_{m=0}^{\infty}(-1)^{m}I(k,m)\ =\ -\frac{B_{2k+1,\chi_{4}}}{(2k+1)2^{2k+1}}.\]
This, together with Equation (3.3), completes the proof.
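Combining Equation (3.3) with the last display and with the identity below Equation (3.4) (which trades \(-B_{2k+1,\chi_{4}}/((2k+1)2^{2k+1})\) for \(E_{2k}/2^{2k+2}\)) gives \(\beta(2k+1)=(-1)^{k}\pi^{2k+1}E_{2k}/((2k)!\,2^{2k+2})\), which the following sketch checks numerically:

```python
# Spot check: beta(2k+1) = (-1)^k pi^(2k+1) E_{2k} / ((2k)! 2^(2k+2)).
import math

def euler_numbers(n_max):
    E = [0] * (n_max + 1)
    E[0] = 1
    for n in range(1, n_max // 2 + 1):
        E[2 * n] = -sum(math.comb(2 * n, 2 * j) * E[2 * j] for j in range(n))
    return E

def beta_series(s, terms=200000):
    # Dirichlet beta via its alternating series; the error is below the
    # first omitted term, which is negligible for s >= 3.
    return sum((-1)**m / (2 * m + 1)**s for m in range(terms))

E = euler_numbers(6)
for k in (1, 2, 3):
    closed = (-1)**k * math.pi**(2*k + 1) * E[2*k] / (math.factorial(2*k) * 2**(2*k + 2))
    print(k, beta_series(2*k + 1), closed)   # k = 1 gives pi^3/32 on both sides
```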
## 4. An Integral Representation of \(\beta(2k)\)
This section is devoted to obtaining the integral representation of \(\beta(2k)\) as stated in Theorem 3. In this case, we split the proof into two steps.
Proof of Theorem 3.: We consider the slightly different auxiliary integrals
\[J(k,m)\ =\ \int_{0}^{1/2}E_{2k+1}(t)\cos((2m+1)\pi t)\,dt \tag{4.1}\]
with integers \(k,m\geq 0\).
**1) Summing auxiliary functions.** Similar to the case of \(I(0,m)\), applying (1.5) and integrating by parts yield
\[J(0,m)\ =\ \frac{E_{1}\left(\frac{1}{2}\right)\sin\left(\frac{(2m+1)}{2} \pi\right)-E_{1}(0)\sin(0)}{(2m+1)\pi}-\int_{0}^{1/2}E_{0}(t)\frac{\sin((2m+1 )\pi t)}{(2m+1)\pi}\,dt.\]
Then, using (1.1), together with the facts that \(E_{0}(t)=1\) and \(\sin(0)=0\), we are left with
\[J(0,m)\ =\ -\int_{0}^{1/2}\frac{\sin((2m+1)\pi t)}{(2m+1)\pi}\,dt\ =\ -\frac{1}{(2m+1)^{2}\pi^{2}}. \tag{4.2}\]
Now we consider Equation (4.1) when \(k\geq 1\). Integrating by parts twice, along with (1.1), (1.4), and (1.5), gives us
\[J(k,m)=-\frac{(2k+1)(2k)}{(2m+1)^{2}\pi^{2}}J(k-1,m). \tag{4.3}\]
Putting Equations (4.2) and (4.3) together provides the closed form of \(J(k,m)\) as
\[J(k,m)\ =\ \frac{(-1)^{k+1}(2k+1)!}{(2m+1)^{2k+2}\pi^{2k+2}}.\]
Therefore, we can relate \(J(k,m)\) to \(\beta(2k)\) as
\[\sum_{m=0}^{\infty}(-1)^{m}J(k-1,m)\ =\ \sum_{m=0}^{\infty}(-1)^{m}\frac{(-1)^{k} (2k-1)!}{(2m+1)^{2k}\pi^{2k}}\ =\ \frac{(-1)^{k}(2k-1)!}{\pi^{2k}}\beta(2k),\]
for any \(k\geq 1\), or equivalently,
\[\beta(2k)\ =\ \frac{(-1)^{k}\pi^{2k}}{(2k-1)!}\sum_{m=0}^{\infty}(-1)^{m}J(k-1,m). \tag{4.4}\]
**2) Computing telescoping series.** We now simplify the right-hand side of Equation (4.4) by exploiting a telescoping series and the trigonometric identity
\[\cos((2m+1)\pi t)\ =\ \frac{\cos(2m\pi t)+\cos((2m+2)\pi t)}{2\cos(\pi t)}. \tag{4.5}\]
Applying Equation (4.5) in Equation (4.1), we obtain that
\[J(k-1,m)\ =\ \int_{0}^{1/2}E_{2k-1}(t)\frac{\cos(2m\pi t)+\cos((2m+2)\pi t)}{2 \cos(\pi t)}\ dt.\]
Multiplying each \(J(k-1,m)\) by \((-1)^{m}\) and summing up over nonnegative integers \(m\), then cancelling the repetitive terms of the resulting telescoping sum (the function \(h(t):=E_{2k-1}(t)/(2\cos(\pi t))\) extends continuously to \(t=1/2\), since \(E_{2k-1}(1/2)=0\) by (1.1)), we obtain

\[\sum_{m=0}^{\infty}(-1)^{m}J(k-1,m)\ =\ \int_{0}^{1/2}\frac{E_{2k-1}(t)}{2\cos(\pi t)}\,dt+\lim_{N\to\infty}(-1)^{N}\int_{0}^{1/2}h(t)\cos((2N+2)\pi t)\,dt. \tag{4.6}\]

We claim that the limit on the right-hand side of Equation (4.6) is null. Let \(R\) denote \((2N+2)\pi\). The integral above then equals
\[\lim_{N\to\infty}(-1)^{N}\int_{0}^{1/2}h(t)\cos(Rt)dt\] \[=\lim_{N\to\infty}(-1)^{N}\left(h(1/2)\frac{\sin(R/2)}{R}-h(0) \frac{\sin(0)}{R}-\int_{0}^{1/2}h^{\prime}(t)\frac{\sin(Rt)}{R}dt\right).\]
Since \(h(1/2),h(0)\), and \(h^{\prime}(t)\) are bounded, each summand approaches \(0\) as \(R\to\infty\), and therefore this limit is indeed \(0\). Thus, the summation (4.6) is
\[\sum_{m=0}^{\infty}(-1)^{m}J(k-1,m)\ =\ \int_{0}^{1/2}\frac{E_{2k-1}(t)\sec(\pi t )}{2}dt.\]
Substituting this back into Equation (4.4) yields
\[\beta(2k)\ =\ \frac{(-1)^{k}\pi^{2k}}{2(2k-1)!}\int_{0}^{1/2}E_{2k-1}(t)\sec(\pi t)dt,\]
for all positive integers \(k\), as desired.
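The representation is easy to check numerically; the sketch below (not part of the proof) rebuilds the Euler polynomial from its expansion about \(t=1/2\) and integrates with Gauss-Legendre nodes, which avoid the endpoint \(t=1/2\) where \(\sec(\pi t)\) blows up while the product \(E_{2k-1}(t)\sec(\pi t)\) stays finite:

```python
# Check of beta(2k) = (-1)^k pi^(2k)/(2 (2k-1)!) * int_0^{1/2} E_{2k-1}(t) sec(pi t) dt.
import math
import numpy as np

def euler_numbers(n_max):
    E = [0] * (n_max + 1)
    E[0] = 1
    for n in range(1, n_max // 2 + 1):
        E[2 * n] = -sum(math.comb(2 * n, 2 * j) * E[2 * j] for j in range(n))
    return E

def euler_poly(n, t, E):
    # Expansion about t = 1/2; for odd n it has no constant term, so the
    # integrand suffers no cancellation near t = 1/2.
    return sum(math.comb(n, j) * E[j] / 2**j * (t - 0.5)**(n - j) for j in range(n + 1))

def beta_series(s, terms=200000):
    return sum((-1)**m / (2 * m + 1)**s for m in range(terms))

k = 2                                      # check beta(4)
E = euler_numbers(2 * k)
x, w = np.polynomial.legendre.leggauss(200)
t = 0.25 * (x + 1.0)                       # map [-1, 1] onto [0, 1/2]
f = [euler_poly(2*k - 1, ti, E) / math.cos(math.pi * ti) for ti in t]
integral = 0.25 * float(np.dot(w, np.array(f)))
rhs = (-1)**k * math.pi**(2*k) / (2 * math.factorial(2*k - 1)) * integral
print(beta_series(2*k), rhs)               # both ~ 0.9889445517
```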
**Acknowledgement.** We are grateful for the support from the Kibbe Science Fellowship from Bowdoin College.
|
2309.03327 | Uniform Asymptotic Approximation Method with Pöschl-Teller Potential | In this paper, we study analytical approximate solutions of the second-order
homogeneous differential equations with the existence of only two turning
points (but without poles), by using the uniform asymptotic approximation (UAA)
method. To be more concrete, we consider the Pöschl-Teller (PT) potential,
for which analytical solutions are known. Depending on the values of the
parameters involved in the PT potential, we find that the upper bounds of the
errors of the approximate solutions in general are $\lesssim 0.15\% \sim 10\%
$, to the first-order approximation of the UAA method. The approximations can
be easily extended to high-order, with which the errors are expected to be much
smaller. Such obtained analytical solutions can be used to study cosmological
perturbations in the framework of quantum cosmology, as well as quasi-normal
modes of black holes. | Rui Pan, John Joseph Marchetta, Jamal Saeed, Gerald Cleaver, Bao-Fei Li, Anzhong Wang, Tao Zhu | 2023-09-06T19:18:04Z | http://arxiv.org/abs/2309.03327v3 | # Uniform Asymptotic Approximation Method with Poschl-Teller Potential
###### Abstract
In this paper, we study analytical approximate solutions of the second-order homogeneous differential equations with the existence of only two turning points (but without poles), by using the uniform asymptotic approximation (UAA) method. To be more concrete, we consider the Poschl-Teller (PT) potential, for which analytical solutions are known. Depending on the values of the parameters involved in the PT potential, we find that the upper bounds of the errors of the approximate solutions in general are \(\lesssim 0.15\%\sim 10\%\), to the first-order approximation of the UAA method. The approximations can be easily extended to high-order, with which the errors are expected to be much smaller. Such obtained analytical solutions can be used to study cosmological perturbations in the framework of quantum cosmology, as well as quasi-normal modes of black holes.
## I Introduction
A century after the first claim by Einstein that general relativity (GR) needs to be quantized, the unification of Quantum Mechanics and GR still remains an open question, despite enormous efforts [1]. Such a theory is necessary not only for conceptual reasons but also for the understanding of fundamental issues, such as the big bang and black hole singularities. Various theories have been proposed and among them, string/M-Theory and Loop Quantum Gravity (LQG) have been extensively investigated [2; 3]. Differences between the two approaches are described in [4; 5].
LQG was initially based on a canonical approach to quantum gravity (QG) introduced earlier by Dirac, Bergmann, Wheeler, and DeWitt [6]. However, instead of using metrics as the quantized objects [6], LQG is formulated in terms of densitized triads and connections, and is a non-perturbative and background-independent quantization of GR [7]. The gravitational sector is described by the SU(2)-valued Ashtekar connection and its associated conjugate momentum, the densitized triad, from which one defines the holonomy of Ashtekar's connection and the flux of the densitized triad. Then, one can construct the full kinematical Hilbert space in a rigorous and well-defined way [3]. An open question of LQG is its semiclassical limit, that is, are there solutions of LQG that closely approximate those of GR in the semiclassical limit?
Although the above question still remains open, concrete examples can be found in the context of loop quantum cosmology (LQC) (For recent reviews of LQC, see [8; 9; 10; 11; 12; 13; 14; 15; 16; 17] and references therein). Physical implications of LQC have also been studied using _the effective descriptions_ of the quantum spacetimes derived from coherent states [18], whose validity has been verified numerically for various spacetimes [19; 20], especially for states sharply peaked on classical trajectories at late times [21]. The effective dynamics provide a definitive answer on the resolution of the big bang singularity [22; 23; 24; 25; 26; 27], replaced by a quantum bounce when the energy density of matter reaches a maximum value determined purely by the underlying quantum geometry.
To connect LQC with observations, cosmological perturbations in LQC have also been investigated intensively in the past decade, and a variety of different approaches to extend LQC to include cosmological perturbations have been developed. These include the dressed metric [28; 29; 30], hybrid [31; 32; 33; 34], deformed algebra [35; 36; 37; 38] and separate universe [39; 40] approaches. For a brief review on each of these approaches, we refer readers to [16].
One of the major challenges in the studies of cosmological perturbations in LQC is how to solve for the mode functions \(\mu_{k}\) from the modified Mukhanov-Sasaki equation. So far, this has mainly been done numerically [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. However, such computations often require high-performance computational resources [41], which are not accessible to a general audience.
In the past decade, we have systematically developed the uniform asymptotic approximation (UAA) method initially proposed by Olver [42; 43; 44], and applied it successfully to various circumstances [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63]1. In this paper, we shall continue this line of work by considering the
case in which the effective potential has only zeros (turning points) but no singularities. To be more concrete, we shall consider the Poschl-Teller (PT) potential, for which analytical solutions are known [66]. The consideration of this potential is also motivated by the studies of cosmological perturbations in the dressed metric and hybrid approaches [67; 68], in which it was shown explicitly that the potentials for the mode functions can be well approximated by the PT potential with different choices of the PT parameters. In particular, in the dressed metric approach, the mode function satisfies the following equation [67]
Footnote 1: See also [64; 65] for related studies.
\[\mu_{k}^{\prime\prime}(\eta)+\left[k^{2}-\mathscr{V}(\eta)\right]\mu_{k}(\eta)=0, \tag{1.1}\]
in which \(\mathscr{V}(\eta)\) serves as an effective potential. During the bouncing phase it is given by
\[\mathscr{V}_{\text{dressed}}(\eta)\equiv\frac{\gamma_{\text{B}}m_{\text{Pl}}^ {2}(3-\gamma_{\text{B}}t^{2}/t_{\text{Pl}}^{2})}{9(1+\gamma_{\text{B}}t^{2}/t _{\text{Pl}}^{2})^{5/3}}, \tag{1.2}\]
where \(\gamma_{\text{B}}\) is a constant introduced in [67], and \(m_{\text{Pl}}\) and \(t_{\text{Pl}}\) are respectively, the Planck mass and time. This potential can be well approximated by a PT potential
\[\mathscr{V}_{\text{PT}}(\eta)=\frac{\mathscr{V}_{0}}{\cosh^{2}\alpha(\eta- \eta_{\text{B}})}, \tag{1.3}\]
with
\[\mathscr{V}_{0}=\frac{\gamma_{\text{B}}m_{\text{Pl}}^{2}}{3}=\frac{\alpha^{2 }}{6}. \tag{1.4}\]
Here \(\eta\) is the conformal time related to the cosmic time \(t\) by \(d\eta=dt/a(t)\). On the other hand, in the hybrid approach, the effective potential during the bouncing phase is given by
\[\mathscr{V}_{\text{Hybrid}}(\eta)=-\frac{\gamma_{\text{B}}m_{\text{Pl}}^{2}( 1-\gamma_{\text{B}}t^{2}/t_{\text{Pl}}^{2})}{9(1+\gamma_{\text{B}}t^{2}/t_{ \text{Pl}}^{2})^{5/3}}, \tag{1.5}\]
which can be also modeled by the PT potential (1.3) but now with [68]
\[\mathscr{V}_{0}=\frac{m_{\text{Pl}}^{2}\gamma_{B}}{9},\quad\alpha^{2}=\frac{2}{3}m_{\text{Pl}}^{2}\gamma_{B}. \tag{1.6}\]
For more details, we refer readers to [67; 68].
The rest of the paper is organized as follows: In Sec. II we provide a brief review of the UAA method with two turning points, and show that the first-order approximate solution will be described by the parabolic cylinder functions. In Sec. III we construct the explicit approximate analytical solutions with the PT potential, and find that the parameter space can be divided into three different cases: A) \(k^{2}\gg\beta^{2}\), B) \(k^{2}\simeq\beta^{2}\), and C) \(k^{2}\ll\beta^{2}\), where \(k\) and \(\beta\) are real constants. After working out the error control function \(\mathscr{T}\) [cf. Appendix C] in each case, we are able to determine the parameter \(q_{0}\), introduced in the process of the UAA method in order to minimize the errors. Then, we show the upper bounds of errors of our approximate solutions with respect to the exact one, given in Appendix B. In particular, in Case A), the upper bounds are \(\lesssim 0.15\%\), while in Case B) they are no larger than \(10\%\). In Case C), the errors are also very small, except at the minimal points [cf. Fig. 10], at which the approximate solutions deviate significantly from the analytical one. The causes of such large errors are not yet known and are still under investigation. In each of these three cases, we also develop our numerical codes, and find that the numerical solutions trace the exact one very well, and the upper bounds of errors are always less than \(10^{-4}\%\). The paper ends in Sec. IV, in which our main conclusions are summarized. There are also three appendices, A, B, and C, in which some mathematical formulas are presented.
## II The uniform asymptotic approximation method
Let us start with the following second-order differential equation
\[\frac{d^{2}\mu_{k}(y)}{dy^{2}}=f(y)\mu_{k}(y). \tag{2.1}\]
It should be noted that all second-order linear homogeneous ordinary differential equations (ODEs) can be written in the above form by properly choosing the variable \(y\) and \(\mu_{k}(y)\). Instead of working with the above form, we introduce two functions \(g(y)\) and \(q(y)\), so that the function \(f(y)\) takes the form 2
Footnote 2: Note that the decomposition of \(f(y)\) into \(g(y)\) and \(q(y)\) in Eq.(2.2) is not unique [64].
\[f(y)=\lambda^{2}g(y)+q(y), \tag{2.2}\]
where \(\lambda\) is a large positive dimensionless constant and serves as a bookkeeping parameter, so that we can expand \(\mu_{k}(y)\) as
\[\mu_{k}(y)=\sum_{n=0}^{\infty}\frac{\mu_{k}^{(n)}(y)}{\lambda^{n}}. \tag{2.3}\]
After all the calculations are done, one can always set \(\lambda=1\) by simply absorbing the factor \(\lambda^{-n}\) into \(\mu_{k}^{(n)}(y)\). It should be noted that there exist cases in which the above expansion does not converge, and in these cases we shall expand \(\mu_{k}(y)\) only to finite terms, say, \(\mathcal{N}\), so that \(\mu_{k}(y)\) is well approximated by the sum of these \(\mathcal{N}\) terms. On the other hand, the main reason to introduce two functions \(g(y)\) and \(q(y)\), instead of only \(f(y)\), is to minimize errors by properly choosing \(g(y)\) and \(q(y)\).
In general, the function \(g(y)\) has singularities and/or zeros in the interval of our interest. We call the zeros and singularities of \(g(y)\) as _turning points_ and _poles_, respectively. The _uniform asymptotic approximate_ (UAA) solutions of \(\mu_{k}(y)\) depend on the properties of \(g(y)\) around
their poles and turning points [42; 43; 44]. The cases in which \(g(y)\) has both poles and turning points were studied in detail in [47; 49; 54], so in this paper we shall focus on the cases where singularities are absent and only turning points exist. As will be shown below, the treatments of these cases will be different from the ones considered in [47; 49; 54]. In particular, in our previous studies the function \(q(y)\) was uniquely determined by requiring that _the error control function be finite and minimized at the poles_, while in the current cases no such poles exist. So, to fix \(q(y)\), other analyses of the error control function must be carried out.
### The UAA Method
The UAA method includes three major steps: (i) the Liouville transformations; (ii) the minimization of the error control function; and (iii) the choice of the function \(y(\zeta)\), where \(\zeta\) is a new variable. In the following, we shall consider each of them separately.
#### ii.1.1 The Liouville Transformations
The Liouville transformations consist of introducing a new variable \(\zeta(y)\), for which it is assumed that _the inverse \(y=y(\zeta)\) always exists and is thrice-differentiable_. Without loss of the generality, we also assume that \(y(\zeta)\) is a monotonically increasing function [cf. Fig. 1]. Then, in terms of \(U(\zeta)\), which is defined by
\[U(\zeta)\equiv\dot{y}^{-1/2}\mu_{k}, \tag{2.4}\]
Eq.(2.1) takes the form,
\[\frac{d^{2}U(\zeta)}{d\zeta^{2}}=\left[\lambda^{2}\dot{y}^{2}g+\psi(\zeta)\right]U(\zeta), \tag{2.5}\]
where
\[\dot{y}\equiv\frac{dy(\zeta)}{d\zeta}>0,\quad\zeta^{\prime}(y)\equiv\frac{d\zeta(y)}{dy}=\frac{1}{\dot{y}}, \tag{2.6}\]
and
\[\psi(\zeta) \equiv \dot{y}^{2}q+\dot{y}^{1/2}\frac{d^{2}}{d\zeta^{2}}\left(\dot{y}^{-1/2}\right) \tag{2.7}\] \[= \dot{y}^{2}q-\dot{y}^{3/2}\frac{d^{2}}{dy^{2}}\left(\dot{y}^{1/2}\right)\equiv\psi(y).\]
It should be noted that Eqs.(2.1) and (2.5) are completely equivalent, and so far no approximations are taken. However, the advantage of the form of Eq.(2.5) is that, by properly choosing \(q(y)\), the term \(|\psi(\zeta)|\) can be much smaller than \(\left|\lambda^{2}\dot{y}^{2}g\right|\), that is,
\[\left|\frac{\psi}{\lambda^{2}\dot{y}^{2}g}\right|\ll 1, \tag{2.8}\]
so that the exact solution of Eq.(2.1) can be well approximated by the first-order solution of Eq.(2.5) with \(\psi(\zeta)=0\). This immediately raises the question: how to choose \(q(y)\) so that the condition (2.8) holds. To explain this in detail, let us move onto the next subsection.
#### ii.1.2 Minimization of Errors
To minimize the errors, let us first introduce _the error control function_[42; 43; 44; 47; 49; 54]
\[\mathscr{T}(\zeta)\equiv-\int\frac{\psi(\zeta)}{|\dot{y}^{2}g|^{1/2}}d\zeta. \tag{2.9}\]
Then, we introduce the free parameters \(a_{n}\) and \(b_{n}\) into the functions \(g(y)\) and \(q(y)\), so that we have
\[g(y)=g\left(y,a_{n}\right),\quad q(y)=q\left(y,b_{n}\right), \tag{2.10}\]
where \(n=1,2,...,N\), with \(N\) being an integer. It is clear that for such chosen \(g(y)\) and \(q(y)\), the error control function \(\mathscr{T}(\zeta)\) will also depend on \(a_{n}\) and \(b_{n}\). To minimize the errors, one way is to minimize the error control function by properly choosing \(a_{n}\) and \(b_{n}\), so that
\[\frac{\partial\mathscr{T}\left(\zeta,a_{n},b_{n}\right)}{\partial a_{n}}=0,\qquad\frac{\partial\mathscr{T}\left(\zeta,a_{n},b_{n}\right)}{\partial b_{n}}=0,\] \[\qquad\qquad\qquad\left(n=1,2,...,N\right). \tag{2.11}\]
#### ii.1.3 Choice of \(y(\zeta)\)
On the other hand, the errors also depend on the choice of \(y(\zeta)\), which in turn sensitively depends on the properties of the functions \(g(y)\) and \(q(y)\) near their poles and turning points. In addition, it must be chosen so that the resulting equation of the first-order approximation (obtained by setting \(\psi(\zeta)=0\)) can be solved explicitly (in terms of known functions). Considering all the above, it has been found that \(y(\zeta)\) can be chosen as [42; 43; 44; 47; 49; 54]
\[\dot{y}^{2}g=\begin{cases}\text{sgn}(g),&\text{zero turning points},\\ \zeta,&\text{one turning point},\\ \zeta_{0}^{2}-\zeta^{2},&\text{two turning points},\end{cases} \tag{2.12}\]
Figure 1: The function \(\zeta(y)\) vs \(y\), which is assumed to be always an increasing function of \(y\).
in the cases with zero, one and two turning points, respectively. Here \(\text{sgn}(g)=1\) for \(g>0\) and \(\text{sgn}(g)=-1\) for \(g<0\).
In the rest of this paper, we shall consider only the cases with two turning points.
### UAA Method for Two Turning Points
For the cases with two turning points, we can always write \(g(y)\) as
\[g(y)=p(y)(y-y_{1})(y-y_{2}), \tag{2.13}\]
where \(y_{1}\) and \(y_{2}\) are the two turning points, and \(p(y)\) is a function of \(y\) with \(p(y_{i})\neq 0,\ (i=1,2)\). In general, according to the properties of \(y_{1}\) and \(y_{2}\), we can divide all the cases into three different subclasses:
1. \(y_{1}\) and \(y_{2}\) are two distinct real roots of \(g(y)=0\);
2. \(y_{1}=y_{2}\), a double real root of \(g(y)=0\); and
3. \(y_{1}\) and \(y_{2}\) are two complex roots of \(g(y)=0\). Since \(g(y)\) is real, in this case these two roots must be complex conjugate, \(y_{1}=y_{2}^{*}\).
To apply the UAA method to Eq.(2.5), we assume that the following conditions are satisfied [47; 49; 54]:
* When far away from any of the two turning points, we require \[\left|\frac{q(y)}{g(y)}\right|\ll 1.\] (2.14)
* When near any of these two points, we require \[\left|\frac{q(y)(y-y_{i})}{g(y)}\right|\ll 1,\ (i=1,2),\] (2.15) provided that the two turning points are far away from each other, that is, when \(\left|y_{1}-y_{2}\right|\gg 1\).
* If the two turning points are close to each other, \(\left|y_{1}-y_{2}\right|\simeq 0\), then near these points we require \[\left|\frac{q(y)(y-y_{1})(y-y_{2})}{g(y)}\right|\ll 1.\] (2.16)
It should be noted that, when \(\left|y_{2}-y_{1}\right|\gg 1\), the two turning points are far away, and each of them can be treated as an isolated single turning point [42; 43]. In addition, without loss of generality, we assume that \(g(y)<0\) for \(y>y_{2}\) or \(y<y_{1}\), when \(y_{1}\) and \(y_{2}\) are real. When \(y_{2}\) and \(y_{1}\) are complex conjugate, we assume that \(g(y)<0\) [cf. Fig. 2]. Then, in this case we adopt a method to treat all these three classes listed above together [44; 47; 49; 54]. In particular, we choose \(\dot{y}^{2}g\) as
\[\dot{y}^{2}g=\zeta_{0}^{2}-\zeta^{2}\begin{cases}>0,&g>0,\\ =0,&g=0,\\ <0,&g<0,\end{cases} \tag{2.17}\]
so that \(\zeta\) is an increasing function of \(y\) [cf. Fig. 1] and
\[\sqrt{\left|g(y)\right|}\;dy=\sqrt{\left|\zeta_{0}^{2}-\zeta^{2}\right|}\;d\zeta. \tag{2.18}\]
When we integrate the above equation, without loss of the generality, we shall choose the integration constants so that
\[\zeta(y_{1})=-\zeta_{0},\quad\zeta(y_{2})=\zeta_{0}. \tag{2.19}\]
Then, we find that
\[\zeta_{0}^{2}=\begin{cases}>0,&y_{1,2}\text{ real, and }y_{1}\neq y_{2},\\ =0,&y_{1,2}\text{ real, and }y_{1}=y_{2},\\ <0,&y_{1,2}\text{ complex},\end{cases} \tag{2.20}\]
with
\[\zeta_{0}^{2} = \pm\frac{2}{\pi}\int_{y_{1}}^{y_{2}}\sqrt{\left|g(y)\right|}dy \tag{2.21}\] \[= \pm\frac{2}{\pi}\int_{-\zeta_{0}}^{\zeta_{0}}\sqrt{\left|\zeta_{0}^{2}-\zeta^{2}\right|}d\zeta,\]
where "\(+\)" corresponds to the cases that the two turning points \(y_{1}\) and \(y_{2}\) are both real, and "\(-\)" to the cases that the two turning points \(y_{1}\) and \(y_{2}\) are complex conjugate. When \(y_{1}\) and \(y_{2}\) are complex conjugate, the integration of Eq.(21) is along the imaginary axis [44]. When the two real roots are equal, we have \(\zeta_{0}=0\).
To proceed further, let us derive the relation between \(\zeta(y)\) and \(y\) by integrating the right-hand side of Eq.(2.18). To this goal, it is found easier to distinguish the case in which \(y_{1}\) and \(y_{2}\) are real from the one in which they are complex conjugate.
#### ii.2.1 When \(y_{1,2}\) Are Real
Let us first consider the case when \(y_{1}\) and \(y_{2}\) are real. Then, when \(y>y_{2}\), we have \(\zeta(y)>\zeta_{0}\) [cf. Fig.1]. Hence,
Figure 2: The function \(g(y)\) defined by Eq. (3.3) for different choices of \(k\) and \(\beta\). In particular, the dotted black line denotes the case \(k^{2}<\beta^{2}\), and the solid blue line denotes the case \(k^{2}=\beta^{2}\), while the dash-dotted red line denotes the case \(k^{2}>\beta^{2}\).
from Eq. (2.18) we find
\[\int_{y_{2}}^{y}\sqrt{-g(y^{\prime})}dy^{\prime}=\int_{\zeta_{0}}^{\zeta}\sqrt{v^{2}-\zeta_{0}^{2}}dv\] \[=\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}-\frac{\zeta_{0}^{2}}{2}\ln\left(\frac{\zeta+\sqrt{\zeta^{2}-\zeta_{0}^{2}}}{\zeta_{0}}\right)\] \[=\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}-\frac{\zeta_{0}^{2}}{2}\;\text{arcosh}\left(\frac{\zeta}{\zeta_{0}}\right),\;(y\geq y_{2}). \tag{2.22}\]
When \(y\leq y_{1}\), we have \(\zeta(y)\leq-\zeta_{0}\). Then, from Eq. (2.18) we find
\[\int_{y}^{y_{1}}\sqrt{-g(y^{\prime})}dy^{\prime}=\int_{\zeta}^{-\zeta_{0}}\sqrt{v^{2}-\zeta_{0}^{2}}dv\] \[=-\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}+\frac{\zeta_{0}^{2}}{2}\ln\left(\frac{-\zeta-\sqrt{\zeta^{2}-\zeta_{0}^{2}}}{\zeta_{0}}\right)\] \[=-\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}-\frac{\zeta_{0}^{2}}{2}\ln\left(\frac{-\zeta+\sqrt{\zeta^{2}-\zeta_{0}^{2}}}{\zeta_{0}}\right)\] \[=-\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}-\frac{\zeta_{0}^{2}}{2}\;\text{arcosh}\left(-\frac{\zeta}{\zeta_{0}}\right),(y\leq y_{1}). \tag{2.23}\]
When \(y_{1}\leq y\leq y_{2}\), we have \(-\zeta_{0}<\zeta(y)<\zeta_{0}\), and
\[\int_{y_{1}}^{y}\sqrt{g(y^{\prime})}dy^{\prime}=\int_{-\zeta_{0}}^{\zeta}\sqrt{\zeta_{0}^{2}-v^{2}}dv=\frac{1}{2}\zeta\sqrt{\zeta_{0}^{2}-\zeta^{2}}\] \[\qquad+\frac{\zeta_{0}^{2}}{2}\arccos\left(-\frac{\zeta}{\zeta_{0}}\right),\;(y_{1}\leq y\leq y_{2}). \tag{2.24}\]
#### ii.2.2 When \(y_{1,2}\) Are Complex Conjugate
Now let us turn to consider the case when \(y_{1}\) and \(y_{2}\) are complex. For this case \(\zeta_{0}^{2}\) is always negative, \(\zeta_{0}^{2}<0\), thus from Eq. (2.18) we find [44]
\[\int_{0}^{y}\sqrt{-g(y^{\prime})}dy^{\prime}=\int_{0}^{\zeta}\sqrt{\zeta^{2}-\zeta_{0}^{2}}d\zeta\] \[=\frac{1}{2}\zeta\sqrt{\zeta^{2}-\zeta_{0}^{2}}-\frac{\zeta_{0}^{2}}{2}\ln\left(\frac{\zeta+\sqrt{\zeta^{2}-\zeta_{0}^{2}}}{|\zeta_{0}|}\right). \tag{2.25}\]
#### ii.2.3 The First-order Approximate Solutions
With the choice of Eq.(2.17), we find that Eq.(2.5) reduces to
\[\frac{d^{2}U}{d\zeta^{2}}=\Big{[}\lambda^{2}\left(\zeta_{0}^{2}-\zeta^{2}\right)+\psi(\zeta)\Big{]}U, \tag{2.26}\]
where we assume that \(\zeta\in(-\zeta_{2},\zeta_{2})\), with \(\zeta_{2}\) being a real and positive constant, which can be arbitrarily large \(\zeta_{2}\rightarrow\infty\).
Neglecting the \(\psi(\zeta)\) term, we find that the approximate solutions can be expressed in terms of the parabolic cylinder functions \(W(\frac{1}{2}\lambda\zeta_{0}^{2},\pm\sqrt{2\lambda}\zeta)\)[44], and are given by
\[U(\zeta) = \alpha_{k}\Bigg{\{}W\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)+\epsilon_{1}\Bigg{\}} \tag{2.27}\] \[+\beta_{k}\Bigg{\{}W\left(\frac{1}{2}\lambda\zeta_{0}^{2},-\sqrt{2\lambda}\zeta\right)+\epsilon_{2}\Bigg{\}},\]
from which we have
\[\mu_{k}(y) = \alpha_{k}\left(\frac{\zeta^{2}-\zeta_{0}^{2}}{-g(y)}\right)^{\frac{1}{4}}\left[W\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)+\epsilon_{1}\right]\] \[+\beta_{k}\left(\frac{\zeta^{2}-\zeta_{0}^{2}}{-g(y)}\right)^{\frac{1}{4}}\left[W\left(\frac{1}{2}\lambda\zeta_{0}^{2},-\sqrt{2\lambda}\zeta\right)+\epsilon_{2}\right], \tag{2.28}\]
where \(\alpha_{k}\) and \(\beta_{k}\) are two integration constants, \(\epsilon_{1}\) and \(\epsilon_{2}\) are the errors of the corresponding approximate solutions, whose upper bounds are given by Eqs.(A.1) and (A.2) in Appendix A.
For the choice of Eq.(2.17), we find that the error control function defined by Eq.(2.9) now takes the form
\[\mathscr{T}(\zeta) = -\int^{\zeta}\left\{\frac{q}{g}-\frac{5}{16}\frac{g^{\prime 2}}{g^{3}}+\frac{1}{4}\frac{g^{\prime\prime}}{g^{2}}\right\}\sqrt{v^{2}-\zeta_{0}^{2}}dv\] \[+\int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(v^{2}-\zeta_{0}^{2})^{3}}+\frac{3}{4(v^{2}-\zeta_{0}^{2})^{2}}\right\}\sqrt{v^{2}-\zeta_{0}^{2}}dv\] \[= -\int^{y}\left\{\frac{q}{g}-\frac{5}{16}\frac{g^{\prime 2}}{g^{3}}+\frac{1}{4}\frac{g^{\prime\prime}}{g^{2}}\right\}\sqrt{-g}dy^{\prime}\] \[+\int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(v^{2}-\zeta_{0}^{2})^{5/2}}+\frac{3}{4(v^{2}-\zeta_{0}^{2})^{3/2}}\right\}dv, \tag{2.29}\]
for \(g<0\), and
\[\mathscr{T}(\zeta) = \int^{\zeta}\left\{\frac{q}{g}-\frac{5}{16}\frac{g^{\prime 2}}{g^{3}}+\frac{1}{4}\frac{g^{\prime\prime}}{g^{2}}\right\}\sqrt{\zeta_{0}^{2}-v^{2}}dv\] \[-\int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(v^{2}-\zeta_{0}^{2})^{3}}+\frac{3}{4(v^{2}-\zeta_{0}^{2})^{2}}\right\}\sqrt{\zeta_{0}^{2}-v^{2}}dv\] \[= \int^{y}\left\{\frac{q}{g}-\frac{5}{16}\frac{g^{\prime 2}}{g^{3}}+\frac{1}{4}\frac{g^{\prime\prime}}{g^{2}}\right\}\sqrt{g}dy^{\prime}\] \[+\int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(\zeta_{0}^{2}-v^{2})^{5/2}}-\frac{3}{4(\zeta_{0}^{2}-v^{2})^{3/2}}\right\}dv, \tag{2.30}\]
for \(g>0\).
## III UAA Solutions with the Poschl-Teller Potential
To study the case in which only turning points exist, in this paper we consider the second-order differential
equation (2.1) with a Poschl-Teller (PT) potential [67; 68]
\[\left(\lambda^{2}g+q\right)=-\left(k^{2}-\frac{\beta_{0}^{2}}{\cosh^{2}(\alpha y )}\right), \tag{3.1}\]
as in this case exact solutions exist, where \(k\) is the comoving wavenumber, and \(\beta_{0}\) is a real and positive constant. The two parameters \(\beta_{0}\) and \(\alpha\) determine the height and the spread of the PT potential, respectively. Under the rescaling \(\alpha y\to y\), the \(\alpha\) parameter can be absorbed into the wavenumber \(k\) and \(\beta_{0}\) by redefining \(\left(k/\alpha\to k,\beta_{0}/\alpha\to\beta_{0}\right)\). As a result, there is no loss of generality in setting \(\alpha=1\) from now on. The exact solutions for this case are presented in Appendix B.
On the other hand, to apply the UAA method to this case, and to minimize the errors of the analytic approximate solutions, we tentatively choose \(q\) as
\[q=\frac{q_{0}^{2}}{\cosh^{2}(y)}, \tag{3.2}\]
where \(q_{0}\) is a free parameter, to be determined below by minimizing the error control function (2.9) with the choice of \(\dot{y}^{2}g\) given by Eq.(2.17). Then, we have
\[g(y)=\frac{\beta^{2}}{\cosh^{2}(y)}-k^{2}, \tag{3.3}\]
where \(\beta\equiv\sqrt{\beta_{0}^{2}-q_{0}^{2}}\). In this paper, without loss of generality, we shall choose \(q_{0}\) so that \(\beta\) is always real, that is
\[\beta^{2}\equiv\beta_{0}^{2}-q_{0}^{2}>0. \tag{3.4}\]
Thus, from \(g(y)=0\) we find that the two roots are given by
\[y_{i}=\pm\cosh^{-1}\frac{\beta}{k}=\pm\cosh^{-1}\frac{\sqrt{\beta_{0}^{2}-q_{0 }^{2}}}{k}. \tag{3.5}\]
It is clear that, depending on the relative magnitudes of \(\beta_{0}\) and \(k\), as well as the choices of \(q_{0}\), two turning points can be either complex or real. In Fig. 2, we plot out the three different cases, \(k^{2}<\beta^{2}\), \(k^{2}=\beta^{2}\), and \(k^{2}>\beta^{2}\), from which it can be seen clearly that the two turning points are real and different for \(k^{2}<\beta^{2}\), real and equal for \(k^{2}=\beta^{2}\), and complex conjugate for \(k^{2}>\beta^{2}\), respectively. Then, from Eqs.(3.2) and (3.3), we find that
\[\left|\frac{q(y)}{g(y)}\right|=\left|\frac{q_{0}^{2}}{\beta^{2}-k^{2}\cosh^{2} (y)}\right|\simeq q_{0}^{2}e^{-2|y|}, \tag{3.6}\]
for \(|y|\gg 0\), and
\[\left|\frac{q(y)(y-y_{i})}{g(y)}\right|\simeq\frac{q_{0}^{2}}{y+y_{j}},(i\neq j), \tag{3.7}\]
for \(|y|\simeq|y_{i}|\) and \(|y_{1}-y_{2}|\gg 1\), and
\[\left|\frac{q(y)(y-y_{1})(y-y_{2})}{g(y)}\right|\simeq q_{0}^{2}, \tag{3.8}\]
for \(|y|\simeq|y_{1}|\) and \(|y_{1}-y_{2}|\simeq 0\). In the following, let us consider the three cases: (a) \(k^{2}\gg\beta^{2}\); (b) \(k^{2}\simeq\beta^{2}\); and (c) \(\beta^{2}\gg k^{2}\), separately.
### \(k^{2}\gg\beta^{2}\)
In this case, \(g(y)\) is always negative, \(g(y)<0\), so that the two turning points of \(g(y)=0\) are complex conjugate and are given by
\[y_{1}=y_{2}^{*}=-i\cos^{-1}\left(\frac{\beta}{k}\right)\simeq-\frac{i\pi}{2}. \tag{3.9}\]
As discussed in the last section, now \(\zeta_{0}^{2}<0\), for which Eq.(2.26) can be cast in the form
\[\frac{d^{2}W(\zeta)}{d^{2}\zeta}=\Big{\{}-\lambda^{2}\left(\zeta^{2}+\hat{ \zeta}_{0}^{2}\right)+\psi\Big{\}}W(\zeta), \tag{3.10}\]
where \(\hat{\zeta}_{0}^{2}\equiv-\zeta_{0}^{2}>0\). Note that in writing down the above equation, we have replaced \(U\) by \(W\). In addition, the new variable \(\zeta\) is related to \(y\) via
\[\int_{0}^{y}\sqrt{-g(y)}dy=\int_{0}^{\zeta}\sqrt{v^{2}+\hat{\zeta}_{0}^{2}}dv= \frac{1}{2}\hat{\zeta}_{0}^{2}\ln\left(\zeta+\sqrt{\zeta^{2}+\hat{\zeta}_{0}^{ 2}}\right)+\frac{1}{2}\zeta\sqrt{\zeta^{2}+\hat{\zeta}_{0}^{2}}-\frac{1}{2} \hat{\zeta}_{0}^{2}\ln\hat{\zeta}_{0}, \tag{3.11}\]
from which we find that \(\hat{\zeta}_{0}\) is given explicitly by
\[\hat{\zeta}_{0}^{2}=2\left(k-\beta\right)>0. \tag{3.12}\]
Moreover, in the case of the PT potential, the integration of Eq. (3.11) can be carried out explicitly, giving
\[\int_{0}^{y}dy\sqrt{-g}=\epsilon_{y}\sqrt{1-x^{2}}\sqrt{k^{2}-\beta^{2}}\times{ \rm AppellF}_{1}\left(\frac{1}{2},-\frac{1}{2},1,\frac{3}{2};\frac{1-x^{2}}{1-k^{ 2}/\beta^{2}},1-x^{2}\right), \tag{3.13}\]
where \(\epsilon_{y}\) denotes the sign of \(y\) with \(x\equiv 1/\cosh(y)\), and AppellF\({}_{1}\) is the Appell hypergeometric function. Ignoring the \(\psi\) term in Eq. (3.10), we find the general solution
\[\mu_{k}(y)=\left(\frac{\zeta^{2}+\hat{\zeta}_{0}^{2}}{-g(y)}\right)^{1/4} \Bigg{\{}\alpha_{k}W\left(-\frac{\hat{\zeta}_{0}^{2}}{2},\sqrt{2}\zeta\right) +\beta_{k}W\left(-\frac{\hat{\zeta}_{0}^{2}}{2},-\sqrt{2}\zeta\right)\Bigg{\}}, \tag{3.14}\]
where \(W\) denotes the Weber parabolic cylinder function [69], and \(\alpha_{k}\) and \(\beta_{k}\) are two integration parameters which generally depend on the comoving wavenumber \(k\).
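For concreteness, a minimal sketch (not the authors' code) of evaluating the first-order UAA solution (3.14) is given below. The map \(y\to\zeta\) inverts Eq.(3.11) with a root finder, with the left-hand side computed by direct quadrature rather than via AppellF\({}_{1}\); \(W(a,x)\) comes from scipy.special.pbwa, which is reliable only for moderate \(|a|\) and \(|x|\). The parameter values are the illustrative Case-A point used later in this subsection:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import pbwa

k, beta = 5.0, 1.0                 # illustrative Case-A values (k^2 >> beta^2)
zh0_sq = 2.0 * (k - beta)          # \hat{zeta}_0^2, Eq.(3.12)

def rhs(z):                        # right-hand side of Eq.(3.11); odd in z
    r = np.sqrt(z * z + zh0_sq)
    return 0.5 * zh0_sq * np.log(z + r) + 0.5 * z * r - 0.5 * zh0_sq * np.log(np.sqrt(zh0_sq))

def zeta_of_y(y):                  # invert Eq.(3.11) for zeta(y)
    target, _ = quad(lambda u: np.sqrt(k**2 - beta**2 / np.cosh(u)**2), 0.0, y)
    return brentq(lambda z: rhs(z) - target, -50.0, 50.0)

def mu_uaa(y, alpha_k=1.0, beta_k=0.0):   # Eq.(3.14) with the errors dropped
    z = zeta_of_y(y)
    g = beta**2 / np.cosh(y)**2 - k**2
    pref = ((z * z + zh0_sq) / (-g))**0.25
    w_p, _ = pbwa(-zh0_sq / 2.0, np.sqrt(2.0) * z)   # W(-zh0^2/2, +sqrt(2) zeta)
    w_m, _ = pbwa(-zh0_sq / 2.0, -np.sqrt(2.0) * z)  # W(-zh0^2/2, -sqrt(2) zeta)
    return pref * (alpha_k * w_p + beta_k * w_m)

print([mu_uaa(y) for y in (-1.0, 0.0, 1.0)])
```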
The validity of the analytic solution (3.14) depends on the criteria given by Eqs.(2.14) - (2.16), while its accuracy can be predicted by the error control function \(\mathscr{T}\). In the current case, we find that \(\mathscr{T}\) of Eq.(2.29) can be written as a combination of three terms, as given by Eq.(C.3) 2, where
Footnote 2: In this case, the associated error control function is \(\mathscr{V}_{\zeta_{1},\zeta}(\mathscr{T})\) for any given \(\zeta_{1}\), where \(\zeta_{1}\in(-\infty,\infty)\)[44]. In this paper, we choose \(\zeta_{1}=0\), so the integrations will be carried out in the interval \(\zeta\in[0,\infty)\), corresponding to \(y\in[0,\infty)\). Due to the symmetry of the equation, one can easily obtain the solutions for the region \(y\in(-\infty,0]\) by simply replacing \(y\) by \(-y\) (or \(\zeta\) by \(-\zeta\)).
\[\mathscr{T}_{1} = \int_{0}^{y}\frac{q}{\sqrt{-g}}dy=\frac{q_{0}^{2}\epsilon_{y}}{ \beta}\ln\left(\frac{\sqrt{1-x^{2}}\beta+\sqrt{k^{2}-\beta^{2}x^{2}}}{\sqrt{k ^{2}-\beta^{2}}}\right),\] \[\mathscr{T}_{2} = \int_{0}^{y}\left(\frac{5g^{\prime 2}}{16g^{3}}-\frac{g^{{}^{ \prime\prime}}}{4g^{2}}\right)\sqrt{-g}dy=-\epsilon_{y}\Bigg{\{}\frac{1}{4 \beta}\ln\left(\frac{\sqrt{1-x^{2}}\beta+\sqrt{k^{2}-x^{2}\beta^{2}}}{\sqrt{ k^{2}-\beta^{2}}}\right)-\frac{\sqrt{1-x^{2}}A}{12(k^{2}-\beta^{2})(k^{2}-\beta^{2}x^{ 2})^{3/2}}\Bigg{\}},\] \[\mathscr{T}_{3} = \int_{0}^{\zeta}\left(\frac{-5\hat{\zeta}_{0}^{2}}{4\left(v^{2}+ \hat{\zeta}_{0}^{2}\right)^{5/2}}+\frac{3}{4\left(v^{2}+\hat{\zeta}_{0}^{2} \right)^{3/2}}\right)dv=-\frac{\zeta\left(\zeta^{2}+6\hat{\zeta}_{0}^{2} \right)}{12\hat{\zeta}_{0}^{2}\left(\zeta^{2}+\hat{\zeta}_{0}^{2}\right)^{3/2}}, \tag{3.15}\]
where \(A\) is given by Eq.(C.5). It should be noted that \(\mathscr{T}_{1}\), \(\mathscr{T}_{2}\) and \(\mathscr{T}_{3}\) given in Eq. (3.15) all vanish when \(y=0\) (for which we have \(x=1\) and \(\zeta=0\)), that is,
\[\mathscr{T}(\zeta=0)=0. \tag{3.16}\]
Besides, as the PT potential is an even function, the error control function is antisymmetric about the origin, namely, \(\mathscr{T}(-y)=-\mathscr{T}(y)\). As a result, we will study its behavior only on the positive \(y\) axis, \(y\geq 0\). With the help of Eq. (3.11), the numeric value of the error control function at any point \(y>0\) can be found from Eq.(3.15). In particular, for \(\beta/k\ll 1\), we find that
\[\mathscr{T}=\frac{q_{0}^{2}}{k}\sqrt{1-x^{2}}-\frac{\zeta\left(\zeta^{2}+6\hat{ \zeta}_{0}^{2}\right)}{12\hat{\zeta}_{0}^{2}\left(\zeta^{2}+\hat{\zeta}_{0}^{ 2}\right)^{3/2}}+\mathcal{O}\left(x^{2},\frac{\beta^{2}}{k^{3}}\right) \rightarrow\frac{1}{24k}\left[\left(24q_{0}^{2}-1\right)-\left(\frac{\beta}{k} \right)+\mathcal{O}\left(\frac{\beta^{2}}{k^{2}}\right)\right], \tag{3.17}\]
as \(x\to 0\) (or \(y\rightarrow\infty\)). Note that \(\zeta\rightarrow\infty\) as \(y\rightarrow\infty\), which can be seen clearly from Eq.(3.11). Thus, to minimize the error control function for very large values of \(y\), we must
choose
\[q_{0}^{2}=\frac{1}{24}\simeq 4.167\times 10^{-2}. \tag{3.18}\]
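The origin of this choice can be illustrated numerically. The sketch below assembles the \(y\to\infty\) limit of \(\mathscr{T}\) from the closed forms in Eq.(3.15) (taking \(x\to 0\) in \(\mathscr{T}_{1,2}\) and \(\zeta\to\infty\) in \(\mathscr{T}_{3}\)) and locates its zero in \(q_{0}^{2}\) for the illustrative point \(k=5\), \(\beta_{0}=1\); the zero approaches \(1/24\) as \(\beta/k\to 0\), in agreement with the expansion (3.17):

```python
# Locating the q_0^2 that kills the y -> infinity limit of the error control
# function, for k = 5, beta_0 = 1 (so beta^2 = beta_0^2 - q_0^2).
import numpy as np
from scipy.optimize import brentq

k, beta0 = 5.0, 1.0

def T_inf(q0_sq):
    b2 = beta0**2 - q0_sq
    b = np.sqrt(b2)
    L = np.log((k + b) / np.sqrt(k**2 - b2))
    T1 = q0_sq / b * L                                     # x -> 0 limit of T_1
    T2 = -L / (4.0 * b) + (3*k**4 - 2*k**2*b2) / (12.0 * (k**2 - b2) * k**3)
    T3 = -1.0 / (24.0 * (k - b))                           # zeta -> infinity limit of T_3
    return T1 + T2 + T3

root = brentq(T_inf, 1e-4, 0.5)
print(root)   # ~0.051 here, consistent with (1 + beta/k)/24 from Eq.(3.17)
```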
In Fig. 3 we plot the functions, \(|q/g|\), \(|q(y-y_{1})/g|\), and \(|q(y-y_{1})(y-y_{2})/g|\), together with the error control function defined by Eqs.(C.3) - (C.5) for \((k,\beta)=(5.0,1.0)\), with \(q_{0}\) being given by Eq.(3.18). (Recall \(\beta_{0}\equiv\sqrt{\beta^{2}+q_{0}^{2}}\)). From these figures it is clear that the the numerical solution obtained by integrating Eq.(2.1) directly with the same initial conditions, while \(\mu_{k}^{\rm E}(y)\) is the exact solution given by Eq.(B.5). From these figures we can see that the maximal errors occur in the region near \(y=0\), but the upper bound is no larger than \(0.15\%\)
at any given \(y\), including the region near \(y\simeq 0\).
It is interesting to note that this analytical approximate solution is only up to the first-order approximation of the UAA method. With higher-order approximations, the relative errors are expected to be even smaller.
To check our numerical solutions, in Fig. 4, we also plot the relative differences \(\epsilon(y)\) between \(\mu_{k}^{\rm N}(y)\) and \(\mu_{k}^{\rm E}(y)\), defined by
\[\epsilon(y) \equiv \left|\frac{\left|\mu_{k}^{N}(y)\right|-\left|\mu_{k}^{\rm E}(y) \right|}{\mu_{k}^{\rm E}(y)}\right|. \tag{30}\]
From these figures it can be seen that \(\epsilon(y)\) is no larger than \(10^{-7}\), and our numerical code is well tested and justified.
It is also interesting to note that the mode functions are oscillating for \(y\lesssim-10\), and these fine features are captured in all three mode functions, although there are some differences in the details. Again, as shown by their relative variations, these differences are very small. In addition, we also consider other choices of \(\beta\) and \(k\), and find that they all have similar properties, as long as the condition \(k^{2}\gg\beta^{2}\) holds.
### \(\beta^{2}\simeq k^{2}\)
In this case, depending on whether \(k\gtrsim\beta\) or \(k\lesssim\beta\), the function \(g(y)\) has different properties, as shown in Fig. 2. Therefore, in the following subsections let us consider them separately.
#### iii.2.1 \(k\gtrsim\beta\)
When \(k\gtrsim\beta\) the function \(g(y)\) is always non-positive for \(y\in(-\infty,\infty)\). Then, from Eqs.(C.3) and (3.15) we find that
\[\mathscr{T}(y)\simeq\frac{q_{0}^{2}-1/4}{2\beta}\ln\left(\frac{2}{\epsilon} \right)+\frac{9}{48k}+\mathcal{O}\left(\epsilon\right), \tag{3.21}\]
as \(y\to\infty\), but now with \(\epsilon\equiv(k-\beta)/k\). Thus, to have the error control function be finite at \(y=\infty\), now we must set
\[q_{0}^{2}=\frac{1}{4}, \tag{3.22}\]
instead of the value given by Eq.(3.18) for the case \(k^{2}\gg\beta^{2}\). In Fig. 5, we plot the quantities \(|q/g|\), \(|q(y-y_{1})/g|\), \(|q(y-y_{1})(y-y_{2})/g|\), and the error control function \(\mathcal{T}\) for \(k=5.0\), \(\beta=4.9\) and \(q_{0}=1/2\), for which we have \(y_{1}=y_{2}^{*}=0.200335i\). From these figures we can see clearly that the conditions (2.14) - (2.16) are well satisfied, and the error control function remains small all the time. Then, the corresponding quantities \(\mu_{k}(y)\), \(\mu_{k}^{\rm N}(y)\), \(\mu_{k}^{\rm E}(y)\), \(\delta^{\rm N}(y)\), \(\delta^{\rm E}(y)\) and \(\epsilon(y)\) are plotted in Fig. 6. From the curves of \(\delta^{\rm N}(y)\) and \(\delta^{\rm E}(y)\) we can see that now the errors of the first-order UAA solution are \(\leq 4\%\), which are larger than those of the last subcase. This is mainly because of the fast oscillations of the solution in the region \(y<0\). Therefore, in order to obtain solutions with high precision, high-order approximations for this case are needed. However, we would like to note that our numerical solution still matches the exact one very well, as shown by the curve of \(\epsilon(y)\), which is no larger than \(6.0\times 10^{-6}\).
#### iii.2.2 \(k\lesssim\beta\)
In this case, we find that
\[\zeta_{0}^{2}=\frac{2}{\pi}\left|\int_{y_{1}}^{y_{2}}\sqrt{g(y)}\;dy\right|=2\left| k-\beta\right|. \tag{3.23}\]
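This closed form is easy to check by direct quadrature (a sketch; the parameter pairs are the illustrative ones used in Figs. 7 and 9 below):

```python
# Check of Eq.(3.23): (2/pi) * int_{y1}^{y2} sqrt(g) dy = 2 |k - beta|.
import numpy as np
from scipy.integrate import quad

for k, beta in [(5.0, 5.1), (0.6, 4.0)]:
    y2 = np.arccosh(beta / k)          # turning points y_1 = -y_2, cf. Eq.(3.5)
    val, _ = quad(lambda y: np.sqrt(beta**2 / np.cosh(y)**2 - k**2), -y2, y2)
    print(2.0 * val / np.pi, 2.0 * abs(k - beta))   # the two columns agree
```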
On the other hand, from Eqs.(2.30), (C.3) and (C.6) we find that
\[\mathscr{T}(y)\simeq\begin{cases}\frac{\zeta(0)(6\zeta_{0}^{2}-\zeta^{2}(0))}{12\zeta_{0}^{2}\left(\zeta_{0}^{2}-\zeta^{2}(0)\right)^{3/2}},&y\to 0,\\ \frac{\pi\left(q_{0}^{2}-1/4\right)}{2\beta},&y\to y_{2},\end{cases} \tag{3.24}\]
where \(\zeta(0)\equiv\left.\zeta(y)\right|_{y=0}<\zeta_{0}\). Note that in calculating the error control function near the turning point \(y\simeq y_{2}\), we
have used the relation
\[\frac{\beta}{k^{2}\sqrt{\beta^{2}-k^{2}}}\left(\beta^{2}x^{2}-k^{2}\right)^{3/2} \simeq\frac{1}{\zeta_{0}}(\zeta_{0}^{2}-\zeta^{2})^{3/2}, \tag{3.25}\]
so that the divergence of the second term of \(\mathscr{T}_{2}\) cancels exactly with that of \(\mathscr{T}_{3}\). Eq.(3.25) can be obtained directly from the relation \(\sqrt{g}dy=\sqrt{\zeta_{0}^{2}-\zeta^{2}}d\zeta\) for the case \(g\geq 0\). Similarly, it can be shown that
\[\mathscr{T}(y)\simeq\frac{q_{0}^{2}-1/4}{2\beta}\ln\left(\frac{2}{\epsilon} \right),\quad y\to\infty. \tag{3.26}\]
It is clear that to minimize the errors, in the present case \(q_{0}^{2}\) must also be chosen to be
\[q_{0}^{2}=\frac{1}{4}, \tag{3.27}\]
as that given by Eq.(3.22). In Fig. 7, we plot the quantities \(|q/g|\), \(|q(y-y_{1})/g|\), \(|q(y-y_{1})(y-y_{2})/g|\), and the error control function \(\mathcal{T}\) for \(k=5.0\), \(\beta=5.1\) and \(q_{0}=1/2\), for which we have \(y_{1}=-y_{2}=-0.199668\). It is clear that in this case the two turning points are very close, and the conditions \(|q/g|\ll 1\) and \(|q(y-y_{1})/g|\ll 1\) are violated near these points. But, the condition \(|q(y-y_{1})(y-y_{2})/g|\ll 1\) holds near them. So, the conditions (2.14) - (2.16) are also satisfied, and the error control function remains small all the time.
Then, the corresponding quantities \(\mu_{k}(y)\), \(\mu_{k}^{\rm N}(y)\), \(\mu_{k}^{\rm E}(y)\), \(\delta^{\rm N}(y)\), \(\delta^{\rm E}(y)\) and \(\epsilon(y)\) are plotted in Fig. 8. From the curves of \(\delta^{\rm N}(y)\) and \(\delta^{\rm E}(y)\) we can see that now the errors of the first-order UAA solution are \(\lesssim 10\%\). Similar to the last subcase, this is mainly because of the fast oscillations of the solution in the region \(y<0\). Therefore, in order to obtain high precision, high-order approximations for this case are needed, too. In addition, our numerical solution still matches the exact one well, as shown by the curve of \(\epsilon(y)\), which is no larger than \(2.0\times 10^{-6}\).
### \(\beta^{2}\gg k^{2}\)
In this case, two real turning points appear, given, respectively, by
\[y_{1}=-y_{2}=-\cosh^{-1}\left(\frac{\beta}{k}\right). \tag{3.28}\]
Then, we find that Eqs.(3.23) and (3.24) still hold in the current case, while Eq.(3.26) is replaced by
\[\mathscr{T}(y)\to\frac{q_{0}^{2}-1/4}{2\beta}\ln\left(\frac{1+\epsilon}{1- \epsilon}\right)+\frac{4+\epsilon-5\epsilon^{2}}{24k(1-\epsilon^{2})}, \tag{3.29}\]
as \(y\to\infty\), but now with \(\epsilon\equiv k/\beta\). Combining Eqs.(3.23), (3.24) and (3.29), we find that the proper choice of \(q_{0}\) is still \(q_{0}=1/2\), as in the last two subcases.
In Fig. 9, we plot the quantities \(|q/g|\), \(|q(y-y_{1})/g|\), \(|q(y-y_{1})(y-y_{2})/g|\), and the error control function \(\mathcal{T}\) for \(k=0.6\), \(\beta=4.0\) and \(q_{0}=1/2\), for which we have \(y_{1}=-y_{2}\simeq-2.58459\). From this figure we can see that the preconditions (2.14)-(2.16) are well satisfied. Then, to the first-order approximation of the UAA method, the solution can be approximated by Eq.(2.28), where \(\zeta_{0}^{2}\) is given by Eq.(3.23), \(\alpha_{k}\) and \(\beta_{k}\) are two integration constants, and \(\epsilon_{1}\) and \(\epsilon_{2}\) the errors of the corresponding approximate solutions, whose upper bounds are given by Eqs.(A.1) and (A.2) in Appendix A.
In Fig. 10 (a), we plot our first-order approximate solution, while in Fig. 10 (b), to compare the approximate solution with the exact one, we plot both of them. In particular, the solid line represents the exact solution, while the red dotted line represents the approximate solution. From this figure it can be seen that, except at the minimal points, the two solutions match well. However, at these extreme
minimal points, they deviate significantly from each other. The causes of such errors are not clear, and we hope to come back to this issue on another occasion.
Finally, similar to all other cases, our numerical solution still matches the exact one well, as shown by the curve of \(\epsilon(y)\), which is no larger than \(8.0\times 10^{-6}\).
## IV Conclusions
In this paper, we have applied the UAA method to the mode function \(\mu_{k}\) with a PT potential, which satisfies the second-order differential equation
\[\frac{d^{2}\mu_{k}(y)}{dy^{2}}+\left(k^{2}-\frac{\beta_{0}^{2}}{\cosh^{2}y}\right)\mu_{k}(y)=0, \tag{4.1}\]
where \(k\) and \(\beta_{0}\) are real constants. In this case, the exact solution is known and given by Eq.(B.5). The implementation of the UAA method includes the introduction of an auxiliary function \(q(y)\), which is taken as
\[q(y)=\frac{q_{0}^{2}}{\cosh^{2}y}, \tag{4.2}\]
where \(q_{0}\) is a free parameter. Then, we carry out the integration of the error control function, defined by
\[\mathscr{T}(\zeta) \equiv -\int\frac{\psi(\zeta)}{\left|\dot{y}^{2}g\right|^{1/2}}d\zeta, \tag{4.3}\]
where
\[\psi(\zeta) \equiv \dot{y}^{2}q+\dot{y}^{1/2}\frac{d^{2}}{d\zeta^{2}}\left(\dot{y}^{-1/2}\right),\] \[\dot{y}^{2}g = \zeta_{0}^{2}-\zeta^{2}. \tag{4.4}\]
Clearly, the error control function \(\mathscr{T}(\zeta)\) will depend on \(q_{0}\). After working out the details, we find that it is convenient to distinguish the three cases: A) \(k^{2}\gg\beta^{2}\), B) \(k^{2}\simeq\beta^{2}\), and C) \(k^{2}\ll\beta^{2}\), where \(\beta^{2}\equiv\beta_{0}^{2}-q_{0}^{2}>0\). In particular, in Case A), a proper choice of \(q_{0}\) is \(q_{0}=1/\sqrt{24}\), while in Cases B) and C), it is \(q_{0}=1/2\).
Once \(q_{0}\) is fixed, the analytical approximate solutions are uniquely determined by the linear combination of the two parabolic cylinder functions \(W(\zeta_{0}^{2}/2,\pm\sqrt{2}\zeta)\), as shown by Eq.(2.28). In particular, in Case A) the upper bounds of errors are \(\lesssim 0.15\%\), as shown in Fig. 4. In Case B), two subcases are considered, one with \(k\gtrsim\beta\)
Figure 10: Plots of the mode functions \(\mu_{k}(y),\ \mu_{k}^{N}(y),\mu_{k}^{E}(y)\) and their relative differences \(\delta^{N}(y),\ \delta^{E}(y)\) and \(\epsilon(y)\) for \(k=0.6\), \(\beta=4.0\) and \(q_{0}=1/2\), for which we have \(y_{1}=-y_{2}\simeq-2.58459\).
and the other with \(k\lesssim\beta\). In the first case, the upper bounds of errors are \(\lesssim 4\%\), while in the second case they are \(\lesssim 10\%\), as shown, respectively, by Figs. 6 and 8. In Case C), the approximate solutions also trace the exact one very well, except at the minimal points, as shown in Fig. 10. This might be caused by the fact that at these points the mode function \(\mu_{k}\) is almost zero, so that very small non-zero values will cause significant deviations. We are still working on this case, and hope to come back to this point on another occasion.
As mentioned in the Introduction, the potentials of the mode functions in both dressed metric and hybrid approaches can be well modeled by PT potentials. Therefore, the current analysis on the choice of the function \(q(y)\) and the minimization of the error control function shall shed great light on how to carry out similar analyses in order to obtain more accurate approximate solutions in these models. We have been working on this recently, and wish to report our results soon on another occasion.
In addition, the differential equations for the quasi-normal modes of black holes usually also take the form of Eq.(2.1) with potentials that have no singularities 3, but normally do have turning points [70; 71]. For example, the effective potential for the axial perturbations of the Schwarzschild black hole is given by
Footnote 3: Recall the inner boundaries of black hole perturbations are the horizons, at which the potentials are usually finite and non-singular.
\[\mathscr{V}(r)=\frac{r-2m}{r^{4}}\Big{\{}l(l+1)r-6m\Big{\}}, \tag{4.5}\]
where \(m\) is the mass of the black hole and \(l\) the multipole number. Clearly, for \(r\geq 2m\), this potential has no poles, but in general \(f(r)\equiv\mathscr{V}(r)-\omega^{2}\), where \(\omega\) denotes the quasi-normal mode frequency, has two turning points. From [70; 71], it can be seen that the properties of this potential are shared by many other cases, including those from modified theories of gravity. Thus, one can equally apply the analysis presented here to the studies of quasi-normal modes of black holes.
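For instance, the two turning points of \(f(r)\) are easy to locate numerically, as in the sketch below; the values \(m=1\), \(l=2\) and the real frequency \(\omega\) are purely illustrative, and \(r\simeq 3.28m\) is used only as a rough bracket near the potential peak:

```python
# Turning points of f(r) = V(r) - omega^2 for the potential (4.5).
import numpy as np
from scipy.optimize import brentq

m, l, omega = 1.0, 2.0, 0.3        # illustrative; omega^2 below the peak of V

def f(r):
    V = (r - 2.0*m) / r**4 * (l*(l + 1.0)*r - 6.0*m)
    return V - omega**2

r_peak = 3.28 * m                          # rough location of the maximum of V
r1 = brentq(f, 2.0*m + 1e-9, r_peak)       # inner turning point, outside the horizon
r2 = brentq(f, r_peak, 100.0*m)            # outer turning point
print(r1, r2)
```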
## Acknowledgements
RP and AW were partially supported by the US National Science Foundation (NSF) with the Grant No. PHY2308845, and JJM and JS were supported through the Baylor Physics graduate program. BFL and TZ were supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201503, the National Natural Science Foundation of China under Grant Nos. 11975203, 11675143, 12205254, 12275238, and 12005186, the Zhejiang Provincial Natural Science Foundation of China under Grant Nos. LR21A050001 and LY20A050002, and the Fundamental Research Funds for the Provincial Universities of Zhejiang in China under Grant No. RF-A2019015.
## Appendix A Upper Bounds of Errors
The upper bounds of the errors \(\epsilon_{1}\) and \(\epsilon_{2}\) appearing in Eq.(2.27) are given by
\[\frac{|\epsilon_{1}|}{M\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)},\ \frac{|\partial\epsilon_{1}/\partial\zeta|}{\sqrt{2\lambda}\,N\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)} \leq\frac{\kappa}{\lambda E\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)}\Bigg{\{}\exp\left(\lambda\mathscr{V}_{\zeta,\zeta_{2}}(\mathscr{T})\right)-1\Bigg{\}}, \tag{A.1}\] \[\frac{|\epsilon_{2}|}{M\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)},\ \frac{|\partial\epsilon_{2}/\partial\zeta|}{\sqrt{2\lambda}\,N\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)} \leq\frac{\kappa E\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)}{\lambda}\Bigg{\{}\exp\left(\lambda\mathscr{V}_{0,\zeta}(\mathscr{T})\right)-1\Bigg{\}}, \tag{A.2}\]
where \(M\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)\), \(N\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)\), and \(E\left(\frac{1}{2}\lambda\zeta_{0}^{2},\sqrt{2\lambda}\zeta\right)\) are auxiliary functions of the parabolic cylinder functions defined explicitly in [44], and 4
Footnote 4: This corresponds to choosing the function \(\Omega(x)\) introduced by Olver in [44] as \(\Omega(x)=\sqrt{\left|x^{2}-\zeta_{0}^{2}\right|}\), which satisfies the requirement \(\Omega(x)=\mathcal{O}(x)\), as \(x\rightarrow\pm\infty\). For more details, see [44].
\[\mathscr{V}_{\zeta_{1},\zeta_{2}}(\mathscr{T})\equiv\int_{\zeta_{1}}^{\zeta_{2}}\frac{|\psi(\zeta)|}{\sqrt{|\zeta^{2}-\zeta_{0}^{2}|}}d\zeta, \tag{A.3}\]
is _the associated error control function_.
## Appendix B Exact Solutions with the Poschl-Teller Potential
Let us consider the case with the Poschl-Teller Potential given by
\[\left(\lambda^{2}g+q\right)=-\left(k^{2}-\frac{\beta_{0}^{2}}{\cosh^{2}y}\right). \tag{B.1}\]
Then, introducing the two new variables \(x\) and \(\mathcal{Y}\) via the relations
\[x=\frac{1}{1+e^{-2y}},\quad\mathcal{Y}(x)=[x(1-x)]^{ik/2}\mu_{k}, \tag{B.2}\]
we find that Eq.(2.1) with the above PT potential reads
\[x(1-x)\frac{d^{2}\mathcal{Y}}{dx^{2}}+\left[a_{3}-\left(a_{1}+a_{2}+1 \right)x\right]\frac{d\mathcal{Y}}{dx}-a_{1}a_{2}\mathcal{Y}=0,\] (B.3)
where
\[a_{1} = \frac{1}{2}(1+\sqrt{1-4\beta_{0}^{2}})-ik,\] \[a_{2} = \frac{1}{2}(1-\sqrt{1-4\beta_{0}^{2}})-ik,\] \[a_{3} = 1-ik.\] (B.4)
Eq.(B.3) is the standard hypergeometric equation, and has the general solution [67]
\[\mu_{k}^{\rm E}(y) = a_{k}\left(\frac{x}{1-x}\right)^{ik/2}\] (B.5) \[\times\ _{2}F_{1}(a_{1}-a_{3}+1,a_{2}-a_{3}+1,2-a_{3},x)\] \[+\frac{b_{k}}{[x(1-x)]^{ik/2}}\ _{2}F_{1}(a_{1},a_{2},a_{3},x).\]
Here \({}_{2}F_{1}(a_{1},a_{2},a_{3},x)\) denotes the hypergeometric function, and \(a_{k}\) and \(b_{k}\) are two independent integration constants, and are uniquely determined by the initial conditions.
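As a consistency check (a sketch, with arbitrary illustrative constants \(a_{k}\), \(b_{k}\) and parameter values), one can verify numerically that (B.5) solves Eq.(B.1) by evaluating the residual of the differential equation with mpmath:

```python
# Residual check that (B.5) solves mu'' + (k^2 - beta_0^2 sech^2 y) mu = 0.
import mpmath as mp

k, beta0 = mp.mpf(2), mp.mpf(3)
s = mp.sqrt(1 - 4*beta0**2)          # purely imaginary for beta_0 > 1/2
a1 = (1 + s)/2 - 1j*k
a2 = (1 - s)/2 - 1j*k
a3 = 1 - 1j*k

def mu_exact(y, ak=1.0, bk=0.5):     # Eq.(B.5), with x = 1/(1 + e^{-2y})
    x = 1/(1 + mp.exp(-2*y))
    t1 = ak * (x/(1 - x))**(1j*k/2) * mp.hyp2f1(a1 - a3 + 1, a2 - a3 + 1, 2 - a3, x)
    t2 = bk * (x*(1 - x))**(-1j*k/2) * mp.hyp2f1(a1, a2, a3, x)
    return t1 + t2

for y0 in (-1.5, 0.3, 2.0):
    res = mp.diff(mu_exact, y0, 2) + (k**2 - beta0**2/mp.cosh(y0)**2) * mu_exact(y0)
    print(y0, abs(res))              # residuals are numerically tiny
```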
## Appendix C Computing the error control function
In this appendix, we collect some useful formulae for working out the error control function explicitly. Considering the particular form of the PT potential, it is easier to compute the error control function by using the new variable \(x={\rm sech}(y)\), thus
\[dy=-\frac{\epsilon_{y}\ dx}{x\sqrt{1-x^{2}}},\] (C.1)
where \(\epsilon_{y}\) denotes the sign of \(y\). In terms of the new variable,
\[q=q_{0}^{2}x^{2},\qquad g=\beta^{2}x^{2}-k^{2}.\] (C.2)
To calculate the error control function explicitly, let us consider the cases \(g<0\) and \(g>0\) separately.
### \(g<0\)
In this case, the error control function is defined by Eq.(2.29), which can be written as
\[\mathscr{T}(\zeta)=\mathscr{T}_{1}(\zeta)+\mathscr{T}_{2}(\zeta)+\mathscr{T}_ {3}(\zeta),\] (C.3)
where
\[\mathscr{T}_{1} \equiv \int\frac{q}{\sqrt{-g}}dy=-q_{0}^{2}\epsilon_{y}\int\frac{xdx}{\sqrt{1-x^{2}}\sqrt{k^{2}-\beta^{2}x^{2}}}=\frac{q_{0}^{2}\epsilon_{y}}{\beta}\ln\left(\frac{\sqrt{1-x^{2}}\beta+\sqrt{k^{2}-\beta^{2}x^{2}}}{\sqrt{|k^{2}-\beta^{2}|}}\right),\] \[\mathscr{T}_{2} \equiv \int\left(\frac{5g^{\prime 2}}{16g^{3}}-\frac{g^{\prime\prime}}{4g^{2}}\right)\sqrt{-g}dy=\epsilon_{y}\int dx\left(\frac{5\beta^{4}(x^{3}-x^{5})}{4\sqrt{1-x^{2}}(k^{2}-\beta^{2}x^{2})^{5/2}}+\frac{\beta^{2}(2x-3x^{3})}{2\sqrt{1-x^{2}}(k^{2}-\beta^{2}x^{2})^{3/2}}\right)\] \[= \epsilon_{y}\Bigg{\{}-\frac{1}{4\beta}\ln\left(\frac{\sqrt{1-x^{2}}\beta+\sqrt{k^{2}-x^{2}\beta^{2}}}{\sqrt{|k^{2}-\beta^{2}|}}\right)+\frac{\sqrt{1-x^{2}}A}{12(k^{2}-\beta^{2})(k^{2}-\beta^{2}x^{2})^{3/2}}\Bigg{\}},\] \[\mathscr{T}_{3} \equiv \int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(v^{2}-\zeta_{0}^{2})^{5/2}}+\frac{3}{4(v^{2}-\zeta_{0}^{2})^{3/2}}\right\}dv=\frac{\zeta\left(\zeta^{2}-6\zeta_{0}^{2}\right)}{12\zeta_{0}^{2}(\zeta^{2}-\zeta_{0}^{2})^{3/2}},\] (C.4)
where
\[A(x)\equiv 3k^{4}+2k^{2}\beta^{2}\left(x^{2}-1\right)-3x^{2}\beta^{4}.\] (C.5)
### \(g>0\)
In this case, the error control function is defined by Eq.(2.30), which can be also written as Eq.(C.3), but now with
\[\mathscr{T}_{1} \equiv \int\frac{q}{\sqrt{g}}dy=\epsilon_{y}\frac{q_{0}^{2}}{\beta}\arcsin\left(\frac{\beta\sqrt{1-x^{2}}}{\sqrt{\beta^{2}-k^{2}}}\right),\] \[\mathscr{T}_{2} \equiv \int\left(-\frac{5g^{\prime 2}}{16g^{3}}+\frac{g^{\prime\prime}}{4g^{2}}\right)\sqrt{g}dy=\epsilon_{y}\int dx\left(\frac{5\beta^{4}(x^{3}-x^{5})}{4\sqrt{1-x^{2}}(\beta^{2}x^{2}-k^{2})^{5/2}}-\frac{\beta^{2}(2x-3x^{3})}{2\sqrt{1-x^{2}}(\beta^{2}x^{2}-k^{2})^{3/2}}\right)\] (C.6) \[= \epsilon_{y}\Bigg{\{}-\frac{1}{4\beta}\arcsin\left(\frac{\sqrt{1-x^{2}}\beta}{\sqrt{\beta^{2}-k^{2}}}\right)+\frac{\sqrt{1-x^{2}}A}{12(\beta^{2}-k^{2})(\beta^{2}x^{2}-k^{2})^{3/2}}\Bigg{\}},\] \[\mathscr{T}_{3} \equiv \int^{\zeta}\left\{\frac{5\zeta_{0}^{2}}{4(\zeta_{0}^{2}-v^{2})^{5/2}}-\frac{3}{4(\zeta_{0}^{2}-v^{2})^{3/2}}\right\}dv=\frac{\zeta\left(6\zeta_{0}^{2}-\zeta^{2}\right)}{12\zeta_{0}^{2}(\zeta_{0}^{2}-\zeta^{2})^{3/2}}.\]
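The antiderivatives above, in particular the \(\mathscr{T}_{3}\) expressions of (C.4) and (C.6), can be verified symbolically, e.g. with the following sketch:

```python
# Differentiate the T_3 antiderivatives of (C.4) and (C.6); both must return
# the corresponding integrands.
import sympy as sp

z, z0 = sp.symbols('zeta zeta_0', positive=True)

T3_neg = z*(z**2 - 6*z0**2) / (12*z0**2*(z**2 - z0**2)**sp.Rational(3, 2))   # g < 0 case
T3_pos = z*(6*z0**2 - z**2) / (12*z0**2*(z0**2 - z**2)**sp.Rational(3, 2))   # g > 0 case

int_neg = 5*z0**2/(4*(z**2 - z0**2)**sp.Rational(5, 2)) + 3/(4*(z**2 - z0**2)**sp.Rational(3, 2))
int_pos = 5*z0**2/(4*(z0**2 - z**2)**sp.Rational(5, 2)) - 3/(4*(z0**2 - z**2)**sp.Rational(3, 2))

print(sp.simplify(sp.diff(T3_neg, z) - int_neg))   # expected: 0
print(sp.simplify(sp.diff(T3_pos, z) - int_pos))   # expected: 0
```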
|
2309.17352 | Improving Audio Captioning Models with Fine-grained Audio Features, Text
Embedding Supervision, and LLM Mix-up Augmentation | Automated audio captioning (AAC) aims to generate informative descriptions
for various sounds from nature and/or human activities. In recent years, AAC
has quickly attracted research interest, with state-of-the-art systems now
relying on a sequence-to-sequence (seq2seq) backbone powered by strong models
such as Transformers. Following the macro-trend of applied machine learning
research, in this work, we strive to improve the performance of seq2seq AAC
models by extensively leveraging pretrained models and large language models
(LLMs). Specifically, we utilize BEATs to extract fine-grained audio features.
Then, we employ Instructor LLM to fetch text embeddings of captions, and infuse
their language-modality knowledge into BEATs audio features via an auxiliary
InfoNCE loss function. Moreover, we propose a novel data augmentation method
that uses ChatGPT to produce caption mix-ups (i.e., grammatical and compact
combinations of two captions) which, together with the corresponding audio
mixtures, increase not only the amount but also the complexity and diversity of
training data. During inference, we propose to employ nucleus sampling and a
hybrid reranking algorithm, which has not been explored in AAC research.
Combining our efforts, our model achieves a new state-of-the-art 32.6 SPIDEr-FL
score on the Clotho evaluation split, and wins the 2023 DCASE AAC challenge. | Shih-Lun Wu, Xuankai Chang, Gordon Wichern, Jee-weon Jung, François Germain, Jonathan Le Roux, Shinji Watanabe | 2023-09-29T15:57:46Z | http://arxiv.org/abs/2309.17352v2 | Improving Audio Captioning Models with Fine-Grained Audio Features, Text Embedding Supervision, and LLM Mix-Up Augmentation
###### Abstract
Automated audio captioning (AAC) aims to generate informative descriptions for various sounds from nature and/or human activities. In recent years, AAC has quickly attracted research interest, with state-of-the-art systems now relying on a sequence-to-sequence (seq2seq) backbone powered by strong models such as Transformers. Following the macro-trend of applied machine learning research, in this work, we strive to improve the performance of seq2seq AAC models by extensively leveraging pretrained models and large language models (LLMs). Specifically, we utilize BEATS to extract fine-grained audio features. Then, we employ Instructor LLM to fetch text embeddings of captions, and infuse their language-modality knowledge into BEATS audio features via an auxiliary InfoNCE loss function. Moreover, we propose a novel data augmentation method that uses ChatGPT to produce caption mix-ups (i.e., grammatical and compact combinations of two captions) which, together with the corresponding audio mixtures, increase not only the amount but also the complexity and diversity of training data. During inference, we propose to employ nucleus sampling and a hybrid reranking algorithm, which has not been explored in AAC research. Combining our efforts, our model achieves a new state-of-the-art 32.6 SPIDEr-FL score on the Clotho evaluation split, and wins the 2023 DCASE AAC challenge.
Shih-Lun Wu\({}^{1}\), Xuankai Chang\({}^{1}\), Gordon Wichern\({}^{2}\), Jee-weon Jung\({}^{1}\),
Francois Germain\({}^{2}\), Jonathan Le Roux\({}^{2}\), Shinji Watanabe\({}^{1}\)+\({}^{1}\) Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA, USA
\({}^{2}\) Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA
Footnote †: This work used _Bridges2-PSC_ and _Delta-NCSA_ through allocation CIS210014 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, supported by NSF grants #2138259, #2138286, #2138307, #2137603, and #2138296.
AAC, BEATs, LLM, mix-up, InfoNCE
## 1 Introduction
Automated audio captioning (AAC) is a multimodal task whose goal is to describe an input audio clip using text. The descriptions are not restricted to a fixed set of class labels or tags, but are free-form sentences [1], which allow better flexibility and expressivity. Research progress on AAC has accelerated in recent years thanks to the yearly DCASE challenges, the impressive performance of Transformer-based language models, and the release of the open audio captioning datasets Clotho [2] and AudioCaps [3]. Recent leading works in AAC [4, 5, 6, 7, 8, 9] all used the sequence-to-sequence (seq2seq) modeling framework, where an audio encoder is used to extract features from the input audio, and a text decoder learns to generate the caption autoregressively based on the extracted audio features.
While our work is also seq2seq-based, we extensively leverage machine learning models that are pretrained on large-scale datasets to improve AAC performance from multiple aspects, namely, audio feature extraction, auxiliary training objective, and data augmentation. We begin by revamping the audio encoder (Section 2.1). While PANN [10], a convolution-based audio feature extractor, has long been the tried-and-true choice for AAC research [4, 5, 6], many recently proposed audio encoders [11, 12, 13] have the potential to further improve AAC performance due to more advanced architectures, pretraining objectives, and finer-grained output features. Specifically, we choose Bidirectional Encoder representation from Audio Transformers (BEATs) [13], a state-of-the-art multi-label audio tagging model pretrained on AudioSet [14], as our audio encoder.
Next, witnessing the tremendous success of large language models (LLMs) in representing and generating text [15, 16], we use text embeddings from the Instructor Transformer [17] to provide additional supervision (Section 2.2), and employ ChatGPT [18] to perform a novel mix-up data augmentation (Section 2.3). In previous AAC studies, to help link audio features with concepts in text, which is the output space of AAC tasks, [5] pretrained the PANN encoder with an audio-caption InfoNCE [19] contrastive loss, while [6] used multitask learning to predict keywords in the caption. In our work, we combine the benefits of both representation learning and multitask training, and also leverage the LLM's rich knowledge of text. Particularly, we use Instructor to obtain text embeddings for ground-truth captions, and apply an auxiliary InfoNCE loss on a Conformer [20, 21] postencoder to refine/summarize the BEATs audio features and align them with Instructor text embeddings.
For data augmentation, other than SpecAugment [22] used commonly in audio-related tasks, researchers have leveraged the original mix-up [23] to linearly combine audio/text embeddings from two unrelated samples [8, 24], synonym substitution on ground-truth captions [7], and caption concatenation [7]. As for LLM-based efforts, ChatGPT has been used to compile the large-scale WaveCaps [7, 8, 25] AAC dataset by rewriting tags or fragmented descriptions, which are often associated with audio files on the web, to coherent sentences. Integrating the ideas of mix-up and LLM augmentation methods, we prompt ChatGPT to mix-up the captions of two audio clips, which produces more natural combined captions than simple text concatenation [7]. The text mix-ups, when paired with audio mix-ups (i.e., summations of waveforms), increase the amount, complexity, and diversity of our training data. With all the techniques above, we also discover that nucleus sampling decoding [26] followed by hybrid reranking (Section 2.4) leads to a further performance boost.
Our AAC model attains a state-of-the-art SPIDEr-FL score of 32.6 (on Clotho V2 [2] evaluation set) and is the winner of the 2023 DCASE AAC Challenge.1 Despite the numerous components introduced to our model, we show in our ablation study (Section 3.4) that
every component is indispensable to its great performance. We plan to release our codebase and the ChatGPT-generated caption mix-ups upon paper publication.
## 2 Method
### Network Architecture and Main Loss Function
We utilize BEATs [13] as our main audio encoder. The BEATs module takes a 16 kHz audio waveform as input, converts the waveform into a mel spectrogram with a 10-millisecond hop size, splits the spectrogram into 2D patches, and transforms the patches into a sequence of representations through 12 self-attention (i.e., Transformer) layers. Compared to PANN [10], which has been popular in the AAC literature, BEATs differs in some key aspects:
* **Architecture:** BEATs features a Transformer backbone, while PANN is based on a convolutional neural network (CNN).
* **Pretraining objectives:** While both BEATs and PANN are pre-trained on AudioSet [14], a general-domain, large-scale audio dataset, BEATs is first trained on masked language modeling [28] of tokenized audio, and then on multilabel audio classification. PANN is only trained on the latter.
* **Resolution:** BEATs provides more fine-grained outputs at 50 Hz, compared to PANN, which has 1-Hz outputs.
Due to these differences, and the better performance of BEATs on AudioSet multilabel classification (50.6% vs. 43.9% mean average precision), BEATs would likely be a more suitable selection for the AAC task. In our pilot experiments, we tried either to finetune the BEATs module or to keep it frozen. Both options led to similar SPIDEr-FL scores, so we simply freeze BEATs to reduce computation and memory footprint.
Given that the BEATs module is frozen, to enable further training on the audio features (more details in Section 2.2), we attach a convolutional downsampling layer, followed by a 2-layer2 conformer [20] postencoder on top of the BEATs module. These additional layers further contextualize the audio features, and reduce the text decoder's workload for summarizing the audio features.
Footnote 2: The number of layers is determined by hyperparameter search.
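As a concrete illustration of the stack just described, here is a minimal PyTorch sketch of the postencoder: a 3x Conv1D downsampler followed by a 2-layer Conformer on top of frozen 768-d BEATs features. The use of torchaudio's Conformer and all hyperparameter values (heads, FFN width, kernel size) are our illustrative assumptions, not the authors' released code.

```python
# Sketch of the postencoder stack (our assumptions: 768-d BEATs features,
# torchaudio's Conformer, and the listed hyperparameters).
import torch
import torch.nn as nn
from torchaudio.models import Conformer

class PostEncoder(nn.Module):
    def __init__(self, feat_dim=768, ds_rate=3):
        super().__init__()
        # Temporal downsampling: shortens the 50 Hz BEATs sequence by ds_rate.
        self.down = nn.Conv1d(feat_dim, feat_dim,
                              kernel_size=ds_rate, stride=ds_rate)
        self.conformer = Conformer(input_dim=feat_dim, num_heads=8,
                                   ffn_dim=1024, num_layers=2,
                                   depthwise_conv_kernel_size=31)

    def forward(self, beats_feats):                  # (batch, time, feat_dim)
        x = self.down(beats_feats.transpose(1, 2)).transpose(1, 2)
        lengths = torch.full((x.size(0),), x.size(1))
        x, _ = self.conformer(x, lengths)
        return x                                     # contextualized features

feats = torch.randn(4, 500, 768)   # 10 s of 50 Hz BEATs features
print(PostEncoder()(feats).shape)  # torch.Size([4, 166, 768])
```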
Following the recent trend of AAC models [4, 5], we adopt a 6-layer BART Transformer decoder [27] to generate captions. We use the default BART text tokenizer with a 50K vocabulary size, and train BART's weights from scratch. The BART decoder cross-attends to the Conformer's output representations and self-attends to the historical caption tokens to generate the next caption token autoregressively. The main loss function for our BEATs-BART captioning model, applied on BART's output distributions, is the negative log-likelihood (NLL) of audio captions, i.e.,
\[\mathcal{L}_{\mathrm{NLL}}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\in\mathcal{D}_{\text{train}}}\Big[\sum_{n=1}^{|\mathbf{y}|}-\log p(y_{n}\,|\,\mathbf{y}_{1:n-1};\mathbf{x})\Big]\,, \tag{1}\]
where \(\mathcal{D}_{\text{train}}\) is the training dataset, \(\mathbf{x}\) is the input audio waveform, \(\mathbf{y}\) is an audio caption, and \(y_{n}\) is the \(n^{\text{th}}\) token in the caption. A schematic overview of our captioning model is depicted in Fig. 1.
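As a concrete illustration of Eq. (1), a minimal sketch of the token-level NLL under teacher forcing might look as follows (the padding id and tensor shapes are our assumptions for illustration):

```python
# Minimal sketch of Eq. (1): mean negative log-likelihood of caption tokens.
import torch
import torch.nn.functional as F

def caption_nll(logits: torch.Tensor, targets: torch.Tensor, pad_id: int = 1):
    """logits: (batch, seq_len, vocab) decoder outputs under teacher forcing;
    targets: (batch, seq_len) gold next-token ids."""
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,  # padded positions do not contribute to the loss
    )

logits = torch.randn(2, 7, 50265)            # 50265 = BART vocabulary size
targets = torch.randint(0, 50265, (2, 7))
print(caption_nll(logits, targets))
```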
### Instructor Embedding Supervision
To infuse text-related knowledge into our audio (i.e., encoder) features, we leverage an LLM, Instructor-XL3 Transformer [17], to fetch the text embeddings for ground-truth captions and supervise our encoder stack with them. Instructor is based on a pretrained T5 [29] text encoder, that is then finetuned using InfoNCE loss [19] on a variety of natural language processing (NLP) tasks, such as classification, reranking, summarization, and text quality evaluation, to learn sentence-level text embeddings. Task- and domain-specific instructions are prepended to the input text as conditions, e.g., _Represent the Medicine statement for retrieval:_, hence the name Instructor. In the Massive Text Embedding Benchmark (MTEB) [30], Instructor-XL is the state of the art on text summarization and reranking tasks,4 which are closely related to audio captioning.
Footnote 3: ‘XL’ describes the size of the network—it has 1.5B parameters.
Footnote 4: As of May 2023, when our project was being carried out.
In our use case, we place "_Represent the audio caption:_" as the instruction to the (frozen) Instructor to fetch sentence embeddings from ground-truth captions. On our BEATs-Conformer encoder stack, we perform mean-pooling along the timestep dimension to obtain a single audio embedding for the input waveform. We denote the audio embedding and the Instructor caption embedding by \(\mathbf{a}\) and \(\mathbf{c}\) respectively. An auxiliary InfoNCE loss is computed using in-batch negative samples:
\[\mathrm{sim}(\mathbf{a},\mathbf{c})=\exp\Big(\frac{\mathbf{a}^{\top}\mathbf{c}}{||\mathbf{a}||\,||\mathbf{c}||}\cdot\frac{1}{\tau}\Big), \tag{2}\] \[\mathcal{L}_{\text{InfoNCE\_a}}=\mathbb{E}_{\mathcal{B}\subset\mathcal{D}_{\text{train}}}\Big[\sum_{i=1}^{|\mathcal{B}|}-\log\frac{\mathrm{sim}(\mathbf{a}_{i},\mathbf{c}_{i})}{\sum_{j=1}^{|\mathcal{B}|}\mathrm{sim}(\mathbf{a}_{j},\mathbf{c}_{i})}\Big], \tag{3}\] \[\mathcal{L}_{\text{InfoNCE\_c}}=\mathbb{E}_{\mathcal{B}\subset\mathcal{D}_{\text{train}}}\Big[\sum_{i=1}^{|\mathcal{B}|}-\log\frac{\mathrm{sim}(\mathbf{a}_{i},\mathbf{c}_{i})}{\sum_{j=1}^{|\mathcal{B}|}\mathrm{sim}(\mathbf{a}_{i},\mathbf{c}_{j})}\Big], \tag{4}\] \[\mathcal{L}_{\text{InfoNCE}}=\frac{1}{2}(\mathcal{L}_{\text{InfoNCE\_a}}+\mathcal{L}_{\text{InfoNCE\_c}})\,, \tag{5}\]
Figure 1: Overview of our Transformer-based captioning system. We utilize a frozen BEATs [13] to extract audio features from the mel spectrogram. On top of BEATs, we attach a Conformer [20] postencoder to further contextualize the audio features. Then, a BART [27] text decoder cross-attends to the contextualized audio features and generates the caption autoregressively. To provide text-modality guidance to our encoder stack, we extract the captions' sentence embeddings from an instruction-tuned large language model (LLM), Instructor-XL [17], and apply an InfoNCE [19] auxiliary loss to train Conformer's output audio representation to mimic the corresponding caption's Instructor sentence embedding.

where \(\mathrm{sim}(\cdot,\cdot)\) is the exponentiated temperature-scaled cosine similarity, \(\tau\) is the temperature hyperparameter,5 \(\mathcal{B}\) denotes a sampled mini-batch, and \(i,j\) index samples in the mini-batch. The multitask loss \(\mathcal{L}\) used to train our model can hence be written as:
Footnote 5: Generally speaking, a higher temperature makes the contrastive objective more challenging, as the distribution is made less peaky. We perform a search in \(\tau\) = {0.03, 0.07, 0.2, 0.5, 1.0} and find \(\tau\) = 0.5 works the best.
\[\mathcal{L}=\mathcal{L}_{\mathrm{NLL}}+\alpha\mathcal{L}_{\mathrm{InfoNCE}}\,, \tag{6}\]
where \(\alpha\) is a hyperparameter and we find \(\alpha=1\) works well.
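A self-contained sketch of Eqs. (2)-(5) (our re-implementation, not the authors' code) with in-batch negatives and \(\tau=0.5\):

```python
# Symmetric InfoNCE between pooled audio embeddings `a` and Instructor
# caption embeddings `c`; batch size and dimensions are illustrative.
import torch
import torch.nn.functional as F

def infonce(a, c, tau=0.5):
    a = F.normalize(a, dim=-1)
    c = F.normalize(c, dim=-1)
    sim = a @ c.t() / tau                        # scaled cosine sims, Eq. (2)
    labels = torch.arange(a.size(0), device=a.device)
    loss_a = F.cross_entropy(sim.t(), labels)    # Eq. (3): negatives over audio
    loss_c = F.cross_entropy(sim, labels)        # Eq. (4): negatives over captions
    return 0.5 * (loss_a + loss_c)               # Eq. (5)

a, c = torch.randn(32, 768), torch.randn(32, 768)
print(infonce(a, c))
```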
### ChatGPT Mix-up Augmentation
As a novel data augmentation measure, we employ another LLM, ChatGPT [18], to 'mix-up' [23, 31] pairs of captions in the Clotho dataset, and create more complex and diverse in-domain training data. Specifically, we mix up captions with different corresponding audio clips, rather than two ground-truth captions for the same audio. The corresponding audio waveforms are also mixed up to ensure consistency between audio and mixed-up captions.
We collect such mix-up augmentations using the public ChatGPT API. In the prompt, we ask ChatGPT to "_Generate a mix of the following two audio captions, and keep the generation under 25 words:_", and then provide it with two randomly sampled captions from Clotho [2]. We explicitly limit the number of words to force ChatGPT to be more concise. We use the FENSE disfluency detector [32] to filter out poor examples.6 Mix-up of audio waveforms is more straightforward: we follow the algorithm used in WavLM [33], scaling the two waveforms to ensure their relative root-mean-square energy is within \(\pm\)5 dB before adding them together.
Footnote 6: Less than 1% of ChatGPT mix-ups are detected as disfluent.
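The audio side of the mix-up can be sketched as follows; this is our illustration of the WavLM-style procedure, and the uniform dB sampling and trimming to the shorter clip are assumptions on our part.

```python
# Sketch of the waveform mix-up: rescale w2 so the pair's relative RMS
# energy lies within +/-5 dB, then sum (dB sampling/trimming are assumed).
import numpy as np

def mixup_waveforms(w1, w2, rng=None):
    rng = rng or np.random.default_rng()
    rms1 = np.sqrt(np.mean(w1 ** 2))
    rms2 = np.sqrt(np.mean(w2 ** 2))
    target_db = rng.uniform(-5.0, 5.0)           # relative RMS energy in dB
    gain = 10 ** (target_db / 20) * rms1 / rms2  # rescale w2 relative to w1
    n = min(len(w1), len(w2))
    return w1[:n] + gain * w2[:n]
```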
Table 1 displays a few examples of ChatGPT-generated mix-ups. We try including either 50K or 100K ChatGPT mix-ups, and using 50K yields a better performance. The API cost for generating 50K mix-ups is roughly $8.50.
### Sampling and Reranking
In past AAC research works, the most commonly used decoding algorithm has been beam search [4, 5, 6]. However, we find that, after introducing all the techniques in Section 2.1\(\sim\)2.3, around 1/3 of generations using _nucleus sampling_[26], which is known to produce more diverse and informative generations than beam search, score higher in terms of SPIDEr-FL than those using beam search. This reveals the potential advantage of a sampling-then-reranking approach.
To 'pick the right sample' with nucleus sampling, we propose a hybrid reranking algorithm that utilizes again the knowledge of both our learned audio encoder stack and our text decoder. The two reranking metrics we consider are:
* **Caption log-likelihood:** We feed the input waveform \(\mathbf{x}\) and the generated caption \(\mathbf{\hat{y}}\) into our captioning model to directly compute \(\log p(\mathbf{\hat{y}}\,|\,\mathbf{x})=\sum_{n=1}^{|\mathbf{\hat{y}}|}\log p(\hat{y}_{n} \,|\,\mathbf{\hat{y}}_{1:n-1};\mathbf{x})\) (cf. Eq. (1)). As the log-likelihood is computed on decoder outputs, we call this **decoder reranking**.
* **Audio-caption representation similarity:** We feed the generated caption \(\mathbf{\hat{y}}\) into the Instructor model to get its text embedding \(\mathbf{\hat{c}}\), and fetch the audio embedding \(\mathbf{a}\) of the input waveform \(\mathbf{x}\) from our trained audio encoder stack. Then, we compute the cosine similarity between the text and audio embeddings, i.e., \(\left(\mathbf{a}^{\top}\mathbf{\hat{c}}\right)/\left(||\mathbf{a}||\,||\mathbf{\hat{c}}||\right)\) (cf. Eq. (2)). As the representation from the audio encoder is used here, we refer to this as **encoder reranking**.
Candidate captions are ranked by the weighted sum of the two metrics above (with weights tuned on some held-out validation data), and we return the highest-scoring one as the final predicted caption.
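Putting the two metrics together, a sketch of the reranking step might look like this; the min-max normalization of each metric before weighting is our assumption (the text only specifies a weighted sum), and the weights follow Section 3.1.

```python
# Sketch of hybrid reranking over nucleus-sampled candidate captions.
import numpy as np

def hybrid_rerank(candidates, loglik, audio_emb, caption_embs,
                  w_dec=0.3, w_enc=0.7):
    """candidates: list of caption strings; loglik: (n,) decoder log-likelihoods;
    audio_emb: (d,) encoder embedding; caption_embs: (n, d) Instructor embeddings."""
    a = audio_emb / np.linalg.norm(audio_emb)
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    cos = c @ a                                  # encoder reranking metric
    ll = np.asarray(loglik, dtype=float)         # decoder reranking metric
    # Min-max normalize each metric so the weights are comparable (our choice).
    ll = (ll - ll.min()) / (ll.ptp() + 1e-9)
    cos = (cos - cos.min()) / (cos.ptp() + 1e-9)
    return candidates[int(np.argmax(w_dec * ll + w_enc * cos))]
```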
## 3 Experiments and Results
### Training and Inference
We first pretrain the model on the combined dataset of AudioCaps7[3] and 50K ChatGPT mix-ups of samples from the better-curated but smaller Clotho [2] dataset for 10 epochs (about 13K gradient steps), and then finetune it on Clotho (development split, \(\sim\)4K samples) for 40 epochs (or 1.2K steps). Teacher-forcing is applied on the BART decoder inputs. We adopt the AdamW optimizer with a 2 \(\times\) 10\({}^{-4}\) learning rate for the 'AudioCaps \(+\) ChatGPT mix-up' pre-training stage, and 2 \(\times\) 10\({}^{-5}\) for the Clotho finetuning stage.
Footnote 7: We filter out captions with \(<\)6 words, leading to 35K remaining samples.
As the Conformer attention (see Section 2.1) is the primary memory bottleneck due to the long sequence length of audio features, there is a tradeoff between the batch size that can be used and the downsampling rate for the Conv1D layer between our BEATs and Conformer modules--using less downsampling gives the model finer-grained audio features, but a smaller batch size would lead to less reliable gradients and hamper contrastive learning [35]. Through experiments, we settle on the 3x downsampling rate, which allows a batch size of 32 and achieves the best performance.
We train the model on two NVIDIA A100 (40GB) GPUs, and the two training stages take around 6 and 3 hours respectively. Next-token prediction accuracy on the Clotho validation split is used as the checkpoint selection criterion.
At inference time, we experiment with generating {20, 50, 100} candidate captions per test case with nucleus sampling,8 and find that generating 50 strikes the best balance between performance gain and compute efficiency. Additionally, we leverage the FENSE evaluator [32] to filter out generations with fluency issues. We tune the weights of the reranking metrics (see Section 2.4) on the Clotho validation split and eventually pick {0.3, 0.7} respectively for decoder and encoder reranking metrics.9
| Clotho caption #1 | Clotho caption #2 | ChatGPT mix-up |
| --- | --- | --- |
| water flowing over some rocks throughout a creek | in the distance fireworks pop and crackle constantly as they are set off | a serene creek bubbles over rocks as distant fireworks pop and crackle in celebration |
| a muffled object is dragged along a surface in a room that echoes | several dogs bark in a room that echoes while a muffled object is dragged as noise in the background | dogs bark in a room that echoes while a muffled object is dragged as birds chirp faintly in the background |
| a gate squeals as it sways while birds chirp in the background | a machine is whirring loudly at first and then slowly shuts off | as the gate sways and creaks a nearby machine loudly whirs before slowly powering down amidst chirping birds |

Table 1: Randomly chosen samples of ChatGPT mix-up augmentations. In general, ChatGPT is able to faithfully and grammatically reflect all content in the two input captions, and sometimes exhibits creativity in sentence structuring and vocabulary choice.
### Evaluation
The metrics used to evaluate the quality of generated captions are: METEOR, CIDEr, SPICE, SPIDEr, and SPIDEr-FL. METEOR and CIDEr are both based on \(n\)-gram overlap, with the former penalizing fragmentation between the ground-truth and generated captions, and the latter promoting generating informative words by weighting \(n\)-grams by their TF-IDF scores. SPICE focuses on the overlap computed on semantic graphs constructed by objects, object attributes, and relations. SPIDEr is the simple mean of CIDEr and SPICE, and it had been the official evaluation metric in DCASE AAC challenges until 2022. In 2023, the official metric was changed to SPIDEr-FL, which uses FENSE [32], i.e., a BERT-based binary classifier, to penalize the SPIDEr score of disfluent generations by 90%.
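In code form, the SPIDEr-FL computation described above is simply the following (a small sketch; the disfluency decision comes from FENSE's pretrained classifier):

```python
# SPIDEr is the mean of CIDEr and SPICE; a caption flagged as disfluent by
# FENSE keeps only 10% of its SPIDEr score.
def spider_fl(cider: float, spice: float, is_disfluent: bool) -> float:
    spider = 0.5 * (cider + spice)
    return 0.1 * spider if is_disfluent else spider
```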
Our evaluation is done on the public 'evaluation' split10 of the Clotho [2] dataset, which consists of 1,045 samples. The evaluation results of our full model can be found in the first row of Table 2.
Footnote 10: DCASE challenge uses the blind ‘test’ split to rank the submissions.
### Comparison with Past and Concurrent Works
We compare our model (i.e., winner of the DCASE 2023 AAC challenge) to other top performers in the 2022 and 2023 challenges [5, 6, 7, 8]. We note that while all the best-scoring systems for each participant were ensemble models, we present the metrics for single models for fairness and practicality reasons.11 The comparison in Table 3 shows that our model is state-of-the-art in terms of the new official metric, SPIDEr-FL, and performs competitively on other metrics. Moreover, while optimizing CIDEr with reinforcement learning has been popular among challenge submissions, the resulting disfluency issues [36] get severely punished on SPIDEr-FL (see 6\({}^{\text{th}}\) row in Table 3).
Footnote 11: Ensembles can contain a wildly different # of models (e.g., 3\(\sim\)20), and the performance gain can seldom justify the extra compute required.
### Ablation Study
To show that every component in our AAC model is indispensable to achieve the best performance, we conduct a comprehensive ablation study that tries to remove components _one at a time_. Table 2 presents the results that corroborate the necessity of all of our model components: each one of them gives at least a 2-point improvement on SPIDEr-FL,12 with the BEATs audio encoder (replacing the popular PANN) being the most crucial one, causing a 6-point difference.
Footnote 12: We cannot perform hybrid reranking with the ‘w/o Instructor’ setting as the audio encoder is not trained to match the caption text embedding. Thus, we simply use beam search as it outperforms decoder-only reranking.
Some intriguing additional findings are: (i) hybrid reranking is required to outperform beam search (see rows 1\(\sim\)4 in Table 2), suggesting that decoder and encoder reranking methods are strongly complementary and hence should be used together when possible; (ii) the 'sampling \(+\) reranking' decoding approach improves performance the most when both the BEATs encoder and ChatGPT mix-ups are used (see rows '1 vs. 4', '5 vs. 6', and '8 vs. 9' in Table 2).
## 4 Conclusion and Future Work
In this work, we improved audio captioning models from multiple aspects with an extensive use of pretrained models. We employed the BEATs Transformer to extract more fine-grained audio features. We then utilized the Instructor text embeddings for multitask learning to provide rich language-modality guidance. ChatGPT was also leveraged to generate faithful and fluent caption mix-ups which, when paired with the corresponding audio mix-ups, increased the size, diversity, and complexity of our training data. Finally, nucleus sampling and hybrid reranking were used to exploit our model's capabilities to the fullest extent. We accomplished a state-of-the-art 32.6 SPIDEr-FL score and demonstrated via a thorough ablation study that all components are crucial to our model's success.
Future endeavors may explore audio feature extractors that are pretrained with larger amounts of data [37] or multimodal supervision [38]. More advanced reinforcement learning methods [39] can also be applied to optimize captioning metrics that correlate well with human judgment [40] without introducing disfluency issues.
| | Audio encoder | Instructor emb. | ChatGPT mix-up | Decoding | Reranking | METEOR | CIDEr | SPICE | SPIDEr | SPIDEr-FL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Full model** | BEATs | ✓ | ✓ | Sampling | Hybrid | **19.3** | **50.6** | **14.6** | **32.6** | **32.6** |
| **w/o hybrid rerank and/or sampling** | BEATs | ✓ | ✓ | Sampling | Decoder only | 18.6 | 45.1 | 13.7 | 29.4 | 29.4 (−3.2) |
| | BEATs | ✓ | ✓ | Sampling | Encoder only | 18.0 | 43.7 | 13.4 | 28.5 | 28.5 (−4.1) |
| | BEATs | ✓ | ✓ | Beam search | n.a. | 18.7 | 47.4 | 13.4 | 30.4 | 30.3 (−2.3) |
| **w/o ChatGPT mix-up** | BEATs | ✓ | ✗ | Sampling | Hybrid | 19.1 | 47.6 | 13.8 | 30.7 | 30.7 (−1.9) |
| | BEATs | ✓ | ✗ | Beam search | n.a. | 18.6 | 47.7 | 13.1 | 30.4 | 30.2 (−2.4) |
| **w/o Instructor** | BEATs | ✗ | ✓ | Beam search | n.a. | 18.6 | 45.8 | 13.4 | 29.6 | 29.4 (−3.2) |
| **w/o BEATs** | PANN | ✓ | ✓ | Sampling | Hybrid | 17.5 | 39.7 | 12.3 | 26.0 | 26.0 (−6.6) |
| | PANN | ✓ | ✓ | Beam search | n.a. | 17.4 | 41.6 | 12.2 | 26.9 | 26.7 (−5.9) |

Table 2: Ablation study results on the Clotho [2] evaluation split. All metrics are in %; parenthesized values give the change in SPIDEr-FL relative to the full model. The results demonstrate that every component introduced in our AAC model, i.e., BEATs [13] audio encoder (Section 2.1), Instructor [17] sentence embedding supervision (Section 2.2), ChatGPT [18] mix-up augmentation (Section 2.3), and nucleus sampling [26] + reranking (Section 2.4), is beneficial to the performance.
| | RL | METEOR | CIDEr | SPICE | SPIDEr | SPIDEr-FL |
| --- | --- | --- | --- | --- | --- | --- |
| **Ours** | ✗ | **19.3** | **50.6** | **14.6** | **32.6** | **32.6** |
| **Labbé et al., '23** [8] | ✗ | 19.2 | 48.5 | 13.9 | 31.2 | 31.0 |
| **Cho et al., '23** [7] | ✗ | 18.8 | 48.3 | 13.7 | 31.0 | 30.7 |
| **Ye et al., '22** [6] | ✗ | 17.8 | 44.5 | 12.7 | 28.6 | n.a. (≤28.6) |
| **Xu et al., '22** [5] | ✗ | 17.9 | 42.1 | 12.7 | 27.4 | n.a. (≤27.4) |
| **Cho et al., '23** [7] | ✓ | **19.5** | **52.6** | **14.3** | **33.5** | **22.5** |
| **Ye et al., '22** [6] | ✓ | 18.5 | 50.3 | 13.2 | 31.7 | n.a. (≤31.7) |
| **Xu et al., '22** [5] | ✓ | 18.6 | 50.9 | 12.0 | 31.5 | n.a. (≤31.5) |

Table 3: Performance comparison (on the Clotho [2] evaluation split, all metrics in %) with top-ranking methods in recent DCASE challenges. All models presented here are single models, not ensembles. ‘RL’ indicates the use of reinforcement learning [34] to directly optimize the CIDEr score. For fair comparison, best results in each group (i.e., _w/_ or _w/o_ RL) are **bold**-faced. Notice that RL can lead to a heavy punishment on SPIDEr-FL, the new DCASE official metric, due to fluency flaws.
2309.08272 | Structural Self-Supervised Objectives for Transformers | This thesis focuses on improving the pre-training of natural language models
using unsupervised raw data to make them more efficient and aligned with
downstream applications.
In the first part, we introduce three alternative pre-training objectives to
BERT's Masked Language Modeling (MLM), namely Random Token Substitution (RTS),
Cluster-based Random Token Substitution (C-RTS), and Swapped Language Modeling
(SLM). These objectives involve token swapping instead of masking, with RTS and
C-RTS aiming to predict token originality and SLM predicting the original token
values. Results show that RTS and C-RTS require less pre-training time while
maintaining performance comparable to MLM. Surprisingly, SLM outperforms MLM on
certain tasks despite using the same computational budget.
In the second part, we propose self-supervised pre-training tasks that align
structurally with downstream applications, reducing the need for labeled data.
We use large corpora like Wikipedia and CC-News to train models to recognize if
text spans originate from the same paragraph or document in several ways. By
doing continuous pre-training, starting from existing models like RoBERTa,
ELECTRA, DeBERTa, BART, and T5, we demonstrate significant performance
improvements in tasks like Fact Verification, Answer Sentence Selection, and
Summarization. These improvements are especially pronounced when limited
annotation data is available. The proposed objectives also achieve
state-of-the-art results on various benchmark datasets, including FEVER (dev
set), ASNQ, WikiQA, and TREC-QA, as well as enhancing the quality of summaries.
Importantly, these techniques can be easily integrated with other methods
without altering the internal structure of Transformer models, making them
versatile for various NLP applications. | Luca Di Liello | 2023-09-15T09:30:45Z | http://arxiv.org/abs/2309.08272v1 | # Structural Self-Supervised Objectives for Transformers
###### Abstract
We present a novel model for the performance of the proposed model for the performance of the proposed model. We show that the proposed model is able to predict the performance of the proposed model. We also show that the proposed model is able to predict the performance of the proposed model. |
2308.16403 | Balancing between the Local and Global Structures (LGS) in Graph
Embedding | We present a method for balancing between the Local and Global Structures
(LGS) in graph embedding, via a tunable parameter. Some embedding methods aim
to capture global structures, while others attempt to preserve local
neighborhoods. Few methods attempt to do both, and it is not always possible to
capture well both local and global information in two dimensions, which is
where most graph drawings live. The choice of using a local or a global
embedding for visualization depends not only on the task but also on the
structure of the underlying data, which may not be known in advance. For a
given graph, LGS aims to find a good balance between the local and global
structure to preserve. We evaluate the performance of LGS with synthetic and
real-world datasets and our results indicate that it is competitive with the
state-of-the-art methods, using established quality metrics such as stress and
neighborhood preservation. We introduce a novel quality metric, cluster
distance preservation, to assess intermediate structure capture. All
source-code, datasets, experiments and analysis are available online. | Jacob Miller, Vahan Huroyan, Stephen Kobourov | 2023-08-31T02:12:46Z | http://arxiv.org/abs/2308.16403v2 | # Balancing between the Local and Global Structures (LGS) in Graph Embedding
###### Abstract
We present a method for balancing between the Local and Global Structures (LGS) in graph embedding, via a tunable parameter. Some embedding methods aim to capture global structures, while others attempt to preserve local neighborhoods. Few methods attempt to do both, and it is not always possible to capture well both local and global information in two dimensions, which is where most graph drawing live. The choice of using a local or a global embedding for visualization depends not only on the task but also on the structure of the underlying data, which may not be known in advance. For a given graph, LGS aims to find a good balance between the local and global structure to preserve. We evaluate the performance of LGS with synthetic and real-world datasets and our results indicate that it is competitive with the state-of-the-art methods, using established quality metrics such as stress and neighborhood preservation. We introduce a novel quality metric, cluster distance preservation, to assess intermediate structure capture. All source-code, datasets, experiments and analysis are available online.
Keywords: Graph embedding, Graph visualization, Local and global structures, Dimensionality reduction, Multi-Dimensional Scaling
## 1 Introduction
Graphs and networks are a powerful tool to encode relationships between objects. Graph embeddings, which map the vertices of a graph to a set of low dimensional vectors (real valued coordinates), are often used in the context of data visualization to produce node-link diagrams. While many layout methods exist [37], dimension reduction (DR) techniques have had success in providing desirable layouts, by capturing graph structure in reasonable computation times. DR methods are used to project high-dimensional data into low-dimensional space and some of these methods only rely on the relationships between the datapoints, rather than _datapoint coordinates_ in higher dimension. These techniques are applicable for both graph embeddings and visualization. Further, local DR algorithms attempt to preserve the local neighborhoods, while global DR algorithms attempt to retain all pairwise distances.
Two popular techniques that are adapted in graph visualization are (metric) Multi-Dimensional Scaling (MDS) [7, 23] and t-distributed stochastic neighbor embedding (t-SNE) [24]. The goals of these two algorithms are somewhat orthogonal: MDS focuses on preserving all pairwise distances, while t-SNE aims
to preserve the likelihood of points being close in the embedding if they were close in the original space. MDS is said to preserve _global_ structure, while t-SNE is said to preserve _local_ neighborhoods [10]. These ideas are directly applicable to graph visualization, where we can define the distances as the graph theoretic distances, e.g., via all-pairs shortest paths (APSP) computation. In the graph layout literature, MDS is often referred to as stress minimization [14, 41], and t-SNE has been adapted to graph layout in an algorithm known as tsNET [22] and later DRGraph [42]. Choosing the "best" graph embedding algorithm depends on the graph structure and the task. MDS is effective for structured/mesh-like graphs, while t-SNE works better for clustered/dense graphs. This phenomenon applies to local and global force-directed layouts as well [22].
Automating the selection of the "best" embedding algorithm is challenging due to its dependency on graph structure. We introduce the Local-to-Global Structures (LGS) algorithm, which provides a parameter-tuneable framework that produces embeddings spanning the spectrum from local optimization to global optimization.
Smaller values of the LGS parameter prioritize local structure, while larger values emphasize global structure. LGS enables exploration of the trade-off, revealing meaningful middle ground solutions. We introduce a new metric called _cluster distance_ to measure how well this intermediate structure is preserved. Everything described in this paper is available on Github: [https://github.com/Mickey253/L2G](https://github.com/Mickey253/L2G). We provide a video and additional layouts and analysis in supplemental material.
Figure 1: Embeddings of the connected_watts_1000 graph; see Sec. 4. The top row shows LGS embeddings – from local to global – with varying neighborhood sizes \((k)\). The LGS(72) layout captures the correct underlying model. The bottom row shows tsNET [22], UMAP [25], and MDS [41] embedding of the same graph.
## 2 Background
**Dimensionality Reduction (DR)** refers to a large family of algorithms that map a set of high-dimensional datapoints in lower-dimensionsal space. Different DR algorithms aim to preserve various properties of the dataset, such as total variance, global distances, local distances, etc. In visualization contexts, the dataset is typically projected onto 2D or 3D Euclidean space. DR algorithms generally accept input of two types: sample or distance. Sample-based algorithms, such as Principal Component Analysis (PCA) [11, 19] project the high dimensional data down to the embedding space. For distance-based inputs, the algorithms directly work with distance metrics. In the case of graph embeddings, the graph-theoretic distance is used, often all-pairs shortest path (APSP).
Popular techniques in the local category include t-SNE [24], UMAP [25], LLE [35], IsoMap [38], etc. For global structure, methods such as PCA [11] and MDS [7, 23] are used. MDS has variants, but here we mean metric MDS which minimizes stress [36]. Few techniques attempt to capture both global and local structure. Chen and Buja [5] adapt MDS to capture local structure by selectively preserving distances between a subset of pairs using kNN. The underlying idea is similar to ours, but it does not provide a framework to cover the spectrum from local to global as our method does. While t-SNE's perplexity parameter aims to imitate the size of neighborhood to be preserved, in general increasing its value does not lead to a global structure preservation [39]. Anchor-t-SNE improves the global structure preservation by anchoring a set of points to use as a skeleton for the rest of the embedding [12], however, it does not provide a framework to cover the spectrum from local to global. UMAP [25, 15] also aims to preserve the local structures of a dataset. While UMAP claims to preserve the global structures better than t-SNE, we show that this is not universally true for graph data in Sec. 5.
**Graph Embedding** is the problem of assigning vectors to graph vertices, capturing the graph structure. More formally, given a graph \(G=(V,E)\), find a \(d\)-dimensional vector representation of \(V\) that optimally preserves properties [4] (e.g., pairwise distances in MDS [7, 23]). We restrict ourselves to 2D node-link visualization with edges represented by straight-line segments, so the problem is reduced to finding a 2D embedding for the vertices. Aesthetic criteria are often used to evaluate the quality of a graph embedding: the number of edge crossings, average edge length, overall symmetry, etc. [34]. Aesthetic criteria enhance readability and task facilitation, but information _faithfulness_ is equally important. It ensures that the embedding accurately represents all underlying data, regardless of the task [28, 29], and graph embeddings provide a nice benefit by directly optimizing graph structure preservation. _Graph structure_ is a nebulous term, referring to inherent properties of the underlying graph such as local/global distances. _Global distance_ preservation methods capture the graph's topological structure by closely aligning embedded distances with graph-theoretic distances. This approach is ideal for connectivity-based tasks and offers insights into the global scale and shape of the data. _Local structure_ preservation methods preserve the immediate neighborhood of each vertex, effectively capturing clusters
or densely connected subgraphs. While nearby vertices in the embedding can be considered similar, distant vertices may have irrelevant distances. This can be observed in the presence of long edges in the local embedding column in Fig. 2.
Graph embedding by dimensionality reduction:In a good embedding, the drawn distance should closely match the graph-theoretic distance between vertices [20]. This observation led to the use of stress function, which MDS aims to optimize, to obtain a graph embedding [14]. Stress can be minimized by majorization [14], stochastic gradient descent (SGD) [41], etc. The MDS approach suffers from an APSP computation, which usually relies on Floyd-Warshall's \(O(|V|^{3})\), or on Johnson's \(O(|V|^{2}\log|V|+|E||V|)\) algorithms. The maximum entropy model (MaxEnt) [13] adds a negative entropy between vertices in the graph. The motivation for MaxEnt is to improve the asymptotic complexity. The MaxEnt model places neighbor nodes closer while maximizing the distance between all vertices. This is conceptually similar to the LMDS of Chen and Buja [5]. Our approach differs from the MaxEnt model in motivation: Our LGS captures local structure, global structure, or balances between the two, whereas MaxEnt is primarily concerned with speed. We cannot avoid an APSP computation, and make use of SGD to optimize our objective function in lieu of majorization.
Optimizing stress creates effective layouts, but may neglect local structures; see Fig. 2. tsNET [22] captures local structure by also adding a repulsive force between vertices to achieve cluster separation. tsNET has been sped up by making use of negative sampling and sparse approximation to avoid the APSP computation [42]. Nocaj et al. [31] achieve effects similar to tsNET by weighting edges based on "edge embeddedness" and perform MDS on the weighted graph.
## 3 The Local-to-Global Structures (LGS) Algorithm
Local methods (e.g., t-SNE) preserve local neighborhoods, while global methods (e.g., MDS) capture all pair-wise distances. We propose the Local-to-Global Structures (LGS) algorithm that achieves the following 3 goals:
**G1**: A single parameter controlling local-global embedding balance
**G2**: When this parameter is small, the embedding preserves local neighborhoods
**G3**: When this parameter is large, the embedding preserves the global structure
By "local neighborhood" of a vertex we refer to the immediate neighbors of the vertex being considered. If the nearest neighbors of each vertex in an embedding match well with the nearest neighbors in the actual graph, then the embedding accurately preserves the local structures. By "global structure" we refer to the preservation of all pairwise graph distances (including long ones) in the embedding. Finally, "intermediate structure" refers to capturing both local neighbors and global structure. Fig. 2 shows graphs exemplifying local, intermediate, and global structures and Sec. 3.2 defines formal embedding measures: neighborhood error, cluster distance, and stress. In Sec. 3.1 we explain the selection process
for the balance parameter \(k\) and the objective function to ensure that the solution aligns with the stated goals. For **G1**, we modify MDS to preserve distances in a neighborhood defined by a parameter \(k\). Thus, preserving distances for large neighborhoods satisfies **G3**. This leaves a question for **G2**: Does applying distance preservation to a subset of pairs result in locally faithful embeddings?
### Adapting Stress Minimization for Local Preservation
We define a parameter, \(k\), that represents the size of a neighborhood surrounding each vertex. A straightforward approach would involve simply selecting the \(k\)-nearest vertices for every given vertex (as in [5]). However, the graph-theoretic distance in an undirected graph is a discrete measure, which can create complications. For example, consider the local structure graph (top row) in Fig. 2. Although the within-cluster density is high, there are many edges between different clusters. Unfortunately, there is no simple way to test if an edge is within cluster or out-of-cluster. In order to produce tsNET-like embeddings, which should pay more attention to local structures, we must avoid preserving out-of-cluster edges.
Figure 2: Local embedding methods perform well on graphs with distinct local structure (block_2000), but they can distort the global shape of the graph (dwt_1005). Global methods capture the overall shape (e.g., dwt_1005), but may miss important local structures (block_2000). LGS(100) performs well for graphs with both local and global structure, such as sierpinski_3d, allowing us to see its fractal nature.

Instead of considering distances directly, we find the top \(k\) most connected vertices for each vertex, based on the hypothesis that more possible walks between vertices indicate greater similarity; see Fig. 3. Despite \(v_{a}\) and \(v_{b}\) not sharing an edge, they have the same set of neighbors. When \(v_{a}\) and \(v_{b}\) are both neighbors to a set of vertices, we can confidently state their similarity, confirmed by their shared proximity to \(v_{c}\), \(v_{d}\), etc. [9]. The \(c\)-th power of an adjacency matrix \((A_{G}^{c})_{i,j}\) encodes the number of \(c\)-length walks from vertex \(i\) to vertex \(j\). To find the top \(k\) "most connected" vertices for each vertex, we follow this procedure: given the adjacency matrix of an undirected graph, \(A_{G}\), raise it to the \(c\)-th power and take the sum of all powers \(A^{*}=\sum_{1\leq i\leq c}A_{G}^{i}\), to obtain a matrix whose \((i,j)\)-th element is the number of walks from \(i\) to \(j\) of length at most \(c\). Since each row in \(A^{*}\) corresponds to a vertex, we find the \(k\) largest values in row \(i\) (by sorting). We define these top \(k\) vertices to be the "most connected" neighborhood \(N_{k}(v_{a})\) of vertex \(v_{a}\); see Fig. 3. We further weight the powers of the matrix with a decaying factor \(s\), \(0<s<1\), such that \(A^{*}=\sum_{1\leq i\leq c}s^{i}A_{G}^{i}\). We investigate a range of values for \(s\), and set \(s=0.1\); see the supplemental material. We propose a procedure to reduce the number of matrix multiplications, which we use in our experiments; see Appendix B.
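A compact NumPy sketch of this neighborhood selection (ours; the released implementation may differ, e.g., by using sparse matrices):

```python
# Weighted walk counts A* = sum_i s^i * A^i, then top-k entries per row.
import numpy as np

def most_connected_neighborhoods(A, k, c=3, s=0.1):
    """A: dense (n, n) adjacency matrix; returns an (n, k) index array."""
    n = A.shape[0]
    A_star = np.zeros((n, n), dtype=float)
    power = np.eye(n)
    for i in range(1, c + 1):
        power = power @ A              # A^i
        A_star += (s ** i) * power     # decaying weight s^i
    np.fill_diagonal(A_star, -np.inf)  # a vertex is not its own neighbor
    # indices of the k largest entries in each row
    return np.argsort(-A_star, axis=1)[:, :k]
```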
**Objective Function** We remark that only preserving distances of a subset of pairs will result in poor embeddings: e.g., two vertices that cannot "see" each other can be placed arbitrarily close with no penalty. A second term is needed in the objective function to prevent this, and we add an entropy repulsion term as in [5, 13] to force pairs of vertices away from each other. For a given pairwise distance matrix \(\left[d_{ij}\right]_{i,j=1}^{n}\) we define the following generalized stress function as an objective function:
\[\sigma(X)=\sum_{(i,j)\in N_{k}}(\|X_{i}-X_{j}\|-d_{ij})^{2}-\alpha\sum_{(i,j) \notin N_{k}}\log\|X_{i}-X_{j}\|, \tag{1}\]
where \(X_{i}\) is the embedded point in \(\mathbb{R}^{d}\), \(\alpha\) is a fixed constant parameter that controls the weight of the logarithmic term, and \(N_{k}\) corresponds to the neighborhood that we aim to preserve in the embedded space. This objective function ensures that distances are preserved between the most-connected neighborhoods, while maximizing entropy. We use the negative logarithm of the distance between points, so that the repulsive force is relatively strong at small distances,
Figure 3: An example of how we may skip over immediate neighbors when selecting neighborhoods to preserve. In this case, \(c=2\). There is only one unique walk of length \(\leq 2\) from \(v_{a}\) to \(v_{c},v_{d},v_{e},v_{f}\), but there are 4 such walks from \(v_{a}\) to \(v_{b}\). In this case, \(v_{b}\) would be the first vertex added to \(v_{a}\)’s most connected neighborhood.
but quickly decays (so that distant points are not forced to be too distant from each other). While similar to LMDS [5] and MaxEnt [13], the proposed objective function in Eq. 1 differs in (1) how the set \(N_{k}\) is selected (LMDS uses a kNN search and MaxEnt preserves distances between two vertices if and only if they share an edge) and (2) LMDS and MaxEnt cannot be easily parameterized to balance local and global structure preservation. We minimize the objective function by SGD which works well for stress minimization [2, 41]. The parameter space of the algorithm is discussed in the supplemental material, Appendix C.
Figure 4: Example embeddings. The first and last columns show the two extremes tsNET (local) and MDS (global); the middle column shows UMAP. The remaining columns show a gradual increase of LGS’s \(k\) parameter, moving from local to global distance preservation (left to right). Note LGS outputs are vertically higher in each row.
### Evaluation Metrics
We discuss the evaluation metrics for embedding algorithms: local neighborhood error (NE) score, intermediate structure (CD), and global distances (Stress).
#### 3.2.1 NE Metric:
Neighborhood hits (NH) measures how well an embedding preserves local structures [5, 10]. NH is the average Jaccard similarity of the neighbors in the high-dimensional and low-dimensional embedding. Let \(Y\) be an \(n\times d\) dimensional dataset, \(X\) be its \(n\times 2\)-dimensional embedding, and a radius \(r\) defines the size of the neighborhood one intends to measure. NH is defined as:
\[NH(Y,X,r)=\frac{1}{n}\sum_{i=1}^{n}\frac{|N_{Y}(p_{i},r)\cap N_{X}(p_{i},r)|}{| N_{Y}(p_{i},r)\cup N_{X}(p_{i},r)|} \tag{2}\]
Figure 5: The grid_cluster graph is generated so that each cluster has many out-of-cluster edges to its neighbors in a \(3\times 3\) lattice, providing a recognizable intermediate structure. tsNET and UMAP do not place clusters on a grid, MDS mixes the clusters; LGS(100) captures the \(3\times 3\) grid and shows distinct clusters.

where \(N_{Y}(p_{i},r)\) denotes the \(r\) nearest points to point \(p_{i}\) in \(Y\) and \(N_{X}(p_{i},r)\) the \(r\) nearest points to point \(p_{i}\) in \(X\). For graph embeddings, this notion is called neighborhood preservation (NP) [22, 42, 13], with the main difference being that the radius \(r\) now refers to graph-theoretic distance: all vertices with shortest path distance \(\leq r\) from vertex \(v_{i}\). Specifically, NP measures the average Jaccard similarity of a vertex's graph-theoretic neighborhood of radius \(r\) and an equally sized neighborhood of that vertex's closest embedded neighbors. Since NH and NP measure accuracy, it is desirable to maximize these values. To facilitate comparison with the other two metrics (where lower scores mean better embeddings), we use Jaccard dissimilarity instead and refer to it as Neighborhood Error (NE).
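A direct (unoptimized) sketch of NE for a graph embedding, given the APSP matrix, is below; it is our illustration, and the tie-breaking for equally distant embedded neighbors is left to `argsort`.

```python
# Neighborhood Error: one minus the average Jaccard similarity between
# graph-theoretic and equally sized embedded neighborhoods.
import numpy as np

def neighborhood_error(D_graph, X, r=2):
    n = len(X)
    D_emb = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    err = 0.0
    for i in range(n):
        graph_nbrs = set(np.flatnonzero((D_graph[i] <= r) & (D_graph[i] > 0)))
        # equally sized neighborhood of i's closest embedded points (skip self)
        emb_nbrs = set(np.argsort(D_emb[i])[1:len(graph_nbrs) + 1])
        union = graph_nbrs | emb_nbrs
        err += 1 - len(graph_nbrs & emb_nbrs) / max(len(union), 1)
    return err / n
```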
**Cluster Distance Metric:** We introduce a new metric to measure how well intermediate structures are captured in an embedding. Since the distances between clusters in t-SNE cannot be interpreted as actual distances [39], while clusters in MDS embeddings are often poorly separated, we measure how faithfully the relative distances between cluster centers are represented in the embedding. When cluster labels are given as part of the input (e.g., labels, classes), we can use them to define distances between the clusters. When cluster information is not given, we use \(k\)-means clustering in the high-dimensional data case, and modularity clustering in the graph case. The distances between clusters in the high-dimensional case are given by the Euclidean distance between the cluster centers. For graphs, we measure the distance between clusters by first taking the normalized count of edges between them, then subtracting this normalized count from one to convert similarity into dissimilarity. This produces a cluster-distance matrix, \(\delta\). Let \(C_{1},\ldots,C_{n}\) be the set of vertices belonging to cluster \(1,\ldots,n\), then
\[\delta_{i,j}=1-\frac{1}{|E|}\sum_{u\in C_{i},v\in C_{j}}\mathds{1}(u,v\in E)\]
where \(\mathds{1}\) is the indicator function (1 if \((u,v)\) is an edge and 0 otherwise). Once \(\delta\) is computed, we compute the geometric center of each embedded cluster and compute the cluster-level stress between the graph-level-cluster and realized-cluster distances. This measure is small when similar clusters are placed closer and dissimilar clusters are placed far apart. The cluster distance (CD) is:
\[CD(\delta,\chi)=\sum_{i,j}\left(\frac{\delta_{i,j}-||\chi_{i}-\chi_{j}||}{ \delta_{i,j}}\right)^{2} \tag{3}\]
where \(\delta_{i,j}\) is the dissimilarity measure between cluster \(i\) and cluster \(j\) and \(\chi_{i}\) is the geometric center of cluster \(i\) in the embedding. Although there are several existing metrics to measure cluster accuracy, such as silhouette distance and between/within-cluster sum of squares, they are not well suited to measure the quality of intermediate embeddings. Ideally, we would need a measure that checks how well the clusters are preserved and also verifies that the relative placement of the clusters is meaningful. The CD metric verifies meaningful cluster placements by measuring all pairwise distances between cluster centers. We remark that the CD metric works best when the clusters have convex shapes (or shapes similar to spheres). For arbitrary non-convex shapes, such as half-moons or donuts, the CD metric might not provide meaningful insights.
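Eq. (3) translates almost directly into code; in the sketch below (ours), we sum over unordered cluster pairs.

```python
# Cluster distance (CD): stress between graph-level cluster dissimilarities
# and the distances between embedded cluster centers.
import numpy as np

def cluster_distance(delta, X, labels):
    """delta: (m, m) cluster dissimilarities; labels: cluster id per vertex."""
    clusters = np.unique(labels)
    centers = np.array([X[labels == c].mean(axis=0) for c in clusters])
    cd = 0.0
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):   # unordered pairs
            d = np.linalg.norm(centers[i] - centers[j])
            cd += ((delta[i, j] - d) / delta[i, j]) ** 2
    return cd
```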
#### 3.3.2 Stress Metric:
Stress has been used in many graph embedding evaluations.
\[\text{stress}(d,X)=\sum_{i,j}\left(\frac{d_{i,j}-||X_{i}-X_{j}||}{d_{i,j}}\right)^ {2} \tag{4}\]
where \(d\) is the given distance matrix and \(X\) is the embedding. Embeddings are scaled to ensure fair comparisons in computing stress [13, 22, 42].
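A sketch of Eq. (4) with the embedding rescaled by the stress-optimal scalar before comparison; the closed-form scale factor below is a standard choice, though whether each cited paper uses exactly this convention is not specified here.

```python
# Normalized stress, Eq. (4), after optimally rescaling the embedding.
import numpy as np

def scaled_stress(d: np.ndarray, X: np.ndarray) -> float:
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.triu_indices_from(D, k=1)
    ratio = D[i, j] / d[i, j]
    alpha = ratio.sum() / (ratio ** 2).sum()  # argmin_a sum ((d - a*D)/d)^2
    return float((((d[i, j] - alpha * D[i, j]) / d[i, j]) ** 2).sum())
```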
## 4 LGS Embedding of Graphs
We start with a visual analysis and discussion of layouts produced by LGS. Following the convention, several embeddings of the same graph are displayed side-by-side with the value of \(k\) increasing from left to right, going from local to global. Underneath the LGS embeddings, we place tsNET, UMAP, and MDS embeddings of the same graph. For all graph embeddings provided in this paper, we use the jet color scheme to encode edge length. An edge length of 1 (ideal for unweighted graphs) is drawn in green, while red indicates that an edge has been compressed (length \(<1\)) and blue indicates the edge is stretched (length \(>1\)). This makes clusters easy to spot as bundles of red edges, and global structure preservation apparent when most edges are green. Similar to tsNET, low values of \(k\) capture local neighborhoods well, by allowing some longer edges. As a result, clusters tend to be well separated. Note that tsNET allows even longer edges in an embedding, occasionally breaking the topology; see Fig. 1. Higher \(k\) values make LGS similar to MDS, with more uniform edge lengths. This reveals global structures (e.g., mesh, grid, lattice) but may overlook clusters.

Figure 6: Behavior of NE and stress: as \(k\) increases NE gets worse and stress gets better (LGS transitions from preserving local to global structure); tsNET, UMAP, and MDS values are shown as dotted lines for comparison. Note that in general, we expect to see an upward trend in NE, a downward trend for stress, and a parabola shape for CD.
**Grid_cluster** is a synthetic example with 900 vertices (9 clusters of size 100 each) and 10108 edges, created by a stochastic block model (SBM) to illustrate the notion of cluster distance preservation. Within-cluster edges are created with probability 0.8. We distinguish between two types of out-of-cluster edges. Clusters are first placed on a lattice. Out-of-cluster edges are created with probability 0.01 if they are adjacent in the lattice (no diagonals) and 0.001 otherwise. The layouts of this graph are in Fig. 5. Note the visual similarities between LGS(32) and tsNET; both separate each cluster into dense sub-regions and place them seemingly randomly in the plane. Also note the similarities between LGS(200) and MDS, where both methods tend to miss the clusters. UMAP also fails to capture the intermediate structure built into this network: while there is a single cluster placed in the middle of the other eight, the surrounding shape is not a square. LGS(100) accurately places each cluster in the appropriate position, making it the only one that clearly shows the 3x3 underlying lattice. Although less dense and separable, the clusters are placed more faithfully.
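A hedged reconstruction of this generator with NetworkX is shown below; the block layout and probabilities are taken verbatim from the description above, while the code itself is our sketch rather than the authors' script.

```python
# grid_cluster: 9 blocks of 100 vertices on a 3x3 lattice; p_in = 0.8,
# p = 0.01 for lattice-adjacent blocks (no diagonals), 0.001 otherwise.
import networkx as nx
import numpy as np

def grid_cluster(rows=3, cols=3, size=100, seed=0):
    m = rows * cols
    P = np.full((m, m), 0.001)
    for a in range(m):
        ra, ca = divmod(a, cols)
        for b in range(m):
            rb, cb = divmod(b, cols)
            if abs(ra - rb) + abs(ca - cb) == 1:  # lattice neighbors
                P[a, b] = 0.01
    np.fill_diagonal(P, 0.8)                      # within-cluster probability
    return nx.stochastic_block_model([size] * m, P, seed=seed)

G = grid_cluster()
print(G.number_of_nodes())
```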
**Connected_watts_1000** is a Watts-Strogatz random graph on 1000 vertices and 11000 edges. It first assigns the vertices evenly spaced around a cycle with the nearest (7) vertices connected by an edge. Then, with low probability, some random 'chords' of the cycle are added by rewiring some of the local edges to other random vertices. This type of graph models the small-world phenomenon seen in real-world examples, such as social networks. The embeddings of connected_watts_1000, obtained by LGS, tsNET, UMAP, and MDS are in Fig. 1. We observe that tsNET and UMAP embeddings accurately capture the existence of a one-dimensional structure, but twist and break the circle to varying degrees.
Figure 7: (a-b) CD metric on the grid_cluster and sierpinski3d graphs. Note that in these examples there are values of \(k\) which outperform competing algorithms. (c) Running time of each tested algorithm.
Meanwhile, MDS overcrowds the space, forming a classic 'hairball' where there is no discernible structure. For intermediate values of \(k\) in LGS, the circular structure in the data and the numerous chord connections become clearly visible.
**Sierpinski_3d** models the Sierpinski pyramid with 2050 vertices and 6144 edges - a finite fractal object with recursively smaller recurring patterns (the pyramid itself is built out of smaller pyramids). These fractal properties are ideal for showcasing the LGS algorithm at work, as small local structures build upon each other to create a global shape; see Fig. 8. We observe that tsNET captures the smallest structures well but places them arbitrarily in the embedding space. UMAP does better at placing the local structures in context but still creates long edges and twists not present in the data. While MDS visually captures the fractal motifs, it 'squishes' local structures. LGS can be used to balance these extremes.
We demonstrate more examples in Table 4, with additional embeddings available in the supplemental material. Note that for lower values of \(k\) the embedding obtained by LGS visually resembles the output of tsNET, while for larger values of \(k\) the obtained embedding is more similar to the outputs of MDS. We see this reflected numerically in many graphs; see Fig. 6 and Table 1. For intermediate values of \(k\), LGS often outperforms tsNET, UMAP, and MDS with respect to intermediate structure preservation, as measured by cluster distance (CD).
Figure 8: The Sierpinski3d graph is a fractal with regular local and global structure. LGS manages to capture the recursive nature of the underlying structure. tsNET and UMAP miss the global placement of pyramids, and MDS stretches them.
## 5 Evaluation
We test LGS on a selection of real-world and synthetic graphs from [6, 22, 42]; a full list can be found in the appendix.
We compare LGS against state-of-the-art techniques for local and global embeddings: tsNET from the repository linked in [22], UMAP from the umap python library written by the authors of [25], and MDS via the python bindings from [41] with default parameters. Our implementation of LGS is available online. The experiments were performed on an Intel(r) Core(tm) i7-3770 machine (CPU @ 3.40GHz \(\times\) 8 with 32 GB of RAM) running Ubuntu 20.04.3 LTS.
#### 5.0.1 NE, CD and Stress Values and Trends
To evaluate how well LGS preserves local neighborhoods, we compare the average NE scores over several runs and present our results in Table 1. We can see a general trend: although tsNET performs better with respect to NE values, LGS has consistently lower NE values than MDS. Additionally, as we increase the neighborhood-size parameter, the NE values tend to increase, bringing the layouts closer to those of MDS. Interestingly, UMAP also tends to fall somewhere between tsNET and MDS on
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{low}} & \multicolumn{6}{c|}{middle} & \multicolumn{6}{c|}{high} \\ \cline{2-11} & k=32 & k=64 & k=85 & k=100 & k=150 & k=200 & \multicolumn{1}{c|}{tsnet} & \multicolumn{1}{c|}{umap} & \multicolumn{1}{c|}{mds} \\ \hline lesmis & 0.3761 & 0.3169 & 0.3126 & 0.3174 & 0.3166 & 0.3125 & 0.3058 & 0.3952 & 0.3141 \\ \hline can\_96 & 0.4393 & 0.3980 & 0.4605 & 0.4650 & 0.4658 & 0.3852 & 0.3417 & 0.4653 \\ \hline football & 0.4390 & 0.4465 & 0.4452 & 0.4377 & 0.4371 & 0.4359 & 0.4636 & 0.4693 & 0.4380 \\ \hline rajalt1 & 0.4347 & 0.4378 & 0.4377 & 0.3866 & 0.3641 & 0.3615 & 0.3942 & **0.3331** & 0.3608 \\ \hline mesh3e1 & 0.1841 & 0.1415 & 0.0933 & 0.0911 & 0.1444 & **0.0** & 0.1210 & 0.1241 & 0.0003 \\ \hline connected watts\_300 & 0.1995 & 0.3990 & 0.5311 & 0.0386 & 0.3802 & 0.5465 & 0.1805 & **0.1798** & 0.5453 \\ \hline block model 300 & 0.6474 & 0.6676 & 0.6746 & 0.6746 & 0.6741 & 0.6538 & 0.5675 & 0.6765 \\ \hline powerlaw300 & 0.5070 & 0.5075 & 0.5126 & 0.5044 & 0.5233 & 0.5062 & 0.3563 & 0.3849 & 0.5075 \\ \hline netscience & 0.4556 & 0.4597 & 0.4751 & 0.4680 & 0.5011 & 0.5118 & 0.4519 & 0.3207 & 0.5123 \\ \hline dwt\_419 & 0.3016 & 0.3151 & 0.2404 & 0.4085 & 0.2666 & 0.2303 & 0.2796 & 0.2929 & 0.2424 \\ \hline powerlaw500 & 0.5542 & 0.5630 & 0.5433 & 0.5712 & 0.5593 & 0.5432 & **0.4237** & 0.4683 & 0.5604 \\ \hline block model\_500 & 0.5396 & 0.6551 & 0.6832 & 0.6945 & 0.7024 & 0.7315 & 0.4377 & 0.4430 & 0.7151 \\ \hline connected watts 500 & 0.5788 & 0.5731 & 0.5646 & 0.5852 & 0.6347 & 0.6513 & 0.5789 & 0.5764 & 0.6808 \\ \hline grid cluster & 0.3767 & 0.3831 & 0.3767 & 0.3823 & 0.3456 & 0.3863 & 0.4810 & 0.3491 & 0.4742 \\ \hline price\_1000 & 0.7103 & 0.7108 & 0.7228 & 0.7313 & 0.7400 & 0.7468 & 0.6015 & **0.5297** & 0.943 \\ \hline connected watts\_1000 & 0.6217 & 0.6712 & 0.7870 & 0.6199 & 0.8563 & 0.8472 & 0.6283 & **0.6690** & 0.8390 \\ \hline powerlaw1000 & 0.6568 & 0.6562 & 0.6480 & 0.6382 & 0.6241 & 0.6553 & **0.3956** & 0.4266 & 0.6364 \\ \hline block model\_1000 & 0.5753 & 0.5552 & 0.5593 & 0.5179 & 0.7289 & 0.7277 & **0.4703** & 0.4880 & 0.7822 \\ \hline dwt\_1005 & 0.4990 & 0.4517 & 0.4417 & 0.4586 & 0.4499 & 0.4826 & 0.4692 & 0.4581 & 0.4372 \\ \hline three9 & 0.7337 & 0.6082 & 0.8321 & 0.6625 & 0.8012 & 0.8611 & 0.6690 & **0.5476** & 0.9242 \\ \hline CSDnd & 0.5816 & 0.5797 & 0.5711 & 0.5733 & 0.6015 & 0.6128 & 0.6310 & **0.5437** & 0.6051 \\ \hline fpga & 0.5199 & 0.5875 & 0.6043 & 0.6521 & 0.8844 & 0.6598 & **0.4136** & 0.5179 & 0.6818 \\ \hline sierpinski3d & 0.6226 & 0.5729 & 0.5313 & 0.5439 & 0.5192 & **0.4252** & 0.4504 & 0.4554 & 0.4600 \\ \hline EVA & 0.4544 & 0.4690 & 0.4822 & 0.4797 & 0.5070 & 0.5123 & 0.8763 & **0.2295** & 0.5629 \\ \hline \end{tabular}
\end{table}
Table 1: NE scores on LGS for varying values of \(k\) (left) and on competing algorithms (right). The colormap is normalized by row with dark orange representing the lowest score (best) and dark purple representing the highest (worst). Bold text indicates the lowest score in that row.
this metric. As expected, in many cases, LGS 'transitions' from tsNET to UMAP then finally to MDS as one goes from left to right, increasing \(k\).
Next, we report the average CD scores for the graphs in our benchmark in Table 2. Unlike the NE values which increase as we increase \(k\) and the stress
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \cline{2-11} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{k=32} & \multicolumn{2}{c|}{k=64} & \multicolumn{2}{c|}{k=85} & \multicolumn{2}{c|}{k=100} & \multicolumn{2}{c|}{k=150} & \multicolumn{2}{c|}{k=200} & \multicolumn{2}{c|}{tsnet} & \multicolumn{2}{c|}{urnap} & \multicolumn{1}{c|}{mds} \\ \hline lesmis & 0.1988 & 0.1702 & 0.1673 & 0.1670 & 0.1665 & 0.1656 & 0.2290 & 0.4058 & 0.1661 \\ \hline can, 96 & 0.1782 & 0.1526 & 0.1400 & 0.1390 & 0.1391 & 0.1391 & 0.2835 & 0.2237 & 0.1393 \\ \hline football & 0.2719 & 0.2620 & 0.2578 & 0.2549 & 0.2549 & 0.2544 & 0.3521 & 0.3868 & 0.2548 \\ \hline rais11 & 0.1636 & 0.1685 & 0.1774 & 0.1474 & 0.2150 & 0.2426 & 0.3167 & 0.4076 & 0.1251 \\ \hline mesh3el & 0.1125 & 0.0655 & 0.0328 & 0.0482 & 0.0613 & 0.0050 & 0.0858 & 0.0251 & 0.0049 \\ \hline connected\_watts\_300 & 0.4981 & 0.2359 & 0.2353 & 0.2267 & 0.2000 & 0.2081 & 0.4218 & 0.4072 & 0.1896 \\ \hline block\_model\_300 & 0.3592 & 0.3132 & 0.3087 & 0.3051 & 0.3052 & 0.2299 & 0.3869 & 0.4437 & 0.2963 \\ \hline powerlaw300 & 0.1975 & 0.1848 & 0.1741 & 0.1668 & 0.1644 & 0.1532 & 0.2619 & 0.3181 & **0.1505** \\ \hline netscience & 0.1914 & 0.1539 & 0.1604 & 0.1627 & 0.1493 & 0.1472 & 0.3296 & 0.5188 & **0.1131** \\ \hline dwt\_419 & 0.0686 & 0.0638 & 0.0860 & 0.1206 & 0.0521 & 0.0759 & 0.1447 & 0.1646 & **0.0312** \\ \hline powerlaw500 & 0.3360 & 0.3224 & 0.3160 & 0.3108 & 0.3047 & 0.3016 & 0.4047 & 0.4510 & **0.2855** \\ \hline connected watts\_500 & 0.3880 & 0.3380 & 0.3367 & 0.3279 & 0.3055 & 0.3019 & 0.3884 & 0.4332 & **0.2826** \\ \hline grid\_cluster & 0.5434 & 0.5335 & 0.5841 & 0.3049 & 0.2553 & 0.2478 & 0.4195 & 0.3760 & **0.2373** \\ \hline price\_1000 & 0.3543 & 0.2428 & 0.2320 & 0.2347 & 0.2127 & 0.2011 & 0.3654 & 0.2870 & **0.1848** \\ \hline connected watts\_1000 & 0.5578 & 0.3867 & 0.1963 & 0.9221 & 0.1698 & 0.8549 & 0.4169 & 0.4460 & **0.3166** \\ \hline powerlaw1000 & 0.3012 & 0.2418 & 0.2344 & 0.2410 & 0.2129 & 0.2196 & 0.2970 & 0.3616 & **0.1815** \\ \hline block model\_1000 & 0.4888 & 0.3456 & 0.3444 & 0.3449 & 0.3292 & 0.3212 & 0.4076 & 0.4379 & **0.2921** \\ \hline dwt\_1005 & 0.1858 & 0.1206 & 0.0939 & 0.1165 & 0.0091 & 0.1355 & 0.3556 & 0.0669 & **0.0424** \\ \hline btree9 & 0.4993 & 0.3069 & 0.3510 & 0.2740 & 0.2818 & 0.2563 & 0.3539 & 0.3603 & **0.2307** \\ \hline cSpnd & 0.2642 & 0.2468 & 0.2218 & 0.1925 & 0.2051 & 0.1863 & 0.2943 & **0.2436** & **0.2446** \\ \hline rpa & 0.4441 & 0.3139 & 0.2688 & 0.2498 & 0.2157 & 0.2131 & 0.5010 & 0.4575 & **0.1679** \\ \hline sierpinks3d & 0.5083 & 0.3015 & 0.2097 & 0.2304 & 0.1763 & 0.1298 & 0.5134 & 0.2525 & **0.1252** \\ \hline EVA & 0.6843 & 0.3688 & 0.7862 & 0.6387 & 0.4662 & 0.6394 & 0.3535 & 0.4913 & **0.1807** \\ \hline \end{tabular}
\end{table}
Table 3: Stress scores following the same scheme as Table 1
values which decrease as we increase \(k\), the best CD values are obtained for intermediate values of \(k\). This confirms that a balance between local and global optimization is needed to capture intermediate structures.
We compute and report the averaged stress scores in Table 3. MDS is consistently good at minimizing the stress, but we see a salient trade-off between the stress scores of LGS's tsNET-like embeddings with low \(k\) values and LGS's MDS-like embeddings with high \(k\) values. When we look at small neighborhoods such as \(k=16\), we tend to see high stress values; however, the values decrease as we expand the neighborhoods. UMAP does not seem to capture global structure well for these graphs, often having the highest stress values.
#### 5.0.2 Effect of \(k\) on Evaluation Metrics
To visually explain LGS behavior, we plot example NE, CD, and stress curves with respect to \(k\). In Fig. 6, we show two separate plots for each graph. It can be seen that LGS often falls between the NE and stress values that tsNET and MDS reach. These plots show what we expect to see: as \(k\) increases, NE increases and stress decreases. In Fig. 7(a-b), we plot the CD values of our layouts together with those of tsNET, UMAP, and MDS for comparison. In many layouts LGS indeed has the lowest CD score. Values of \(k\) were chosen to be representative of the local-global tradeoff.
## 6 Discussion and Limitations
We described LGS: an adaptable algorithmic framework for embeddings that can prioritize local neighborhoods, global structure, or a balance between the two. LGS provides flexible structure preservation choices with comparable embedding quality to previous single-purpose methods (local or global), while also outperforming state-of-the-art methods in preserving intermediate structures.
There are several limitations: Our results are based on a small number of graphs, and additional systematic experimentation would further support the usefulness of LGS. Some experiments for high-dimensional datasets are in Appendix D. LGS modifies MDS's objective function to accommodate varying neighborhood sizes; similarly, one could adapt the KL divergence cost function of t-SNE. Note that t-SNE's perplexity parameter ostensibly controls the size of a neighborhood, but high perplexity values do not result in global structure preservation [39].
The LGS algorithm has several hyperparameters, including \(c\), \(\alpha\), and \(k\). We provide default values for \(c\) and \(\alpha\) based on experiments, and leave \(k\) as a true hyperparameter. Our intention is for a visualization designer to adjust \(k\) as needed, generating a spectrum of embeddings to get a sense of both local and global properties of a dataset. An interactive LGS version is not yet available. While LGS runs in seconds for graphs with a few thousand vertices, the running time can become untenable for larger instances, due to the \(O(|V|^{2})\) optimization per epoch and the APSP pre-processing. While LGS's runtime is comparable with those of tsNET and MDS (see Fig. 7(c)), both can be sped up through the use of approximations [12, 32, 42]; speeding up LGS is potential future work. |
2309.11045 | $Cp(X)$ for Hattori Spaces | Motivated by the main results of the articles by Hattori and Bouziad, we seek
to answer the following questions about Hattori spaces. Let A be a subset of
the real line, then:
Given a compact set $K$ in the Euclidean topology, under what conditions is
$K$ compact in the Hattori space $H(A)$? When is $H(A)$ a quasi-metrizable
space? When is $H(A)$ a semi-stratifiable space? When is $C_p(H(A))$ a normal
space? When is $C_p(H(A))$ a Lindel\"of space?
We obtain complete answers for 3 out of these 5 questions, while the last
ones remain with partial answers, among them: \
Theorem: If $\mathbb{R}\setminus A$ is analytic, then $C_p(H(A))$ is not
normal.
Moreover when we work on the Solovay Model we can improve the previous result
to only require $\mathbb{R}\setminus A$ to be uncountable. | Elmer Enrique Tovar-Acosta | 2023-09-20T03:48:24Z | http://arxiv.org/abs/2309.11045v1 | # \(Cp(X)\) for Hattori Spaces
###### Abstract
Motivated by the main results of the articles by Hattori [4] and Bouziad [3], we seek to answer the following questions about Hattori spaces. Let \(A\subseteq\mathbb{R}\), then:
1. Given a compact set \(K\) in the Euclidean topology, under what conditions is \(K\) compact in the Hattori space \(H(A)\)?
2. When is \(H(A)\) a quasi-metrizable space?
3. When is \(H(A)\) a semi-stratifiable space?
4. When is \(C_{p}(H(A))\) a normal space?
5. When is \(C_{p}(H(A))\) a Lindelof space?
We obtain complete answers for 3 out of these 5 questions, while the last ones remain with partial answers, among them:
**Theorem**. If \(\mathbb{R}\setminus A\) is analytic, then \(C_{p}(H(A))\) is not normal.
Moreover when we work on the Solovay Model we can improve the previous result to only require \(\mathbb{R}\setminus A\) to be uncountable.
_Keywords:_ Hattori spaces, spaces of continuous functions, Lindelof property, network weight, generalized metric spaces. _2020 MSC:_ 54A10, 54A25, 54C05.
## 1 Introduction
In [4], Hattori defines a family of intermediate topologies between the Euclidean and Sorgenfrey topologies using local bases. Namely, for every \(A\subseteq\mathbb{R}\) a topology \(\tau(A)\) on \(\mathbb{R}\) is defined as follows:
* For every \(x\in A\), the set \(\{(x-\varepsilon,x+\varepsilon)\mid\varepsilon>0\}\) remains a local base at \(x\).
* On the other hand, if \(x\in\mathbb{R}\setminus A\), then \(\{[x,x+\varepsilon)\mid\varepsilon>0\}\) is a local base at \(x\).
Note that we always have \(\tau_{e}\subseteq\tau(A)\subseteq\tau_{s}\) where \(\tau_{e}\) is the euclidean topology and \(\tau_{s}\) is the Sorgenfrey one.
We denote the space \((\mathbb{R},\tau(A))\) as \(H(A)\) and refer to it as the Hattori space associated to \(A\).
It is well-known that the Euclidean and Sorgenfrey topologies have a somewhat curious relationship - while they share several topological properties, in others, they are entirely opposite. This prompts a natural question: under what conditions do Hattori's spaces preserve the properties of either of these topologies? Examples of this can be found in papers [3] and [4], where two particularly notable patterns emerge:
* If the property is shared by both topologies, then any Hattori space maintains the property.
* If the complement of \(A\) is countable, then the Hattori space associated with \(A\) maintains the metric/completeness properties of the Euclidean topology (see [3]).
These two patterns raise some natural questions:
* Since both the Euclidean and Sorgenfrey topologies are quasimetrizable, is every Hattori space quasimetrizable? Note that any answer to this question breaks one of the previous patterns.
* Do these patterns hold true if we consider the space of continuous functions?
In this paper we work on the latter questions; more specifically, we try to determine when \(C_{p}(H(A))\) is a normal/Lindelof space, and along the way we construct closed and discrete families in \(C_{p}(H(A))\); this could be considered the main focus of this paper. We also find under which conditions \(H(A)\) is a quasimetrizable/semi-stratifiable space, and we give a characterization of compact sets in \(H(A)\). We start by fixing some notation and recalling some definitions.
## 2 Preliminaries
Throughout this paper we will denote the Sorgenfrey line by \(\mathbb{S}\), while we will use \(\mathbb{S}^{\star}\) to refer to the "inverted" Sorgenfrey line, that is, the space that has as local bases the intervals of the form \((a,b]\); we will also denote this space by \(Y\). As always, \(\mathbb{R}\) represents the real line with its usual topology. \(C_{p}(X)\) represents the space of real-valued continuous functions defined on \(X\) with the topology of pointwise convergence. Lastly, as we said before, \(H(A)\) represents the Hattori space associated with \(A\).
We refer to the intervals \((x-\varepsilon,x+\varepsilon)\) and \([x,x+\varepsilon)\) as \(B_{e}(x,\varepsilon)\) and \(B_{s}(x,\varepsilon)\), respectively. Also, for a fixed \(A\subseteq\mathbb{R}\) we define:
\[B_{A}(x,\varepsilon)=\begin{cases}B_{e}(x,\varepsilon),\text{ if }x\in A\\ B_{s}(x,\varepsilon),\text{ otherwise}\end{cases}\]
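Purely as an illustration (this is not part of the formal development), membership in these basic neighborhoods can be decided operationally, representing \(A\) by a membership predicate:

```python
def in_basic_nbhd(y, x, eps, in_A):
    """Decide whether y lies in B_A(x, eps)."""
    if in_A(x):
        # Euclidean case: the symmetric interval (x - eps, x + eps).
        return x - eps < y < x + eps
    # Sorgenfrey case: the half-open interval [x, x + eps).
    return x <= y < x + eps

# Example: with A = R \ {0}, the point -0.1 is near 0 in the Euclidean
# sense but not in H(A), where 0 only has neighborhoods [0, eps).
assert not in_basic_nbhd(-0.1, 0.0, 0.5, in_A=lambda t: t != 0.0)
```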
The symbols \(\mathrm{cl}_{e}\), \(\mathrm{cl}_{s}\), \(\mathrm{cl}_{H(A)}\) will denote the closures in the euclidean topology, the Sorgenfrey one, and in \(H(A)\), respectively.
We will need the concepts of quasimetrizable space, \(\gamma\)-space, semi-stratifiable space, Moore space, \(\sigma\)-space, \(\Sigma\)-space, \(p\)-space, quasicomplete space, and Cech-complete space. We will use most of these to state exactly one theorem, so we prefer to refer the reader to [9] (explicitly, Chapter 10) for the first six definitions, to [5] for the definitions of \(p\)-space and quasicomplete, and lastly to [7] for the definition of Cech-complete space.
The main results concerning all of this definitions is summarized in the following proposition:
**Proposition 1**.:
* _Every Moore space is both a quasimetrizable space and a semi-stratifiable space, and every quasimetrizable space is a_ \(\gamma\)_-space._
* _Every Cech-Complete space is a_ \(p\)_-space, and every_ \(p\)_-space is quasicomplete._
* _A Tychonoff space is a Moore space if and only if it is a semi-stratifiable_ \(p\)_-space_ _[_5_]__._
* _A space_ \(X\) _is a_ \(\sigma\)_-space if and only if_ \(X\) _is a_ \(\Sigma\)_-space with a_ \(G_{\delta}\) _diagonal._
Lastly an uncountable regular cardinal \(\kappa\) is a caliber of a space \(X\) if, for any family \(\mathcal{U}\subseteq\tau\setminus\{\emptyset\}\) of cardinality \(\kappa\), there exists \(\mathcal{U}^{\prime}\subseteq\mathcal{U}\) such that \(|\mathcal{U}^{\prime}|=\kappa\) and \(\bigcap\mathcal{U}^{\prime}\neq\emptyset.\) All undefined terms will be interpreted as in [7].
## 3 Metric-like properties of \(H(A)\) and some compactness results
We start our study of \(H(A)\) by answering the following question: under which conditions on \(K\subseteq\mathbb{R}\) is \(K\) compact in \(H(A)\)? Since \(\tau(A)\) contains the Euclidean topology, if \(K\) is compact in \(H(A)\) then it is compact in \(\mathbb{R}\), so we need \(K\) to be closed and bounded in \(\mathbb{R}\). We have the following full characterization of compact sets in \(H(A)\):
**Proposition 2**.: _Let \(A,K\subseteq\mathbb{R}\) with \(|K|\geq\aleph_{0}\). The following are equivalent:_
1. \(K\) _is compact in_ \(H(A)\)_._
2. \(K\) _is compact in the Euclidean topology and, moreover:_
   * \(K\setminus A\) _is countable._
   * _For every_ \(x\in K\setminus A\) _there exists_ \(\varepsilon>0\) _such that_ \((x-\varepsilon,x)\cap K=\emptyset\)_._
Proof. Start by supposing that \(K\) is compact in \(H(A)\). Since \(\tau_{e}\subseteq\tau(A)\), \(K\) is compact in the Euclidean topology, which gives the first part of (2). Now let's verify the countability condition. Note that \(K\) is compact and submetrizable (again, because \(\tau_{e}\subseteq\tau(A)\)), so \(K\) is a compact space with a \(G_{\delta}\) diagonal, hence a metrizable space (this is a classic result by Šneider [12]; see also [9, Ch. 9, § 2]). So \(K\setminus A\), as a subspace of a metric space, is also metrizable. On the other hand, the topology it inherits as a subspace of \(H(A)\) is the one it inherits as a subspace of \(\mathbb{S}\) (this is because \(K\setminus A\subseteq\mathbb{R}\setminus A\); see 2.1 in [4]). Since the only metrizable subspaces of \(\mathbb{S}\) are countable, we conclude that \(K\setminus A\) is countable.
Let's move on to the last condition of (2). Let \(x\in K\setminus A\) and suppose, for the sake of contradiction, that for every \(\varepsilon>0\), \((x-\varepsilon,x)\cap K\neq\emptyset\). In particular, for each \(n\in\mathbb{N}\) take \(x_{n}\in(x-\frac{1}{n},x)\cap K\), and wlog suppose that \(x_{n}<x_{n+1}\); note that \(x_{n}\to x\). Take the following open cover of \(K\): \(\mathcal{U}=\{(-\infty,x_{2})\}\cup\{(x_{n-1},x_{n+1})\ |\ n\geq 2\}\cup\{[x,\infty)\}\) (note that the last set is open because \(x\notin A\)). \(\mathcal{U}\) lacks finite subcovers, since every element of our sequence belongs to exactly one element of \(\mathcal{U}\), thus contradicting the compactness of \(K\). With this, we conclude the (1)\(\to\) (2) implication.
Now let's prove the reverse implication: suppose that \(K\) satisfies (2). Let \(\mathcal{U}\subseteq\tau(A)\) be an open cover of \(K\) in \(H(A)\). For each \(x\in K\) let \(U_{x}\in\mathcal{U}\) be such that \(x\in U_{x}\). We can take \(\varepsilon_{x}>0\) that satisfies the following conditions:
* If \(x\in A\), then \((x-\varepsilon_{x},x+\varepsilon_{x})\subseteq U_{x}\).
* If \(x\notin A\), then \([x,x+\varepsilon_{x})\subseteq U_{x}\) and \((x-\varepsilon_{x},x)\cap K=\emptyset\). (\(\star\))
Let's consider \(\mathcal{V}=\{B_{e}(x,\varepsilon_{x})\ |\ x\in K\}\). This is an open cover for \(K\) consisting of open sets in the euclidean topology, and therefore exists a finite set \(F\subseteq K\) such that
\[K\subseteq\bigcup_{x\in F}B_{e}(x,\varepsilon_{x})\]
But (\(\star\)) implies that, if \(x\in K\setminus A\), then we can remove the left side of the interval \((x-\varepsilon_{x},x+\varepsilon_{x})\) without removing points of \(K\), thus:
\[K\subseteq\bigcup_{x\in F}B_{A}(x,\varepsilon_{x})\subseteq\bigcup_{x\in F}U _{x}\]
And so, we have found a finite subcover of \(\mathcal{U}\). With this, we conclude that \(K\) is compact in \(H(A)\).
The countability condition in (2) does not come as a surprise, because it is often encountered when demonstrating that specific properties of \(\mathbb{R}\) are preserved in \(H(A)\). On the other hand, the last condition may appear sudden or unrelated. However, upon closer examination and with the following reinterpretation in mind, it does make sense: it states that if \(x\in K\setminus A\), then \(x\) is not an accumulation point of \(K\) in \(Y=\mathbb{S}^{\star}\), and so we can restate our Proposition as follows:
**Proposition 3**.: _Let \(A,K\subseteq\mathbb{R}\) with \(|K|\geq\aleph_{0}\). Then \(K\) is compact in \(H(A)\) if and only if \(K\) is compact in the euclidean topology, \(K\setminus A\) is countable and \(\text{der}_{Y}(K)\cap(K\setminus A)=\emptyset\)._
Since \(H(A)\) is first-countable and hereditarily Lindelof, we have actually proven a bit more, namely:
**Theorem 1**.: _Let \(A,K\subseteq\mathbb{R}\) with \(K\) infinite and compact in the euclidean topology. Then the following are equivalent:_
* \(K\) _is compact in_ \(H(A)\)_._
* \(K\) _is countably compact in_ \(H(A)\)_._
* \(K\) _is sequentially compact in_ \(H(A)\)_._
* \(|K\setminus A|\leq\aleph_{0}\) _and_ \(\text{der}_{Y}(K)\cap(K\setminus A)=\emptyset\)_._
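For a concrete illustration of Theorem 1, take \(A=\mathbb{R}\setminus\{0\}\) and \(K=\{0\}\cup\{\frac{1}{n}\mid n\in\mathbb{N}\}\). Here \(K\setminus A=\{0\}\) is countable, and \(K\) accumulates at \(0\) only from the right, so \(\text{der}_{Y}(K)\cap(K\setminus A)=\emptyset\) and \(K\) is compact in \(H(A)\). By contrast, \(-K=\{0\}\cup\{-\frac{1}{n}\mid n\in\mathbb{N}\}\) fails the last condition at \(0\), since \(-\frac{1}{n}\to 0\) from the left, and hence \(-K\) is not compact in \(H(A)\).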
Now we move on to the question: when is \(H(A)\) a semi-stratifiable/quasimetrizable space? The first question will be answered with a classical condition, namely \(|\mathbb{R}\setminus A|\leq\aleph_{0}\); the second question will require a more elaborate condition. Let's start by addressing the semi-stratifiable case with a couple of simple results:
**Lemma 2**.: _Let \(A\subseteq\mathbb{S}\) with a countable network. Then \(A\) itself is countable._
Take a subset \(A\) of \(\mathbb{S}\) and suppose that \(A\) is semi-stratifiable. Note that \(A\) is a quasimetrizable space (since \(\mathbb{S}\) is), so \(A\) is a semi-stratifiable, regular, quasimetrizable space, and thus a Moore space (see [9, Ch. 10, § 8]). Since \(\mathbb{S}\) is hereditarily Lindelof, \(A\) is a Lindelof Moore space, thus second countable, and the last Lemma allows us to conclude:
**Proposition 4**.: _Let \(A\subseteq\mathbb{S}\) be semi-stratifiable. Then \(A\) is countable._
The following Theorem can be obtained by using the results of [3, § 3] and Proposition 1. It contains most of the results concerning metric/completeness properties of Hattori spaces.
**Theorem 3**.: _Let \(A\subseteq\mathbb{R}\). The following are equivalent:_
* \(\mathbb{R}\setminus A\) _is countable._
* \(H(A)\) _is Polish._
* \(H(A)\) _is completely metrizable._
* \(H(A)\) _is Cech-Complete._
* \(H(A)\) _is a Moore space._
* \(H(A)\) _is a p-space._
* \(H(A)\) _is quasicomplete._
* \(H(A)\) _is a_ \(\sigma\)_-space._
* \(H(A)\) _is a_ \(\Sigma\)_-space._
We are ready to add "is semi-stratifiable" to these equivalences. Note that if \(H(A)\) is a semi-stratifiable space, then \(\mathbb{R}\setminus A\) is semi-stratifiable too, but this set inherits the same topology as a subspace of \(\mathbb{S}\), thus Proposition 4 allow us to conclude that \(\mathbb{R}\setminus A\) is countable. The other implication is trivial, so we get:
**Proposition 5**.: _Let \(A\subseteq\mathbb{R}\). Then \(H(A)\) is semi-stratifiable iff \(\mathbb{R}\setminus A\) is countable._
Since both \(\mathbb{R}\) and \(\mathbb{S}\) are quasimetrizable spaces, intuition suggests that \(H(A)\) should be quasimetrizable regardless of the choice of \(A\). This is similar to other properties shared between \(\mathbb{R}\) and \(\mathbb{S}\), such as being hereditarily Lindelof and separable, being a Baire space, etc. However, this is not the case here, and the proof of this fact can be found in [2] and [8], where Bennett proves what is known as the "\(\gamma\)-space conjecture" (see Theorem 3.1 of [2]) for separable generalized ordered spaces and Kofner later modified said Theorem a little (see Theorem 10 of [8]). We must mention that the credit for following these ideas belongs to Li and Lin in [11], and we will expand on their ideas by adding a couple of simple results connecting Bennett's and Kofner's results. We present Bennett's theorem adapted to our context.
**Theorem 4**.: _Let \(A\subseteq\mathbb{R}\). The following are equivalent:_
* \(H(A)\) _is a_ \(\gamma\)_-space._
* _There exists a sequence of sets_ \((R_{n})_{n=1}^{\infty}\) _such that:_
* \(\mathbb{R}\setminus A=\bigcup_{n=1}^{\infty}R_{n}\)__
* _For each_ \(p\in A\cap\text{cl}_{e}(R_{n})\)_, there exists_ \(y<p\) _such that_ \((y,p)\cap\text{cl}_{e}(R_{n})=\emptyset\)_._
* \(H(A)\) _is quasimetrizable._
This technically completely solves our question, but it is not very enlightening. Upon closer analysis of Bennett's proof, it turns out that we can deduce more, leading to the following result, of which we will prove the first implication for the sake of completeness:
**Theorem 5**.: _Let \(A\subseteq\mathbb{R}\). The following are equivalent:_
* \(H(A)\) _is a_ \(\gamma\)_-space._
* _There exists a sequence of sets_ \((R_{n})_{n=1}^{\infty}\) _such that:_
* \(\mathbb{R}\setminus A=\bigcup_{n=1}^{\infty}R_{n}\)_._
* _For each_ \(p\in A\cap\mathrm{cl}_{e}(R_{n})\)_, there exists_ \(y<p\) _such that_ \((y,p)\cap\mathrm{cl}_{e}(R_{n})=\emptyset\)_._
* _If_ \((x_{j})\subseteq R_{n}\) _is an increasing sequence, then it does not converge in_ \(H(A)\)_._
* \(H(A)\) _is quasimetrizable._
Proof.: Let \(g:\mathbb{N}\times H(A)\to\tau(A)\) be a \(g\)-function that satisfies the conditions of Definition 3, let \(Q=\{q_{k}\mid k\in\mathbb{N}\}\) be an enumeration of the rationals, and let \(Q_{k}=\{q_{i}\mid i\leq k\}\). For every \(n,m,k\in\mathbb{N}\) we define:
\[R(n,m,k)=\{x\in\mathbb{R}\mid g(n,x)\subseteq[x,\infty)\ \&\ \alpha(n,x)=m\ \&\ g(m,x)\cap Q_{k}\setminus\{x\}\neq\emptyset\}\]
Let's see that \(\mathbb{R}\setminus A=\bigcup_{(n,m,k)\in\mathbb{N}^{3}}R(n,m,k)\). Given \(x\in\mathbb{R}\setminus A\), we know that \([x,\infty)\) is an open set, and since \(g\) is a \(\gamma\)-function, this implies that there exists \(n\in\mathbb{N}\) such that \(g(n,x)\subseteq[x,\infty)\). This gives us our first parameter. For the second parameter, we can take \(m=\alpha(n,x)\). Finally, since \(\mathbb{Q}\) is a dense set in \(H(A)\) and \(g(m,x)\setminus\{x\}\) is a non-empty open set, there exists \(k\in\mathbb{N}\) such that \(Q_{k}\cap g(m,x)\setminus\{x\}\neq\emptyset\). Thus, \(x\in R(n,m,k)\).
The other containment is simple: the first condition defining \(R(n,m,k)\) implies that \([x,\infty)\) is an open set, and therefore \(x\in\mathbb{R}\setminus A\).
Let's consider a sequence \((x_{j})\subseteq R(n,m,k)\) such that \(x_{j}<p\) for all \(j\in\mathbb{N}\). We will see that \((x_{j})\) cannot converge to \(p\) in \(H(A)\). If \(p\in\mathbb{R}\setminus A\), then it is clear that \((x_{j})\) cannot converge to \(p\) in \(H(A)\). Therefore, necessarily \(p\in A\); let's suppose, for the sake of contradiction, that the sequence does converge.
**Claim:** One of the following cases applies:
* There exists \(q\in Q_{k}\) such that \(p\leq q\) and \((-\infty,q)\cap Q_{k}\setminus\{p\}=\emptyset\).
* There exist \(a,q\in Q_{k}\) such that \(a<q\), \(p\in(a,q]\) and \((a,q)\cap Q_{k}\setminus\{p\}=\emptyset\).
First, let's see that it is impossible that \(q<p\) for every \(q\in Q_{k}\). Assume it happens, and wlog let \(q_{k}=\max Q_{k}\). Since \((x_{j})\) converges to \(p\), there exists \(N\in\mathbb{N}\) such that \(x_{s}\in(q_{k},p)\) for every \(s\geq N\). Since \(x_{s}\in R(n,m,k)\) we conclude \(g(n,x_{s})\subseteq[x_{s},\infty)\), \(\alpha(n,x_{s})=m\) and \(Q_{k}\cap g(m,x_{s})\setminus\{x_{s}\}\neq\emptyset\); take \(q_{i}\) in the last intersection. It follows that \(g(m,q_{i})\subseteq g(n,x_{s})\subseteq[x_{s},\infty)\), and thus \(q_{i}\in[x_{s},\infty)\), which is a contradiction since \(q_{i}\leq q_{k}<x_{s}\). We conclude that there exists some \(q\in Q_{k}\) such that \(p\leq q\). The cases from the claim come from the following:
* If for every \(q\in Q_{k}\) we have \(p\leq q\), it is enough to take \(q=\min Q_{k}\) so the first case from the claim is satisfied.
* There exist \(q_{i},q_{j}\in Q_{k}\) such that \(q_{i}<p\leq q_{j}\). In this case we take \(a=\max\{x\in Q_{k}\mid x<p\}\) and \(q=\min\{x\in Q_{k}\mid p\leq x\}\), and it follows that these points satisfy the second point of the claim.
Continuing the proof, now we assert that there exist \(N_{1}\) and \(N_{2}\) in \(\mathbb{N}\) such that:
* For all \(i\geq N_{1}\), \(q\in g(m,x_{i})\) (this \(q\) is the same from the corresponding case of the claim).
* For all \(i\geq N_{2}\), \(x_{i}\in g(m,p)\).
We prove the existence of \(N_{1}\) following the cases from the claim:
* In this case \(N_{1}=1\). For all \(i\in\mathbb{N}\) we have \(g(m,x_{i})\subseteq g(n,x_{i})\subseteq[x_{i},\infty)\), where the first inclusion follows from \(m=\alpha(n,x_{i})\). Since \((-\infty,q)\cap Q_{k}\setminus\{p\}=\emptyset\) and \(g(m,x_{i})\cap Q_{k}\setminus\{x_{i}\}\neq\emptyset\), we conclude that there exists \(q_{j}\in Q_{k}\cap g(m,x_{i})\) such that \(q_{j}\geq q\geq p>x_{i}\); thus the interval \([x_{i},q_{j}]\) is contained in \(g(m,x_{i})\), because this last set is convex. This guarantees \(q\in g(m,x_{i})\).
* By convergence, there exists \(N_{1}\in\mathbb{N}\) such that, for all \(i\geq N_{1}\), we have \(x_{i}\in(a,p)\subseteq(a,q)\). Let \(z\in Q_{k}\cap g(m,x_{i})\setminus\{x_{i}\}\); since \(g(m,x_{i})\subseteq[x_{i},\infty)\), the choice of \(a\) and \(q\) guarantees \(z\geq q\). Thus \([x_{i},z]\subseteq g(m,x_{i})\), and it follows that \(q\in g(m,x_{i})\).
With this, we have proven the existence of \(N_{1}\) for both cases. As for \(N_{2}\), it is much simpler as we can directly apply the definition of convergence to the open set \(g(m,p)\).
Now, let \(i,j>N_{1}+N_{2}\) such that \(x_{i}<x_{j}\) (we can always do this fixing \(i\) and using convergence). Notice that \(x_{j},q\in g(m,x_{j})\) and thus \([x_{j},q]\subseteq g(m,x_{j})\), from where we conclude that \(p\in g(m,x_{j})\). Since \(g\) is a \(\gamma\)-function, we deduce
\[g(m,p)\subseteq g(n,x_{j})\subseteq[x_{j},\infty)\]
Let's remember that \(x_{i}\in g(m,p)\), thus \(x_{j}\leq x_{i}\), which is a contradiction. Therefore, \((x_{j})\) does not converge in the euclidean sense to \(p\), but since \(p\in A\), this implies that the sequence does not converge to \(p\) in \(H(A)\).
With what we have done so far, we have concluded that if \((x_{n})\) is a sequence contained in \(R(n,m,k)\), and \(x_{i}<p\) for all \(i\), then the sequence does not converge to \(p\) in \(H(A)\). As a particular case, no increasing sequence contained in \(R(n,m,k)\) can converge in the Euclidean sense.
Moreover, if \(p\in A\cap\operatorname{cl}_{e}(R(n,m,k))\), then there exists \(\varepsilon>0\) such that \((p-\varepsilon,p)\cap R(n,m,k)=\emptyset\); otherwise we could construct a sequence fully contained in \(R(n,m,k)\) that converges to \(p\) and stays strictly to the left of \(p\), but we just proved that this is impossible. Notice that for this \(\varepsilon>0\) we have \((p-\varepsilon,p)\cap\operatorname{cl}_{e}(R(n,m,k))=\emptyset\). With this, we have simultaneously proven that both conditions we were looking for are satisfied for each \(R(n,m,k)\), so we conclude this proof.
This new version of the Theorem tells us a bit more, but it is still not clear enough which sets satisfy b). However, upon closer analysis, we obtain the following:
**Proposition 6**: _Suppose that \(H(A)\) is a quasimetrizable space, and let \(\mathbb{R}\setminus A=\bigcup_{n=1}^{\infty}R_{n}\) where each \(R_{n}\) satisfies \(b)\) from Theorem 5. Then for every \(n\in\mathbb{N}\), \(R_{n}\) is a closed set in \(\mathbb{S}^{\star}\). That is, \(\mathbb{R}\setminus A\) is an \(F_{\sigma}\) set in \(Y\)._
Proof.: Let's assume that there exists \(p\in\mathrm{cl}_{Y}(R_{n})\setminus R_{n}\). With this, for each \(m\in\mathbb{N}\), we can choose a point \(x_{m}\in(p-\frac{1}{m},p)\cap R_{n}\). Moreover, we can select these points in such a way that they give us an increasing sequence contained in \(R_{n}\) that converges to \(p\), which is impossible. We conclude that such a \(p\) does not exist, and therefore \(R_{n}\) is closed in \(Y\).
Now we have an important condition: the set \(\mathbb{R}\setminus A\) must be an \(F_{\sigma}\) set in \(Y\). Notice also that if \(\mathbb{R}\setminus A=\bigcup_{n=1}^{\infty}R_{n}\) is an \(F_{\sigma}\) set in \(Y\) (with each \(R_{n}\) closed in \(Y\)), then each \(R_{n}\) satisfies part b) of Theorem 4. To see this, suppose \(p\in A\cap\mathrm{cl}_{e}(R_{n})\). Then \(p\notin R_{n}=\mathrm{cl}_{Y}(R_{n})\), so there exists \(y<p\) such that \((y,p]\cap R_{n}=\emptyset\), and thus \((y,p)\cap R_{n}=\emptyset\), which implies \((y,p)\cap\mathrm{cl}_{e}(R_{n})=\emptyset\). With all this in mind, we can now give a definitive answer to our original question. Note that the following is essentially Theorem 10 of [8].
**Theorem 6**: _Let \(A\subseteq\mathbb{R}\). The following are equivalent:_
* \(H(A)\) _is a_ \(\gamma\)_-space._
* \(\mathbb{R}\setminus A\) _is a_ \(F_{\sigma}\) _in_ \(Y\)_._
* \(H(A)\) _is a quasimetrizable space._
To conclude this section, let's consider an explicit example of a set \(A\subseteq\mathbb{R}\) such that \(H(A)\) is not quasimetrizable. According to our theorem, it suffices to find a set \(B\) that is not an \(F_{\sigma}\) set in \(\mathbb{S}^{\star}\), which is a simpler task than the original problem.
**Example 1**: _A Hattori space that is not quasimetrizable. Note that since \(\mathbb{S}\) is a Baire space, \(\mathbb{S}^{\star}\) is also a Baire space. Moreover, \(\mathbb{R}\setminus\mathbb{Q}\) is a \(G_{\delta}\) set in \(Y\) because it is a \(G_{\delta}\) set in the Euclidean topology. From these two observations, we can conclude that if we simply follow the usual proof that \(\mathbb{Q}\) is not \(G_{\delta}\) in \(\mathbb{R}\), we can deduce that \(\mathbb{Q}\) is not \(G_{\delta}\) in \(\mathbb{S}^{\star}\), and therefore \(\mathbb{R}\setminus\mathbb{Q}\) is not an \(F_{\sigma}\) set in \(\mathbb{S}^{\star}\). Hence, we conclude that \(H(\mathbb{Q})\) is not a quasimetrizable space._
## 4 Lindelof property and normality of \(\boldsymbol{C_{p}(H(A))}\)
Let's move on to the question of when \(C_{p}(H(A))\) is a Lindelof space. In order to prove that \(C_{p}(H(A))\) is a Lindelof space, it is enough to prove that it has countable network weight. Our next Proposition gives us a clear answer to this problem.
**Proposition 7**: _Let \(A\subseteq\mathbb{R}\). Then \(\text{nw}(H(A))=|\mathbb{R}\setminus A|+\aleph_{0}\)._
Proof.: One inequality is trivial, so let's prove the other one. Let \(\mathcal{B}\) be a network for \(H(A)\); let's see that \(|\mathcal{B}|\geq|\mathbb{R}\setminus A|\). For each \(x\in\mathbb{R}\setminus A\) we have \([x,x+1)\in\tau(A)\), and therefore there exists \(B_{x}\in\mathcal{B}\) such that \(x\in B_{x}\subseteq[x,x+1)\); this assignment is one-to-one, so \(|\mathcal{B}|\geq|\mathbb{R}\setminus A|\). Thus \(\operatorname{nw}(H(A))\geq|\mathbb{R}\setminus A|+\aleph_{0}\).
Thanks to the previous proposition and recalling that \(\operatorname{nw}(X)=\operatorname{nw}(C_{p}(X))\), we obtain:
**Theorem 7**.: _Let \(A\subseteq\mathbb{R}\). If \(|\mathbb{R}\setminus A|\leq\aleph_{0}\) then \(C_{p}(H(A))\) is Lindelof._
Moreover, we can do more. First, let's recall the following result (see [13], problem 249):
**Proposition 8**.: _Let \(\omega_{1}\) be a caliber for a space \(X\). Then \(C_{p}(X)\) is a Lindelof \(\Sigma-\)space if and only if \(X\) has a countable network._
Since \(H(A)\) is separable, \(\omega_{1}\) is a caliber for \(H(A)\), thus we can use the last result and Proposition 7 to conclude:
**Theorem 8**.: _Let \(A\subseteq\mathbb{R}\). Then \(C_{p}(H(A))\) is a Lindelof \(\Sigma-\)space if and only if \(\mathbb{R}\setminus A\) is countable._
Let's move on with the normality of \(C_{p}(H(A))\). Since the condition \(|\mathbb{R}\setminus A|\leq\aleph_{0}\) seems to preserve many properties of \(\mathbb{R}\) to \(H(A)\), the natural question would be:
**Question 1**.: _Let \(A\subseteq\mathbb{R}\). Is \(C_{p}(H(A))\) a normal space if and only if \(|\mathbb{R}\setminus A|\leq\aleph_{0}\)?_
In the following we will try to solve this question. One implication is straightforward: if \(|\mathbb{R}\setminus A|\leq\aleph_{0}\), then \(H(A)\) is a separable metric space and thus \(C_{p}(H(A))\) is normal. So, the real question is:
**Question 2**.: _Let \(A\subseteq\mathbb{R}\). If \(C_{p}(H(A))\) is normal, is \(|\mathbb{R}\setminus A|\leq\aleph_{0}\)?_
To tackle this problem, we will proceed by contrapositive. If the set \(\mathbb{R}\setminus A\) satisfies certain conditions (in addition to being uncountable), then \(C_{p}(H(A))\) contains an uncountable closed discrete set, and therefore \(C_{p}(H(A))\) is not normal (by Jones' Lemma when the set has cardinality \(\mathfrak{c}\); see also the remarks before Theorem 9 for the general uncountable case).
One way to prove that \(C_{p}(\mathbb{S})\) is not normal is by showing that it contains a closed and discrete subspace of size \(\mathfrak{c}\). Explicitly, for every \(a\in(0,1)\) define the following function:
\[f_{a}(x)=\begin{cases}0&x\in(-\infty,-1)\\ 1&x\in[-1,-a)\\ 0&x\in[-a,a)\\ 1&x\in[a,1)\\ 0&x\in[1,\infty)\end{cases}\]
In this way, \(\{f_{a}\ |\ a\in(0,1)\}\) is a closed and discrete subspace of \(C_{p}(\mathbb{S})\) with cardinality \(\mathfrak{c}\) (a proof of this fact can be found in [6]).
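As a quick illustrative check (not part of the proof), the separating evaluations at \(a\) and \(-a\) used to show discreteness can be reproduced numerically:

```python
def f(a, x):
    """The step function f_a above: jumps at -1, -a, a, 1."""
    return 1 if (-1 <= x < -a) or (a <= x < 1) else 0

a = 0.5
for b in (0.3, 0.7):  # one witness on each side of a
    # The basic open set [f_a, {a, -a}, 1/2] excludes every f_b, b != a:
    assert abs(f(b, a) - f(a, a)) == 1 or abs(f(b, -a) - f(a, -a)) == 1
```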
We can replicate the previous construction on any interval that carries the Sorgenfrey topology, so we get the following partial results:
**Proposition 9**.: _Let \(A\subseteq\mathbb{R}\). If there exists \(a<b\) such that \([a,b]\subseteq\mathbb{R}\setminus A\), then \(C_{p}(H(A))\) contains a closed and discrete subspace of cardinality \(\mathfrak{c}\), and thus is not a normal space._
Containing a non trivial closed interval is equivalent to containing a non trivial open interval, so:
**Proposition 10**.: _Let \(A\subseteq\mathbb{R}\). If \(\text{int}_{e}(\mathbb{R}\setminus A)\neq\emptyset\), then \(C_{p}(H(A))\) contains a closed and discrete subspace of cardinality \(\mathfrak{c}\), and thus is not a normal space._
Particularly, if \(A\) is a closed set (and it is not \(\mathbb{R}\)) then \(C_{p}(H(A))\) is not a normal space.
From here on, the idea is to gradually weaken the condition of containing intervals as much as possible to reach a more general result. Let's start with the following:
**Proposition 11**.: _Let \(A\subseteq\mathbb{R}\) such that \(B=\mathbb{R}\setminus A\) is symmetric, meaning that if \(x\in B\), then \(-x\in B\), and \(|B|=\mathfrak{c}\). Additionally, let's assume that \(B\cap(0,1)\) is dense in \((0,1)\) and there exists \(q\in B\cap[1,\infty)\). Then \(C_{p}(H(A))\) contains a closed and discrete subset of size \(\mathfrak{c}\), and therefore it is not a normal space._
Proof.: Let \(B=\mathbb{R}\setminus A\). For each \(a\in(0,1)\cap B\) we define:
\[f_{a}(x)=\begin{cases}0&x\in(-\infty,-q)\\ 1&x\in[-q,-a)\\ 0&x\in[-a,a)\\ 1&x\in[a,q)\\ 0&x\in[q,\infty)\end{cases}\]
By defining \(f_{a}\) in this way, we note that the "jumps" occur at points where the local topology is that of the Sorgenfrey line, so we do not break continuity in \(H(A)\); hence the family \(\mathcal{F}=\{f_{a}\mid a\in B\cap(0,1)\}\) is contained in \(C_{p}(H(A))\). Let's proceed to show that it is indeed a closed and discrete set.
Let \(a\in B\cap(0,1)\). Let's consider the basic open set \(V=[f_{a},\{a,-a\},\frac{1}{2}]\) and verify the following cases:
**Case 1.- \(0<b<a<1\)**. Here we have that \(f_{b}(-a)=1\), while \(f_{a}(-a)=0\), so \(f_{b}\notin V\).
**Case 2.-**\(0<a<b<1\). This time \(f_{b}(a)=0\) but \(f_{a}(a)=1\), and so, once again, we conclude that \(f_{b}\notin V\).
From the previous cases we conclude that \(V\cap\mathcal{F}=\{f_{a}\}\), so \(\mathcal{F}\) is a discrete subset of \(C_{p}(H(A))\). Proving that it is closed is the difficult part. Note that our functions only take values in \(\{0,1\}\), if \(g\in C_{p}(H(A))\) is such that \(g(z)\notin\{0,1\}\) for some \(z\in\mathbb{R}\), then \(U=[g,\{z\},\min\{|g(z)|,|1-g(z)|\}]\) satisfies \(U\cap\mathcal{F}=\emptyset\). With the previous argument, we can suppose wlog that \(g\) only takes the values \(0\) or \(1\). Once again, we have two cases:
**Case 1.-** There exists \(a\in B\cap(0,1)\) such that \(g(a)\neq g(-a)\). Here we have two sub cases:
**Sub case 1.1.-** \(g(a)=0\) and \(g(-a)=1\). Let's take \(V=[g,\{\pm a\},\frac{1}{2}]\). Suppose there exists \(b\in B\cap(0,1)\) such that \(f_{b}\in V\). Then \(f_{b}(a)\in(-\frac{1}{2},\frac{1}{2})\), that is, \(f_{b}(a)=0\); analogously, \(f_{b}(-a)=1\). \(f_{b}(a)=0\) and \(a>0\) imply that \(a\in(0,b)\), so \(a<b\). On the other hand, \(f_{b}(-a)=1\) implies that \(-a\in[-q,-b)\), so \(-a<-b\) and thus \(a>b\). Since these two inequalities contradict each other, we deduce that such \(b\) cannot exist, so \(V\cap\mathcal{F}=\emptyset\).
**Sub case 1.2.-** \(g(a)=1\) and \(g(-a)=0\). Once again let's take \(V=[g,\{\pm a\},\frac{1}{2}]\) and suppose there exists \(b\in B\cap(0,1)\) such that \(f_{b}\in V\). Similar reasoning to the previous paragraph allows us to deduce that \(f_{b}(a)=1\) and \(f_{b}(-a)=0\), so \(a\in[b,q)\) and \(-a\in[-b,b)\); that is, \(b\leq a\) and \(a\leq b\), thus \(a=b\). With this we obtain \(V\cap\mathcal{F}\subseteq\{f_{a}\}\); since \(C_{p}(H(A))\) is Hausdorff we can find an open set \(U\) such that \(g\in U\) but \(f_{a}\notin U\), so \(V\cap U\cap\mathcal{F}=\emptyset\), and we conclude this sub case.
**Case 2.-** For all \(y\in B\cap(0,1)\), \(g(y)=g(-y)\). Once more, we have sub cases:
\(g\) **is constant in \((0,1)\)**
**Sub case 2.1.1-**\(g(x)=0\) for all \(x\in(0,1)\).
By continuity we obtain \(g(-1)=0\), now let's define \(V=[g,\{-1\},\frac{1}{2}]\). For every \(b\in B\cap(0,1)\), \(f_{b}(-1)=1\) so \(f_{b}\notin V\) and we conclude.
**Sub case 2.1.2-**\(g(x)=1\) for all \(x\in(0,1)\).
It suffices to take \(V=[g,\{0\},\frac{1}{2}]\) and reason in the same way as the last sub case noting that \(f_{b}(0)=0\) for all \(b\in B\cap(0,1)\).
\(g\) **is not constant in \((0,1)\)**
**Claim:** There exist \(c,d\in B\cap(0,1)\) such that \(g(c)\neq g(d)\).
Since \(g\) is not constant in \((0,1)\) there exist \(x,y\in(0,1)\) such that \(g(x)\neq g(y)\), due to continuity we can find \(\varepsilon>0\) such that \(g\) is constant in \([x,x+\varepsilon)\) and \([y,y+\varepsilon)\) (it does not matter if the points have the euclidean or Sorgenfrey topology). Thanks to density we can find \(c\in[x,x+\varepsilon)\cap B\cap(0,1)\) and \(d\in[y,y+\varepsilon)\cap B\cap(0,1)\),
it follows that \(g(c)\neq g(d)\). Moreover, wlog \(c<d\).
**Sub case 2.2.1**\(g(c)=0\) and \(g(d)=1\).
Let's define \(\mathcal{U}=\{a\in B\cap(0,d)\mid g(a)=0\}\) and let \(u=\sup\mathcal{U}\). Notice that \(u\leq d\); let's see that \(u\in B\). Suppose that \(u\notin B\); since \(g\) is continuous, there exists \(\varepsilon>0\) such that \(g[(u-\varepsilon,u+\varepsilon)]\subseteq\{g(u)\}\). We take \(z\in(u-\varepsilon,u)\cap\mathcal{U}\), so \(0=g(z)=g(u)\). By density of \(B\) we find a \(w\in B\cap(0,1)\cap(u,u+\varepsilon)\), and thus \(g(w)=0\), a contradiction (shrinking \(\varepsilon\) if necessary so that \(u+\varepsilon\leq d\), such a \(w\) belongs to \(\mathcal{U}\) and exceeds \(u\); note that \(u<d\), since \(u=d\) would give \(0=g(u)=g(d)=1\)). We conclude that \(u\in B\).
Moreover, \(g(u)=1\): otherwise, by continuity we could find some point in \((u,d)\cap B\cap(0,1)\) where \(g\) takes the value \(0\), contradicting the choice of \(u\) (if \(u=d\) there was nothing to do, since \(g(d)=1\)). Construct an increasing sequence \((x_{n})\subseteq\mathcal{U}\cap(0,1)\) converging to \(u\); it follows that \(g(x_{n})=0\) for all \(n\). Then \((-x_{n})\) is a decreasing sequence that converges to \(-u\) with \(g(-x_{n})=g(x_{n})=0\); due to continuity we conclude \(g(-u)=0\). But since \(g(u)=1\), we also have \(g(-u)=g(u)=1\), a contradiction. We deduce that this case is impossible.
**Sub case 2.2.2.-** \(g(c)=1\) and \(g(d)=0\).
Analogously to the previous case, we define \(\mathcal{U}=\{a\in B\cap(0,d)\mid g(a)=1\}\) and take \(u=\sup\mathcal{U}\); once again \(u\leq d\). Suppose \(u\notin B\); by continuity there exists \(\varepsilon>0\) such that \(g[(u-\varepsilon,u+\varepsilon)]\subseteq\{g(u)\}\). Since there exists \(z\in\mathcal{U}\cap(u-\varepsilon,u)\), we conclude that \(g(u)=1\). Now we take \(w\in B\cap(0,1)\cap(u,u+\varepsilon)\), but this implies \(g(w)=1\) (note that \(u+\varepsilon<d\) may be assumed, since otherwise \(g(d)=1\)), contradicting the choice of \(u\). So we conclude \(u\in B\). From here onward everything proceeds analogously to the previous case to get a contradiction.
Since the last two sub cases lead us to contradictions, we conclude that the condition \(g(a)=g(-a)\) for all \(a\in B\cap(0,1)\) implies that \(g\) is constant in \((0,1)\), a case that we solved previously.
The crucial parts of the construction were the following; let \(B=\mathbb{R}\setminus A\):
* \(B\) is symmetric.
* \(B\) is dense in some interval, say \((0,b)\).
* There exists \(q\in[b,\infty)\cap B\).
Reading carefully we note that the second condition implies the third in the following sense: if \(B\cap(0,b)\) is dense in \((0,b)\), then \(B\cap(0,y)\) is dense in \((0,y)\) for every \(y<b\) and we know there exists \(q\in(y,b)\cap B\) since \((y,b)\) is an open set of \((0,b)\).
We can generalize the result by extending the possible symmetries. Remember that the reflection with respect to a point \(a\in\mathbb{R}\) is given by \(r_{a}(x)=2a-x\), and all these reflections satisfy the following property: they transform increasing sequences on the left of \(a\) into decreasing sequences on the right of \(a\). This
is a key argument for the final part of the previous proof.
We generalize the symmetry condition with the following definition:
**Definition 1**.: _Let \(A\subseteq\mathbb{R}\) and \(z\in\mathbb{R}\). We say that \(A\) is symmetric with respect to \(z\) if for every \(x\in A\), \(r_{z}(x)\in A\)._
Also let's recall the following classic results ([1]):
**Proposition 12**.: _If \(C_{p}(X)\) is normal, then it is collection-wise normal._
**Proposition 13**.: _Let \(X\) be collection-wise normal and ccc. Then \(e(X)=\aleph_{0}\), that is, every closed discrete subspace of \(X\) is countable._
The preceding results allow us to improve our reasoning: we do not need a closed and discrete set of cardinality \(\mathfrak{c}\); an uncountable one suffices. Indeed, \(C_{p}(H(A))\) is ccc (being a dense subspace of \(\mathbb{R}^{\mathbb{R}}\)), so if it were normal it would be collection-wise normal by Proposition 12, and hence every closed discrete subspace would be countable by Proposition 13.
With these new ideas we can improve Proposition 11 in the following way:
**Theorem 9**.: _Let \(A\subseteq\mathbb{R}\) and \(B=\mathbb{R}\setminus A\), and suppose that there exists \(I=(z-a,z+a)\) such that:_
* \(B\cap I\) _is dense in_ \(I\)_._
* \(B\cap I\) _is symmetric with respect to_ \(z\)_._
* \(|B\cap I|>\aleph_{0}\)__
_Then \(C_{p}(H(A))\) contains a closed and discrete subspace of size \(|B\cap I|\) and thus is not normal._
While the conditions of the theorem may seem difficult to achieve, it is relatively simple to construct sets \(B\) that satisfy them. For example, we start with \(C=\mathbb{Q}\cap(-1,1)\setminus\{0\}\). This set is already dense in \((-1,1)\) and symmetric with respect to zero. We can then add as many irrationals as we want while maintaining symmetry, and the result will still be dense. In particular, we have the following:
**Proposition 14**.: _For every \(\aleph_{0}<\kappa\leq\mathfrak{c}\), there exists \(B\subseteq\mathbb{R}\) that satisfies the conditions of the previous theorem and has a cardinality of \(\kappa\). Moreover, there exist \(\mathfrak{c}\) distinct sets that satisfy these conditions._
Proof.: It suffices to perform the same construction while varying the symmetry point. In other words, we start with \(\mathbb{Q}\cap(z-1,z+1)\setminus\{z\}\) and then add the desired irrationals. These sets will be distinct because the initial set of rationals is different for each \(z\).
Theorem 9 allows us to construct closed and discrete sets by leveraging the density and symmetry of a sufficiently large set. Now, let's maintain the symmetry but shift to the opposite side by considering sets that are nowhere dense.
Let \(\Delta\) be the Cantor set with its usual construction, that is, \(\Delta=\bigcap_{n=1}^{\infty}C_{n}\) where each \(C_{n}\) is the union of \(2^{n}\) intervals of length \(\frac{1}{3^{n}}\). With this in mind, we will say that \(u\in(0,1)\) is a left (right) endpoint of \(\Delta\) if there exists \(n\in\mathbb{N}\) such that \(u\) is the left (right) endpoint of one of the intervals that form \(C_{n}\). Furthermore, we write \([0,1]\setminus\Delta=\bigcup_{n=1}^{\infty}I_{n}\), where the \(I_{n}\) are the open intervals discarded when constructing \(\Delta\). We will need the following results concerning \(\Delta\).
**Lemma 10**.: _Let \(u\in\Delta\). Then:_
* \(u\) _is a left endpoint of_ \(\Delta\) _iff there exists_ \(u^{-}\in\Delta\) _such that_ \(u^{-}<u\) _and_ \((u^{-},u)\cap\Delta=\emptyset\)_._
* \(u\) _is a right endpoint of_ \(\Delta\) _iff there exists_ \(u^{+}\in\Delta\) _such that_ \(u^{+}>u\) _and_ \((u,u^{+})\cap\Delta=\emptyset\)_._
* _If_ \(u\) _is not a left endpoint, then there exists_ \((x_{n})\subseteq\Delta\) _strictly increasing sequence such that_ \(x_{n}\to u\) _(in the euclidean sense). Moreover, for every_ \(\varepsilon>0\)_,_ \((u-\varepsilon,u)\cap\Delta\neq\emptyset\)_._
* _If_ \(u\) _is not a right endpoint, then there exists_ \((x_{n})\subseteq\Delta\) _strictly decreasing sequence such that_ \(x_{n}\to u\) _(in the euclidean sense). Moreover, for every_ \(\varepsilon>0\)_,_ \((u,u+\varepsilon)\cap\Delta\neq\emptyset\)_._
Explicitly, \(u^{+}\) is the left endpoint of the next interval (in increasing order) that forms \(C_{n}\), and similarly for \(u^{-}\).
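The finite stages of the construction are easy to enumerate exactly; the following illustrative sketch (using exact rational arithmetic) lists the intervals of \(C_{n}\), and hence the left/right endpoints discussed above:

```python
from fractions import Fraction

def cantor_level(n):
    """The 2^n closed intervals (as endpoint pairs) forming C_n."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        # Each interval keeps its left and right thirds.
        intervals = [piece
                     for (l, r) in intervals
                     for piece in ((l, l + (r - l) / 3),
                                   (r - (r - l) / 3, r))]
    return intervals

# C_2 = [0,1/9] u [2/9,1/3] u [2/3,7/9] u [8/9,1]:
assert cantor_level(2)[1] == (Fraction(2, 9), Fraction(1, 3))
```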
**Theorem 11**.: _Let \(A\subseteq\mathbb{R}\) such that \(\Delta\subseteq\mathbb{R}\setminus A\). Then \(C_{p}(H(A))\) contains a closed and discrete set of cardinality \(\mathfrak{c}\) and thus is not normal._
Proof.: Let \(B=[\frac{1}{2},1)\cap\Delta\) and \(f(x)=1-x\), which is the reflection with respect to \(\frac{1}{2}\). Note that \(f[\Delta]=\Delta\). For each \(b\in[\frac{1}{2},1)\cap\Delta\), we define a function as follows:
\[g_{b}(x)=\begin{cases}0&x\in(-\infty,0)\\ 1&x\in[0,f(b))\\ 0&x\in[f(b),b)\\ 1&x\in[b,1)\\ 0&x\in[1,\infty)\end{cases}\]
Note that the "jumps" of \(g_{b}\) occur at points in \(\Delta\), so continuity is preserved. Therefore, \(\mathcal{F}=\{g_{b}\ |\ b\in B\}\subseteq C_{p}(H(A))\). Additionally, by construction, \(g_{b}\) is constant on \(I_{n}\) for all \(n\in\mathbb{N}\). Let's see that \(\mathcal{F}\) is closed and discrete in \(C_{p}(H(A))\).
Let's see that \(\mathcal{F}\) is discrete.
Given \(b\in B=\Delta\cap[\frac{1}{2},1)\), we define \(U=[g_{b},\{b,f(b)\},\frac{1}{2}]\). Let's take \(c\in B\setminus\{b\}\); we have the following two cases:
* \(c<b\)**.** Then \(f(b)<f(c)\) and thus \(g_{c}(f(b))=1\), but \(g_{b}(f(b))=0\), which implies \(g_{c}\notin U\).
* \(c>b\)**.** This time \(g_{b}(b)=1\) while \(g_{c}(b)=0\); once again, \(g_{c}\notin U\).
From these two cases we deduce that \(U\cap\mathcal{F}=\{g_{b}\}\), and thus \(\mathcal{F}\) is a discrete subspace.
Now we only need to show that \(\mathcal{F}\) is closed. For this, we will consider the following cases, let \(g\in C_{p}(H(A))\).
**Case 1.**\(g[\mathbb{R}]\) is not contained in \(\{0,1\}\).
It follows that there exists \(x\in\mathbb{R}\) such that \(g(x)\notin\{0,1\}\), so we can define
\[\varepsilon=\min\{|g(x)|,|1-g(x)|\}>0\]
Under these conditions, let's take \(U=[g,\{x\},\varepsilon]\); it follows trivially that for every \(b\in B\), \(|g_{b}(x)-g(x)|\geq\varepsilon\), and thus \(g_{b}\notin U\).
Henceforth, we assume that \(g[\mathbb{R}]\subseteq\{0,1\}\).
**Case 2.-** \(g\) is not constant on the intervals discarded during the construction of \(\Delta\).
Then there exists \(n\in\mathbb{N}\) such that \(g\) is not constant on \(I_{n}\), so there exist \(x,y\in I_{n}\) such that \(g(x)=0\) and \(g(y)=1\). Let's take \(U=[g,\{x,y\},\frac{1}{2}]\). For every \(b\in B\), since \(g_{b}\) is constant in \(I_{n}\), one of the quantities \(|g(x)-g_{b}(x)|\) or \(|g(y)-g_{b}(y)|\) is exactly \(1\) and thus \(g_{b}\notin U\).
From here onwards, we will also assume that \(g\) is constant on each interval \(I_{n}\).
**Case 3.-** There exists \(a\in B\) such that \(g(a)\neq g(f(a))\).
First let's suppose \(g(a)=0\) and \(g(f(a))=1\). Let \(U=[g,\{a,f(a)\},\frac{1}{2}]\) and \(b\in B\); then:
* If \(b<a\), we get \(g_{b}(a)=1\).
* If \(b>a\), then \(f(b)<f(a)\), so \(g_{b}(f(a))=0\).
Both cases lead us to \(g_{b}\notin U\) and thus \(\mathcal{F}\cap U\subseteq\{g_{a}\}\), since the space is Hausdorff this suffices. The case \(g(a)=1\) and \(g(f(a))=0\) is analogous.
We add to our list of assumptions \(g(a)=g(f(a))\) for all \(a\in B\).
**Case 4.- \(g\)** is constant in \(B\).
* \(g(x)=0\) for all \(x\in B\). By continuity and the fact that \(g(f(b))=g(b)\) we deduce that \(g(0)=0\). We take \(V=[g,\{0\},\frac{1}{2}]\); since \(g_{b}(0)=1\) for all \(b\in B\), it follows that \(g_{b}\notin V\).
* \(g(x)=1\) for all \(x\in B\). In this case we take \(U=[g,\{\frac{1}{2}\},\frac{1}{2}]\); since \(g_{b}(\frac{1}{2})=0\) for all \(b\in B\), it follows that \(g_{b}\notin U\).
In both cases we conclude. Lastly,
**Case 5.**\(g\) is not constant on \(B\).
First let's suppose that there exist \(a,b\in B\) with \(b<a\) such that \(g(b)=0\) and \(g(a)=1\) (the other case is analogous). Let's define the following set
\[W=\{c\in\Delta\ |\ c<a\ \&\ g(c)=0\}\]
Since \(b\in W\), the set \(W\) is non-empty; on the other hand, \(a\) is an upper bound for \(W\), so \(u=\sup W\) exists, \(b\leq u\leq a\), and \(u\) belongs to \(\Delta\cap[\frac{1}{2},a]\), since this set is closed in the Euclidean topology.
**Claim:**\(u\) is not a left endpoint.
Suppose that it is a left endpoint and let's see that this implies \(u\neq a\). If \(u=a\) and \(u\) is a left endpoint, then \(u^{-}\) is an upper bound for \(W\) (since \(g(u)=g(a)=1\)), which contradicts the choice of \(u\). Now, since \(u\) is a left endpoint, we can take a strictly decreasing sequence \((x_{n})\subseteq B\) that converges to \(u\) in the usual sense. Moreover, without loss of generality, we can assume that \(x_{n}\in(u,a)\) for all \(n\in\mathbb{N}\), which in turn allows us to assume that \(g(x_{n})=1\) for all \(n\) (otherwise, it would contradict the choice of \(u\)). The continuity of \(g\) then implies that \(g(u)=1\), which again leads us to \(u^{-}\) being an upper bound for \(W\). Thus, \(u\) cannot be a left endpoint.
**Claim:**\(g(u)=0\).
Suppose \(g(u)=1\). Since \(u\) is not a left endpoint, we can take a strictly increasing sequence \((x_{n})\subseteq\Delta\cap[\frac{1}{2},1)\) that converges to \(u\) in the usual sense. Moreover, as \(u=\sup W\) and \(u\notin W\), we can choose the sequence in such a way that \(g(x_{n})=0\) for all \(n\). It follows that \(f(x_{n})\) is a strictly decreasing sequence converging to \(f(u)\) and such that \(g(f(x_{n}))=g(x_{n})=0\) for all \(n\in\mathbb{N}\). This implies that \(g(f(u))=0\) by continuity, and therefore \(0=g(f(u))=g(u)=1\), which is impossible. We conclude that \(g(u)=0\), which in turn allows us to conclude that \(u<a\).
**Claim:**\(u\) is a right endpoint.
Suppose this does not happen. By continuity, there exists \(\varepsilon>0\) such that \(g[[u,u+\varepsilon)]\subseteq\{0\}\); shrinking \(\varepsilon\) if necessary, we may assume \(u+\varepsilon\leq a\). Since \(u\) is not a right endpoint,

\[(u,u+\varepsilon)\cap(u,a)\cap\Delta\neq\emptyset,\]

so there is a point of \(\Delta\) in \((u,a)\) at which \(g\) vanishes; such a point belongs to \(W\) and exceeds \(u\), which contradicts the fact that \(u\) is the supremum of \(W\).
**Claim:** There does not exist a strictly increasing sequence \((x_{n})\subseteq\Delta\) converging to \(u\) such that \(g(x_{n})=1\) for all \(n\in\mathbb{N}\). That is, there exists \(\varepsilon>0\) such that \(g[(u-\varepsilon,u)]\subseteq\{0\}\).
Suppose that such a sequence exists. Then, \((f(x_{n}))\) is a strictly decreasing sequence converging to \(f(u)\). By the continuity of \(g\), we can conclude that \(1=g(f(u))=g(u)=0\), which is a contradiction. From this point forward in the proof, we will use this \(\varepsilon\).
Let's take \(y\in(u-\varepsilon,u)\cap\Delta\); this is possible since \(u\) is not a left endpoint. Moreover, since \(u\) is a right endpoint, we can consider \(u^{+}\), and we have \(u^{+}\leq a\) and \(g(u^{+})=1\). In order to finish the proof, let's prove that \(U=[g,\{u,u^{+},y\},\frac{1}{2}]\) satisfies
\[U\cap\mathcal{F}\subseteq\{g_{u},g_{u^{+}}\}\]
which allows us to conclude, since \(C_{p}(H(A))\) is Hausdorff.
Let \(c\in\Delta\cap[\frac{1}{2},1)\). We have three final subcases:
* \(c<y\). In this case \(g_{c}(y)=1\) and \(g(y)=0\).
* \(c\in[y,u)\). Here \(g_{c}(u)=1\) and \(g(u)=0\).
* \(c>u^{+}\). We have \(g_{c}(u^{+})=0\) but \(g(u^{+})=1\).
In any case we deduce that \(g_{c}\notin U\); since the only remaining possibilities are \(c=u\) and \(c=u^{+}\) (recall that \((u,u^{+})\cap\Delta=\emptyset\)), we can conclude.
The previous theorem has quite strong consequences. Let's start by recalling a couple of results. The first one is a classical result concerning analytic sets, and the second one is a theorem of Sorgenfrey.
**Theorem 12**.: _Let \(B\subseteq\mathbb{R}\) be an analytic and uncountable set. Then \(B\) contains a set homeomorphic to the Cantor set \(\Delta\)._
**Theorem 13**.: _Let \(C,D\subseteq\mathbb{R}\) be non-empty compact, perfect, nowhere dense sets. Then there exists an order isomorphism \(f:C\to D\) between them. In particular, any two Cantor sets in \(\mathbb{R}\) are order isomorphic._
Thanks to these two results, we have that if \(\mathbb{R}\setminus A\) is an uncountable analytic set (for example, an uncountable open or closed set), then it contains a copy of the Cantor set, let's call it \(C\). Moreover, we can provide an order isomorphism between \(\Delta\) and \(C\), which allows us to "transfer" our construction of the usual Cantor set to this case in the following way:
Let \(g\) be the order isomorphism. Since \(g\left(\frac{1}{3}\right)<g\left(\frac{2}{3}\right)\), we can take an intermediate point, let's say \(c\). This point will act as a "division" point, similar to how \(\frac{1}{2}\) did originally. Now, we define \(D=\big{(}C\cap[c,\infty)\big{)}\setminus\{g(1)\}\) and \(I=\big{(}C\cap(-\infty,c]\big{)}\setminus\{g(0)\}\). These two sets are disjoint, and we have \(C=D\cup I\cup\{g(0),g(1)\}\). We also need the following auxiliary function \(\phi:C\to C\) given by \(\phi(g(x))=g(1-x)\). Since \(g\) is bijective, \(\phi\) is well-defined and bijective. Moreover, the fact that \(g\) preserves the order implies that \(\phi[D]=I\). Notice that \(\phi\) transforms increasing sequences in \(D\) into decreasing sequences in \(I\). This function acts as a "pseudo-symmetry," taking on the role of \(f(x)=1-x\) in the original construction.
The only thing we haven't reinterpreted is the concept of left or right endpoint, but we can do that very simply. A left/right endpoint of \(C\) is the image under \(g\) of a corresponding type of endpoint in \(\Delta\). For example, \(g\left(\frac{2}{3}\right),g\left(\frac{2}{9}\right)\) are left endpoints of \(C\). All the conclusions of Lemma 1 can be naturally adapted to this definition using the fact that \(g\) preserves order and is a homeomorphism.
Finally, for each \(d\in D\), we define
\[h_{d}(x)=\begin{cases}0&x\in(-\infty,g(0))\\ 1&x\in[g(0),\phi(d))\\ 0&x\in[\phi(d),d)\\ 1&x\in[d,g(1))\\ 0&x\in[g(1),\infty)\end{cases}\]
Under this reinterpretation of the key concepts used in the proof of Theorem 11, we can replicate that proof for this more general case by making the corresponding replacements to conclude that \(\{h_{d}\ |\ d\in D\}\) is a closed and discrete set in \(C_{p}(H(A))\). Therefore, we can state the following result:
**Theorem 14**.: _Let \(A\subseteq\mathbb{R}\) such that \(\mathbb{R}\setminus A\) is uncountable and analytic. Then \(C_{p}(H(A))\) is not normal._
Unfortunately, the previous theorem is not an "if and only if" statement, as the following example shows:
**Example 2**.: _Let \(B\) be a Bernstein set that is also an additive subgroup of \(\mathbb{R}\) (see [14] and [10]). It follows that \(B\) is an uncountable set that is symmetric with respect to \(0\), dense in \(\mathbb{R}\), but not analytic. Therefore, by Proposition 11, we can conclude that \(C_{p}(H(\mathbb{R}\setminus B))\) is not normal, even though \(B\) is not analytic._
All these partial advances naturally lead to the following question:
**Question 3**.: _If \(\mathbb{R}\setminus A\) is uncountable, is \(C_{p}(H(A))\) not normal?_
One of the most significant models of ZF is Solovay's model, in which the Axiom of Choice does not hold fully (only the Axiom of Dependent Choice, DC), and furthermore, in this model, \(\mathbb{R}\) has the Perfect Set Property, meaning that every subset of \(\mathbb{R}\) is either countable or contains a perfect set. Since every perfect set is analytic, this implies that every uncountable subset of \(\mathbb{R}\) contains a copy of \(\Delta\). Therefore, in this model, we have an affirmative answer to the previous question:
**Theorem 15**.: _(In Solovay's model) Let \(A\subseteq\mathbb{R}\). If \(\mathbb{R}\setminus A\) is uncountable, then \(C_{p}(H(A))\) is not normal._
From the last theorem it follows that if there exists a set \(A\) with \(\mathbb{R}\setminus A\) uncountable such that \(C_{p}(H(A))\) is normal, then the Axiom of Choice would be needed for its construction. Thus, it will not be easy to describe such a set.
2309.07627 | Adaptive Reduced Basis Trust Region Methods for Parameter Identification
Problems | In this contribution, we are concerned with model order reduction in the
context of iterative regularization methods for the solution of inverse
problems arising from parameter identification in elliptic partial differential
equations. Such methods typically require a large number of forward solutions,
which makes the use of the reduced basis method attractive to reduce
computational complexity. However, the considered inverse problems are
typically ill-posed due to their infinite-dimensional parameter space.
Moreover, the infinite-dimensional parameter space makes it impossible to build
and certify classical reduced-order models efficiently in a so-called "offline
phase". We thus propose a new algorithm that adaptively builds a reduced
parameter space in the online phase. The enrichment of the reduced parameter
space is naturally inherited from the Tikhonov regularization within an
iteratively regularized Gauss-Newton method. Finally, the adaptive parameter
space reduction is combined with a certified reduced basis state space
reduction within an adaptive error-aware trust region framework. Numerical
experiments are presented to show the efficiency of the combined parameter and
state space reduction for inverse parameter identification problems with
distributed reaction or diffusion coefficients. | Michael Kartmann, Tim Keil, Mario Ohlberger, Stefan Volkwein, Barbara Kaltenbacher | 2023-09-14T11:47:59Z | http://arxiv.org/abs/2309.07627v2 | # Adaptive Reduced Basis Trust Region Methods for Parameter Identification Problems+
###### Abstract
In this contribution, we are concerned with model order reduction in the context of iterative regularization methods for the solution of inverse problems arising from parameter identification in elliptic partial differential equations. Such methods typically require a large number of forward solutions, which makes the use of the reduced basis method attractive to reduce computational complexity. However, the considered inverse problems are typically ill-posed due to their infinite-dimensional parameter space. Moreover, the infinite-dimensional parameter space makes it impossible to build and certify classical reduced-order models efficiently in a so-called "offline phase". We thus propose a new algorithm that adaptively builds a reduced parameter space in the online phase. The enrichment of the reduced parameter space is naturally inherited from the Tikhonov regularization within an iteratively regularized Gauss-Newton method. Finally, the adaptive parameter space reduction is combined with a certified reduced basis state space reduction within an adaptive error-aware trust region framework. Numerical experiments are presented to show the efficiency of the combined parameter and state space reduction for inverse parameter identification problems with distributed reaction or diffusion coefficients.
## 1 Introduction
To perform a forward solution of a partial differential equation (PDE), all involved coefficients and parameters must be known. If some parameters cannot be measured directly, they have to be determined from indirect measurements instead. This procedure is called the _inverse problem_ or parameter identification in PDEs (cf., e.g., [14, 15]). Often these problems are ill-posed in the sense that the reconstructed parameter does not depend continuously on the measurements. To overcome this lack of stability, regularization methods are used to approximate the desired parameter in a stable way [1, 1, 13, 14, 15]. Regularization can be achieved by different strategies such as Tikhonov, Morozov, or Ivanov regularization or, e.g., by the concept of regularization by discretization. Classical iterative regularization methods are for example the Landweber method, the Levenberg-Marquardt method, or the iteratively regularized Gauss-Newton method (IRGNM), which we will consider in this paper and which has been extensively studied in different settings [16, 17]. Typically, these methods need a high number of forward solutions of the respective PDE. To improve the efficiency of the algorithm, adaptive finite element approximations were used in [13, 14].
Another possibility to reduce the computational complexity is the use of model order reduction (MOR) techniques [1, 15], which have been applied to PDE-constrained optimization [11, 12, 13] and inverse problems [1, 1, 14, 15]. A particular MOR method is the reduced basis (RB) method [17, 18], where the full-order model (FOM) is projected onto a low-dimensional space spanned by a reduced basis consisting of snapshots, i.e. of the solution of the PDE corresponding to meaningful parameters. The traditional way of applying the reduced basis method is a so-called offline-online decomposition, where the reduced-order model (ROM) is constructed by a goal-oriented greedy algorithm in the offline stage. In the online stage, the ROM and an a posteriori error estimator are available to perform cheap, but certified computations. The efficiency of the greedy algorithm depends heavily on the parameterization and is restricted to low-dimensional and bounded parameter spaces. Adaptive reduced basis methods for PDE-constrained optimization overcome this behavior partially [16, 1, 17, 18, 19, 20] by constructing a small problem-dependent basis along the optimization path. In the context of inverse problems, the authors of [1] introduced the RB Landweber algorithm which uses adaptive reduced basis approximations to solve an inverse problem without the theoretical need for the parameter space to be bounded. In their numerical experiments, they tackled parameter spaces of dimension 900. Nevertheless, the RB reduction becomes infeasible if the parameter space gets high-dimensional (e.g., as high-dimensional as the underlying FOM approximation) because (i) the projection of all affine components is costly and (ii) the assembly of residual-based error estimators is prohibitively costly since per parameter and per RB element one Riesz-representative needs to be computed, which involves the solution of a system of FOM complexity.
In the literature, there are parameter reduction strategies, such as active subspace methods [14], used by the authors in [12, 13]. In their numerical experiments, the parameter set was of dimension 10. We also refer to [11] for an extension of the active subspace method to the multifidelity case. In inverse problems, however, the ill-posedness only occurs if the parameter space is infinite-dimensional, i.e. in a discrete setting if the parameter space can be of arbitrary dimension. Correspondingly there is a need for adaptive RB methods that can efficiently deal with a parameter space of arbitrary dimension. To this end combined state and parameter reduction methods that are particularly tailored towards the solution of inverse problems are needed. In the context of Bayesian inverse problems such approaches have been proposed e.g. in [12, 13]. We also refer to [13] for an approach based on cross-gramians in the context of large-scale control problems.
As the main contribution of this paper, we propose an adaptive reduced basis algorithm that can deal with an infinite-dimensional parameter space in an efficient manner and serves as an iterative regularization algorithm for the parameter identification problem. The key ingredient is to reduce the dimension of the parameter space by adaptively constructing a reduced basis for it consisting of the FOM gradients of the discrepancy functional. We show that this is a natural choice that is inherited from the Tikhonov regularization. Since the number of affine components on the reduced parameter space is low-dimensional, the RB method for state space reduction can be applied efficiently. Consequently, our novel approach is a certified, adaptive, error-aware trust region setting to significantly speed up the solution process of the parameter identification problem by parameter- and state space model order reduction.
The article is organized as follows. In Section 2, we introduce the parameter identification problem and present the IRGNM. In Section 3, the adaptive parameter and model reduction approach is described. First, we propose the IRGNM with adaptive parameter space reduction in Section 3.1. On top of that, the classical RB model reduction is applied in Section 3.2. In Section 3.3, we present the combined parameter- and state space reduced error-aware trust region iteratively regularized Gauss-Newton method. In Section 4, all proposed algorithms are compared in terms of computation time and the quality of the reconstruction, and finally, in Section 5 some concluding remarks and an outlook are given.
## 2 Parameter identification for elliptic PDEs
### Problem formulation
Let \(\mathscr{Q}\), \(\mathscr{H}\), \(V\) be (real) Hilbert spaces and \(\mathscr{Q}_{\mathsf{ad}}\subset\mathscr{Q}\) be a closed, convex set. We are interested in identifying space-dependent parameters \(q\in\mathscr{Q}_{\mathsf{ad}}\) from noisy measurements \(y^{\delta}\in\mathscr{H}\) of the exact solution \(u\in V\) solving the linear, elliptic partial differential equation (PDE)
\[a(u,v;q)=\ell(v)\quad\text{for all }v\in V \tag{2.1}\]
with a \(q\)-dependent bilinear form \(a(\cdot\,,\cdot\,;q):V\times V\to\mathbb{R}\) and linear form \(\ell\in V^{\prime}\). Furthermore, we assume that the exact measurements \(y\in\mathscr{H}\) are obtained from the solution \(u\in V\) through a linear bounded (observation) operator \(\mathcal{C}:V\to\mathscr{H}\), i.e. \(y=\mathcal{C}u\).
**Assumption 2.1**.: _The bilinear form \(a(\cdot\,,\cdot\,;q)\) is assumed to be continuous and coercive for any \(q\in\mathscr{Q}_{\mathsf{ad}}\). Further, for any \(u,v\in V\) the map \(q\mapsto a(u,v;q)\) is linear._
**Remark 2.2**.: _From the Lax-Milgram theorem (cf., e.g., [10, pp. 315-316]) it is well-known that Assumption 2.1 implies the existence of a unique solution \(u=u(q)\in V\) to the forward problem (2.1) for any \(q\in\mathscr{Q}_{\mathsf{ad}}\). We can introduce the solution operator \(\mathcal{S}:\mathscr{Q}_{\mathsf{ad}}\to V\), where \(u(q)=\mathcal{S}(q)\). Notice that (2.1) is a linear problem for any \(q\in\mathscr{Q}_{\mathsf{ad}}\), but \(\mathcal{S}\) is usually a non-linear operator._
Next, we express the parameter identification problem as an operator equation by introducing the forward operator
\[\mathcal{F}:\mathscr{Q}_{\mathsf{ad}}\to\mathscr{H}\quad\text{with } \mathcal{F}=\mathcal{C}\circ\mathcal{S}.\]
The parameter identification problem reads as follows: find a parameter \(q^{\mathsf{e}}\in\mathscr{Q}_{\mathsf{ad}}\) such that for a given exact data \(y^{\mathsf{e}}=\mathcal{C}u^{\mathsf{e}}\in\mathrm{range}(\mathcal{F})\) with \(u^{\mathsf{e}}=\mathcal{S}(q^{\mathsf{e}})\) the following equation holds:
\[\mathcal{F}(q^{\mathsf{e}})=y^{\mathsf{e}}\quad\text{in }\mathscr{H}.\] ( **IP** )
Throughout our work we assume that the given noisy data \(y^{\delta}\in\mathscr{H}\) and \(y^{\mathsf{e}}\in\mathrm{range}(\mathcal{F})\) satisfy
\[\left\|y^{\mathsf{e}}-y^{\delta}\right\|_{\mathscr{H}}\leq\delta,\]
where the noise level \(\delta>0\) is assumed to be known.
**Assumption 2.3** (Assumptions on the forward operator).: _We assume that \(\mathcal{F}:\mathscr{Q}_{\mathsf{ad}}\to\mathscr{H}\) is injective, continuously Frechet-differentiable and that (**IP**) is ill-posed in the sense that \(\mathcal{F}\) is not continuously invertible._
Due to the injectivity in Assumption 2.3 and \(y^{\mathsf{e}}\in\mathrm{range}(\mathcal{F})\), \(q^{\mathsf{e}}\) is the unique solution to (**IP**). Further, the ill-posedness of (**IP**) poses challenges, since instead of the exact data, only the noisy measurement \(y^{\delta}\) is available. Therefore, simple inversion fails to provide a reasonable reconstruction of the exact parameter. Instead one has to apply regularization methods that construct continuous approximations of the discontinuous inverse of \(\mathcal{F}\), in order to obtain a stable approximate solution of (**IP**).
Let us introduce two guiding examples that are studied in our numerical experiments carried out in Section 4.
**Example 2.4**.: We consider two possible scenarios based on the examples presented in [11]. See also the works [14, 15, 16, 17]. Let \(\Omega\subset\mathbb{R}^{d}\) for \(d\in\{1,2,3\}\) be a bounded, convex domain. Using [15, Corollary 1.2.2.3] this implies that \(\Omega\) has a Lipschitz-continuous boundary \(\Gamma=\partial\Omega\). We choose \(\mathscr{H}=L^{2}(\Omega)\), \(H=L^{2}(\Omega)\), \(V=H^{1}_{0}(\Omega)\) and \(\mathcal{C}\) to be the compact, injective embedding from \(V\) into \(\mathscr{H}\).
1. _Reconstruction of the reaction coefficient_: Let \(q_{\mathsf{a}}\in L^{\infty}(\Omega)\) be a given bound with \(q_{\mathsf{a}}\geq 0\) a.e. (almost everywhere) in \(\Omega\). We set \(\mathscr{Q}=L^{2}(\Omega)\) and \[\mathscr{Q}_{\mathsf{ad}}\coloneqq\big{\{}q\in\mathscr{Q}\,\big{|}\,q_{ \mathsf{a}}\leq q\text{ in }\Omega\text{ a.e.}\big{\}}.\] (2.2) Note that \(\mathscr{Q}_{\mathsf{ad}}\) is closed and convex in \(\mathscr{Q}\). For given \(q\in\mathscr{Q}_{\mathsf{ad}}\) and \(f\in H\) we consider the elliptic problem \[-\Delta u(\mathbf{x})+q(\mathbf{x})u(\mathbf{x})=f(\mathbf{x})\text{ f.a.a. }\mathbf{x}\in\Omega,\quad u(\mathbf{x})=0\text{ f.a.a. }\mathbf{x}\in\Gamma,\] (2.3) where 'f.a.a.' means 'for almost all'. Setting \[a(u,v;q)\coloneqq\int_{\Omega}\nabla u\cdot\nabla v+quv\,\mathrm{d}\mathbf{x}, \quad\ell(v)\coloneqq\int_{\Omega}fv\,\mathrm{d}\mathbf{x}\quad\text{for }v\in V\] it follows that the weak formulation of (2.3) can be expressed in the form (2.1).
2. _Reconstruction of the diffusion coefficient_: Let \(q_{\mathsf{a}}\in L^{\infty}(\Omega)\) be a given bound with \(q_{\mathsf{a}}>0\) a.e. in \(\Omega\). We set \(\mathscr{Q}=H^{2}(\Omega)\hookrightarrow C(\overline{\Omega})\) and \(\mathscr{Q}_{\mathsf{ad}}\) is given as in (2.2). For given \(q\in\mathscr{Q}_{\mathsf{ad}}\) and \(f\in H\) we consider the following elliptic problem \[-\nabla\cdot\big{(}q(\mathbf{x})\nabla u(\mathbf{x})\big{)}=f(\mathbf{x})\text{ f.a.a. }\mathbf{x}\in\Omega,\quad u(\mathbf{x})=0\text{ f.a.a. }\mathbf{x}\in\Gamma.\] (2.4) For \[a(u,v;q)\coloneqq\int_{\Omega}q\nabla u\cdot\nabla v\,\mathrm{d}\mathbf{x}, \quad\ell(v)\coloneqq\int_{\Omega}fv\,\mathrm{d}\mathbf{x}\quad\text{for }v\in V\] the weak formulation of (2.4) is given by (2.1).
In both cases Assumption 2.1 is satisfied. For Assumption 2.3, the Frechet-differentiability is clear, too. Moreover, sufficient for the lack of continuity of the inverse of \(\mathcal{F}\) is the compactness and sequential weak closedness of the operator \(\mathcal{F}\) according to [14, Propositions 9.1 and 9.2]. Since \(\mathcal{C}\) is compact, the compactness is clear. The weak sequential closedness has been proven in [16] under the assumption of \(H^{2}(\Omega)\)-regularity of the state \(u\). In our case, this is fulfilled according to [15, Theorem 3.2.12]. As \(\mathcal{C}\) and \(\mathcal{S}\) are injective, the operator \(\mathcal{F}=\mathcal{C}\circ\mathcal{S}\) is injective as well. \(\Diamond\)
### Iteratively regularized Gauss-Newton method
To overcome the ill-posedness of \(\mathcal{F}\) we utilize an iteratively regularized Gauss-Newton method (IRGNM) as a regularization method. We are interested in the minimization of the discrepancy
\[\min\hat{J}(q)\coloneqq\frac{1}{2}\left\|\mathcal{F}(q)-y^{\delta}\right\|_{\mathscr{ H}}^{2}\quad\text{subject to (s.t.)}\quad q\in\mathscr{Q}_{\mathsf{ad}} \tag{2.5}\]
in a stable way with respect to the noise level \(\delta\), which will be achieved by adding a Tikhonov term and by early stopping in an iteratively linearized procedure (see below). In this paper, we concentrate on local convergence properties of our methods, as is often done when one is interested in regularization ([10]). For this reason, we make use of the following assumption.
**Assumption 2.5**.: _There exists a convex, closed, and bounded subset \(\mathscr{C}\subset\mathscr{Q}_{\mathsf{ad}}\) such that (2.5) has a unique (local) minimizer \(\bar{q}\) in the interior of \(\mathscr{C}\)._
**Remark 2.6**.: _It follows from Assumption 2.5 that the minimization problem_
\[\min\hat{J}(q)\quad\text{s.t.}\quad q\in\mathscr{C} \tag{2.6}\]
_can be considered as a locally unconstrained problem. Thus, first-order necessary optimality conditions are \(\nabla\hat{J}(\bar{q})=0\) in \(\mathscr{Q}\), where \(\nabla\hat{J}\) stands for the gradient of \(\hat{J}\). Since we usually have \(y^{\mathsf{e}}\neq y^{\delta}\), we expect also \(q^{\mathsf{e}}\neq\bar{q}\) and therefore \(\hat{J}(\bar{q})>0\)._
Let \(q^{k}\in\mathscr{C}\) be a given iterate which is sufficiently close to the local solution \(\bar{q}\). It follows from Assumptions 2.3 and 2.5 that \(\mathcal{F}\) is continuously Frechet-differentiable on \(\mathscr{C}\subset\mathscr{Q}_{\mathsf{ad}}\). Now, the IRGNM update scheme results from linearizing \(\mathcal{F}\) at \(q^{k}\) and minimizing the Tikhonov functional of the linearization
\[q(\alpha_{k})=\operatorname*{arg\,min}_{q\in\mathscr{C}}\frac{1}{2}\left\| \mathcal{F}(q^{k})+\mathcal{F}^{\prime}(q^{k})(q-q^{k})-y^{\delta}\right\|_{ \mathscr{H}}^{2}+\frac{\alpha_{k}}{2}\left\|q-q_{\circ}\right\|_{\mathscr{Q}} ^{2},\quad(\mathbf{IP}_{\alpha}^{k})\]
where \(q_{\circ}\in\mathscr{C}\) is the regularization center and \(\alpha_{k}>0\) a Tikhonov regularization parameter.
**Remark 2.7**.: _Throughout this work, it is assumed that \((\mathbf{IP}_{\alpha}^{k})\) possesses a unique solution \(q(\alpha_{k})\) which lies in the interior of \(\mathscr{C}\). Therefore, \((\mathbf{IP}_{\alpha}^{k})\) can be considered as a (locally) unconstrained problem so that we neglect the constraint \(q\in\mathscr{Q}_{\mathsf{ad}}\) in the following. In [10], it is shown under suitable conditions on \(\mathcal{F}\) (see Remark 2.8-(ii)) that the iterates produced by the IRGNM stay in a ball in the interior of \(\mathscr{Q}_{\mathsf{ad}}\) (cf. Assumption 2.5)._
We accept the iterate \(q^{k+1}\coloneqq q(\alpha_{k})\) if \(\alpha_{k}>0\) is chosen such that
\[\theta\hat{J}(q^{k})\leq\left\|\mathcal{F}^{\prime}(q^{k})(q(\alpha_{k})-q^{k })+\mathcal{F}(q^{k})-y^{\delta}\right\|_{\mathscr{H}}^{2}\leq\Theta\hat{J}(q ^{k}), \tag{2.7}\]
where \(0<\theta<\Theta<1\) holds. Condition (2.7) can be obtained via an inexact Newton method as in [11], whereas we use a backtracking technique (see Algorithm 1): If the first inequality in (2.7) is not fulfilled, then \(\alpha_{k}\) is too small when solving \((\mathbf{IP}_{\alpha}^{k})\), so we increase \(\alpha_{k}\) and solve \((\mathbf{IP}_{\alpha}^{k})\) again. Similarly, we decrease \(\alpha_{k}\) if the second inequality in (2.7) is not fulfilled.
If \(y^{\delta}\notin\text{range}(\mathcal{F})\), the iteration cannot converge, but it is possible to develop early stopping criteria to prevent noise amplification. For this we use the so-called _discrepancy principle_: we stop the iteration after \(k_{*}(\delta,y^{\delta})\) steps provided
\[\hat{J}(q^{k_{*}(\delta,y^{\delta})})\leq\frac{1}{2}(\tau\delta)^{2}\leq\hat{J} (q^{k}),\quad\ k=0,...,k_{*}(\delta,y^{\delta}). \tag{2.8}\]
The parameter \(\tau>1\) reflects the fact that we cannot expect the discrepancy to be lower than the noise in the given data. Also, condition (2.7) can be understood as a linearized discrepancy principle. Note that the iterates depend on the noise level (\(q^{k}=q^{k,\delta}\)), but we suppress the superscript \(\delta\) for readability.
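For illustration, the following is a minimal Python sketch of this outer loop with the backtracking for (2.7) and the stopping rule (2.8). It is not part of the actual implementation; the callables `F` and `solve_linearized` are hypothetical placeholders for the forward operator and for a solver of \((\mathbf{IP}_{\alpha}^{k})\) that also returns the linearized residual norm from (2.7).

```python
# A minimal IRGNM sketch, assuming user-supplied callables:
#   F(q)                   -> forward operator value F(q) as a numpy array
#   solve_linearized(q, a) -> (q_new, lin_res): minimizer of (IP_alpha^k) for
#                             alpha = a and the linearized residual norm
#                             ||F'(q)(q_new - q) + F(q) - y_delta||.
import numpy as np

def irgnm(q0, F, solve_linearized, y_delta, delta,
          alpha0=1.0, theta=0.4, Theta=0.9, tau=3.5,
          max_outer=100, max_backtrack=30):
    q, alpha = q0, alpha0
    for _ in range(max_outer):
        J = 0.5 * np.linalg.norm(F(q) - y_delta) ** 2   # discrepancy J(q^k)
        if J <= 0.5 * (tau * delta) ** 2:               # stopping rule (2.8)
            break
        for _ in range(max_backtrack):                  # enforce (2.7)
            q_new, lin_res = solve_linearized(q, alpha)
            if lin_res ** 2 < theta * J:
                alpha *= 2.0    # first inequality of (2.7) fails: increase alpha
            elif lin_res ** 2 > Theta * J:
                alpha /= 2.0    # second inequality of (2.7) fails: decrease alpha
            else:
                break
        q = q_new
    return q
```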
**Remark 2.8**.:
1. _Let us turn back to_ Example 2.4-(ii)_. Since we do not want to consider \(H^{2}\)-regular coefficients in our numerical experiments, we assume that piecewise linear finite elements can be used to approximate the coefficients; see also_ _[_11, 12_, 13_]__._
2. _A local convergence analysis of an adaptive discretization of the IRGNM is given in_ _[_10, 11_]__. It turns out that a tangential cone condition and a weak closedness condition on the forward operator_ \(\mathcal{F}\) _play an essential role in the analysis. For our particular examples these conditions were verified in_ _[_12_, 13_, 14_]__. However, in this work, we do not focus on the theoretical foundations in the sense of regularization theory but rather concentrate on the numerical realization of our reduced-order approach while assuming that our iterates converge._
In the following, we will need the gradient of the discrepancy \(\hat{J}\) and the optimality conditions for \((\mathbf{IP}_{\alpha}^{k})\), which are stated in the following remark.
**Remark 2.9**.:
1. _For_ \(q\in\mathscr{C}\) _the gradient_ \(\nabla\hat{J}(q)\in\mathscr{Q}\) _of_ \(\hat{J}\) _satisfies_ \[\left\langle\nabla\hat{J}(q),d\right\rangle_{\mathscr{Q}}=\left\langle\mathcal{ B}^{\prime}_{u}p,d\right\rangle_{\mathscr{Q}^{\prime},\mathscr{Q}}=\partial_{q} a(u,p;d)\quad\text{for all }d\in\mathscr{Q},\] (2.9) _where_ \(u=\mathcal{S}(q)\) _holds and_ \(p=p(q)\in V\) _is the solution of the adjoint equation_ \[a(v,p;q)=-\left\langle\mathcal{C}^{*}(\mathcal{C}u-y^{\delta}),v\right\rangle_{V }\quad\text{for all }v\in V\] (2.10)
_with the adjoint operator_ \(\mathcal{C}^{*}:\mathscr{H}\to V\) _of_ \(\mathcal{C}\)_. Moreover,_ \(\mathcal{B}^{\prime}_{u}:V\to\mathscr{Q}^{\prime}\) _is the dual operator of_ \(\mathcal{B}_{u}:\mathscr{Q}\to V^{\prime}\) _given as_ \[\left\langle\mathcal{B}_{u}d,v\right\rangle_{V^{\prime},V}=\partial_{q}a(u,v;d) \quad\text{for all }(d,v)\in\mathscr{Q}\times V.\]
2. _Suppose that_ Assumptions 2.3 _and_ 2.5 _hold. Let_ \(u^{k}=\mathcal{S}(q^{k})\in V\) _be the state at the current iterate_ \(q^{k}\in\mathscr{C}\)_. Due to_ \(\alpha_{k}>0\) _problem_ \((\mathbf{IP}^{k}_{\alpha})\) _is a linear-quadratic, strictly convex optimization problem with a unique (unconstrained) minimizer; see_ [10, Theorem 2.14]_. The update_ \(q^{k+1}\in\mathscr{C}\) _is optimal for_ \((\mathbf{IP}^{k}_{\alpha})\) _if and only if there exist the linearized state_ \(\delta u^{k+1}\in V\) _and the linearized adjoint state_ \(\delta p^{k+1}\in V\) _satisfying the optimality system_ \[a(\delta u^{k+1},v;q^{k})+\left\langle\mathcal{B}_{u^{k}}(q^{k+1}-q^{k}),v\right\rangle_{V^{\prime},V} =0\quad\text{for all }v\in V,\] (2.11a) \[a(v,\delta p^{k+1};q^{k})+\left\langle\mathcal{C}(u^{k}+\delta u^{k+1})-y^{\delta},\mathcal{C}v\right\rangle_{\mathscr{H}} =0\quad\text{for all }v\in V,\] (2.11b) \[\left\langle\alpha_{k}(q^{k+1}-q_{\circ})+\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{B}^{\prime}_{u^{k}}\delta p^{k+1},q\right\rangle_{\mathscr{Q}} =0\quad\text{for all }q\in\mathscr{Q},\] (2.11c) _where_ \(\mathcal{J}_{\mathscr{Q}}:\mathscr{Q}\to\mathscr{Q}^{\prime}\) _is the Riesz-isomorphism._
3. _Note that_ (2.11c) _implies the relationship_ \[q^{k+1}=q_{\circ}-\frac{1}{\alpha_{k}}\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{ B}^{\prime}_{u^{k}}\delta p^{k+1}\quad\text{in }\mathscr{Q}.\] (2.12) _In_ Section 3.1 _we will essentially use_ (2.12) _to adaptively build finite-dimensional (reduced) parameter spaces for the iterates_ \(\{q^{k}\}_{k\geq 0}\)_, where we utilize_ \(q_{\circ}\) _as well as the ansatz spaces for the states and the dual solutions._
## 3 Adaptive RB schemes for infinite-dimensional parameter spaces
Every step of the IRGNM consists of solving the linear-quadratic PDE-constrained optimization problem \((\mathbf{IP}^{k}_{\alpha})\) on \(\mathscr{Q}\) at least once. Thus, Algorithm 1 can be considered a many-query context for the parameterized equation (2.1). Hence, using the RB method to speed up the solution process seems natural. Another reason for applying model reduction methods in the context of inverse problems is the situation where the noise level is so high that only a vague reconstruction of the parameter is possible.
The fundamental problem in applying the RB method in our context is that the parameter space \(\mathscr{Q}\) is infinite-dimensional and has to be discretized itself. In a discrete setting with \(\mathscr{Q}_{h}=\operatorname{span}\left\{\phi_{1},...,\phi_{N_{\mathscr{Q}}}\right\}\), one would replace the infinite-dimensional parameter with its finite element type approximation
\[q=\sum_{i=1}^{N_{\mathscr{Q}}}\mathsf{q}_{i}\phi_{i}\in\mathscr{Q}_{h} \tag{3.1}\]

with coefficients \((\mathsf{q}_{1},\ldots,\mathsf{q}_{N_{\mathscr{Q}}})\in\mathbb{R}^{N_{\mathscr{Q}}}\) to obtain a parameter affine decomposition of the bilinear form \(a\).
For instance, if we can write the bilinear form \(a\) as
\[a(u,v;q)=a_{1}(u,v)+a_{2}(u,v;q)\quad\text{for $u,v\in V$ and $q\in\mathscr{Q}$} \tag{3.2}\]
and use (3.1), we derive the affine decomposition (cf. Assumption 2.1) of \(a\) as
\[a(u,v;q)=a_{1}(u,v)+\sum_{i=1}^{N_{\mathscr{Q}}}\mathsf{q}_{i}\,a_{2}(u,v;\phi_{i})\quad\text{for all $u,v\in V$}. \tag{3.3}\]
Note that (3.2) holds for our state equations introduced in Example 2.4.
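As a concrete illustration of (3.3), the following minimal Python sketch assembles \(A(q)=A_{1}+\sum_{i}\mathsf{q}_{i}A_{2,i}\) from precomputed sparse component matrices; the random data only serve to make the snippet self-contained, and all names are illustrative.

```python
# Affine assembly of the system matrix, cf. (3.3): the components A1 and
# A2_list[i] (stiffness-type matrices for the basis functions phi_i) are
# assumed to be assembled once; evaluating A(q) is then a cheap sum.
import numpy as np
import scipy.sparse as sps

def assemble_system(A1, A2_list, q_coeffs):
    """Return A(q) = A1 + sum_i q_i * A2_i."""
    A = A1.copy()
    for q_i, A2_i in zip(q_coeffs, A2_list):
        A = A + q_i * A2_i
    return A

# usage with random sparse placeholder data of FE dimension n
n = 100
A1 = sps.identity(n, format="csr")
A2_list = [sps.random(n, n, density=0.01, format="csr") for _ in range(5)]
q = np.random.rand(5)
A_q = assemble_system(A1, A2_list, q)
```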
Importantly, the affine decomposition will consist of an arbitrarily large number \(N_{\mathscr{Q}}\in\mathbb{N}\) of affine components determined by the discretization density of the space \(\mathscr{Q}_{h}\). This leads to the following known challenges in RB methods:
1. A high number of affine components need to be projected onto the reduced space.
2. The assembly of residual-based error estimates is infeasible because it involves the computation of Riesz representatives for each affine component per reduced basis element.
In what follows, we resolve these issues and derive an RB method for the IRGNM in two steps:
* **Step 1:** We first concentrate on a problem-specific dimension reduction of the infinite-dimensional parameter space \(\mathscr{Q}\).
* **Step 2:** We propose a classical RB approximation of the primal and dual state space \(V\) on top of the reduced parameter space that overcomes both of the above-mentioned problems (i) and (ii).
### Step 1: Reduction of the parameter space
Since the parameter space in our case is generally infinite-dimensional, we aim at reducing the number of affine components first and to adaptively construct a low-dimensional reduced space \(\mathscr{Q}_{r}=\operatorname{span}\left\{\Phi_{1},...,\Phi_{n_{\mathscr{Q}}}\right\}\subset\mathscr{Q}\) with dimension \(n_{\mathscr{Q}}\ll N_{\mathscr{Q}}\). For \(q=\sum_{i=1}^{n_{\mathscr{Q}}}\mathsf{q}_{i}\Phi_{i}\in\mathscr{Q}_{r}\) and a bilinear form \(a\) satisfying Assumption 2.1 and (3.2) we obtain a low-dimensional representation
\[a(u,v;q)=a_{1}(u,v)+\sum_{i=1}^{n_{\mathscr{D}}}\mathsf{q}_{i}a_{2,i}(u,v) \quad\text{for all $u,v\in V$} \tag{3.4}\]
with \(a_{2,i}(u,v)\coloneqq a_{2}(u,v;\phi_{i})\) for \(i=1,...,n_{\mathscr{D}}\). As a motivation for how to choose the snapshots for the parameter space, we recall Remark 2.9 and therefore consider the first-order optimality condition of the regularized discrepancy
\[\hat{J}_{\alpha}(q)\coloneqq\hat{J}(q)+\frac{\alpha}{2}\left\|q-q_{\circ}\right\|_{\mathscr{Q}}^{2}\quad\text{for $\alpha>0$, $q_{\circ}\in\mathscr{Q}$ and $q\in\mathscr{Q}_{\mathsf{ad}}$}.\]
Suppose \(\bar{q}\) is a local unconstrained minimizer of \(\hat{J}_{\alpha}\), then it holds using (2.9)
\[\nabla\hat{J}_{\alpha}(\bar{q})=\alpha(\bar{q}-q_{\circ})+\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{B}_{\bar{u}}^{\prime}p(\bar{q})=0; \tag{3.5}\]
compare also (2.12). It follows from (3.5) that the optimal parameter \(\bar{q}\) is in \(\mathscr{Q}_{r}\) provided \(q_{\circ}\in\mathscr{Q}_{r}\) and \(\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{B}_{\bar{u}}^{\prime}p(\bar{q})\in \mathscr{Q}_{r}\). If one assumes a low-dimensional RB representation of the adjoint state the upper optimality condition connects the reduced basis of the parameter space with the reduced basis of the adjoint state. Therefore, we will adaptively enrich the reduced parameter space with the gradients \(\nabla\hat{J}(q^{k})=\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{B}_{u^{k}}^{ \prime}p(q^{k})\) of the iterates \(q^{k}\).
The IRGNM with adaptive parameter space reduction (cf. Algorithm 2) consists of outer and multiple inner iterations. In the outer iteration, we construct the reduced parameter space and in every inner iteration, we solve a subproblem with the IRGNM on the current reduced parameter space. Suppose we are given an iterate \(q^{k}\in\mathscr{C}\) sufficiently close to the solution. The current reduced parameter space \(\mathscr{Q}_{r}^{k}\) contains the current iterate \(q^{k}\), the regularization center \(q_{\circ}\), and the current gradient \(\nabla\hat{J}(q^{k})\in\mathscr{Q}\) as basis functions. Then, instead of (2.6) we solve the low-dimensional minimization problem
\[\min\hat{J}(q)\quad\text{s.t.}\quad q\in\mathscr{C}\cap\mathscr{Q}_{r}^{k} \tag{3.6}\]
using the IRGNM to get the next iterate \(q^{k+1}\in\mathscr{Q}_{r}^{k}\). Here we suppose the following hypothesis (cf. Assumption 2.5 and Remark 2.6):
**Assumption 3.1**.: _For any \(k\in\mathbb{N}\) problem (3.6) has a unique (local) minimizer \(\bar{q}^{k+1}\) in the interior of \(\mathscr{C}\cap\mathscr{Q}_{r}^{k}\). Moreover, \(\hat{J}\) is continuously Frechet-differentiable in a neighborhood of \(q^{k+1}\)._
**Remark 3.2**.: _It follows from Assumption 3.1 that (3.6) can be considered as a locally unconstrained minimization problem so that the constraint \(q\in\mathscr{C}\) can be neglected._
As a stopping criterion for (3.6), one can use a discrepancy principle (compare (2.8)) with a possibly modified noise level \(\bar{\delta}_{k}\geq\delta\). To update the parameter space, we compute the new gradient \(\nabla\hat{J}(q^{k+1})\) and construct the space \(\mathscr{Q}_{r}^{k+1}\) by adding \(\nabla\hat{J}(q^{k+1})\) to \(\mathscr{Q}_{r}^{k}\) and subsequent (Gram-Schmidt) orthonormalization. The initial regularization parameter \(\alpha_{0}^{k}\) of the inner IRGNM at outer step \(k\) is chosen as follows: \(\alpha_{0}\coloneqq\alpha_{0}^{0}>0\) and \(\alpha_{0}^{k}\coloneqq\alpha_{1}^{k-1}\), where \(\alpha_{1}^{k-1}\) is the first accepted inner regularization parameter satisfying (2.7) of the previous outer iteration \(k-1\). This procedure is repeated until the convergence criterion (2.8) is fulfilled. We summarize the method in Algorithm 2.
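The enrichment step of Algorithm 2 amounts to a single Gram-Schmidt step; a minimal Python sketch (with a hypothetical Gramian `M_Q` representing the \(\mathscr{Q}\)-inner product and the basis stored as columns of `Q_r`) could read:

```python
import numpy as np

def enrich_parameter_space(Q_r, new_grad, M_Q, tol=1e-10):
    """One Gram-Schmidt step of Algorithm 2 (sketch): orthonormalize the new
    FOM gradient against the columns of Q_r w.r.t. the inner product induced
    by M_Q and append it if it carries new information."""
    v = new_grad.astype(float).copy()
    for j in range(Q_r.shape[1]):            # project out the existing basis
        v -= (Q_r[:, j] @ (M_Q @ v)) * Q_r[:, j]
    nrm = np.sqrt(v @ (M_Q @ v))
    if nrm < tol:                            # gradient already in span(Q_r)
        return Q_r
    return np.column_stack([Q_r, v / nrm])
```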
Another reason why the gradient is expected to be a good choice for the enrichment of the parameter space is that it provides information in the direction of the steepest descent of the objective \(\hat{J}\), i.e., we have \(q^{k}-t_{k}\nabla\hat{J}(q^{k})\in\mathscr{Q}_{r}^{k}\) for any stepsize \(t_{k}>0\), which implies a sufficient decrease of the objective \(\hat{J}\). After a few iterations, the space \(\mathscr{Q}_{r}^{k}\) contains also approximate information about the curvature in the form \(\nabla\hat{J}(q^{k})-\nabla\hat{J}(q^{k-1})\) and \(q^{k}-q^{k-1}\). This is used in Quasi-Newton methods ensuring superlinear local convergence. In this way, one can
interpret Algorithm 2 as a subspace optimization method (cf. [13]; see also [20] in an inverse problems context) that collects a set of search directions and computes the exact step length in each iteration.
**Remark 3.3**.: _Other snapshot choices for the parameter space are possible:_
* _local or coarse snapshots consisting of indicator functions of some patches of the underlying mesh,_
* _curvature information by using an approximate solution of the linear-quadratic subproblem_ \((\mathbf{IP}_{\alpha}^{k})\) _or hessian products of_ \(\hat{J}_{\alpha}\) _in certain directions,_
* _gradients_ \(\nabla\hat{J}(q)\) _for_ \(q\in\operatorname{span}\left\{q^{k}-q^{k-1},\nabla\hat{J}(q^{k})-\nabla\hat{J} (q^{k-1})\right\}\)_, where the gradient serves as a non-linear mechanism to generate new information for_ \(\mathscr{Q}_{r}^{k}\)_,_
* _subspace optimization techniques to construct_ \(\mathscr{Q}_{r}^{k}\) _(see_ [13]_)._
Our described procedure serves two major purposes:
* Reducing the number of unknowns for the IRGNM substantially such that \((\mathbf{IP}_{\alpha}^{k})\) does not operate on \(\mathscr{Q}\) (or \(\mathscr{Q}_{h}\)) but on the low dimensional space \(\mathscr{Q}_{r}\).
* Reducing the number of affine components from \(N_{\mathscr{Q}}\) in (3.3) to \(n_{\mathscr{Q}}\) in (3.4).
Point (i) alone is of interest in terms of regularization by discretization even if no additional state reduction is performed. Hence, we also investigate this method in the numerical experiments. However, since one has to repeatedly solve a FOM subproblem (3.6), the method will not pay off in terms of computational time unless the dimension of the parameter space dominates the cost of the FOM algorithm. If this is not the case, we suggest solving the subproblems (3.6) inexactly by using a reduced-order approximation \(\hat{J}_{r}\) of \(\hat{J}\). Significantly, Point (ii) is the reason why we can use an efficient RB approximation for the state space \(V\). This approach is explained in the next section.
### Step 2: Reduction of the state space
If a reduced parameter space \(\mathscr{Q}_{r}\) is available, i.e., we have the low-dimensional affine representation (3.4) of the bilinear form \(a\) on \(\mathscr{Q}_{r}\), we can efficiently apply the standard RB method to reduce the space \(V\) for parameters in \(\mathscr{Q}_{r}\). To this end, let a reduced state space \(V_{r}\) of dimension \(n_{V}\in\mathbb{N}\) be given. Given a parameter \(q\in\mathscr{Q}_{r}\cap\mathscr{Q}_{\mathsf{ad}}\), the state and parameter RB approximation of the state is given as the solution \(u_{r}=u_{r}(q)\in V_{r}\) of
\[a(u_{r},v;q)=\ell(v)\quad\text{for all }v\in V_{r}. \tag{3.7}\]
We define the RB solution operator
\[\mathcal{S}_{r}:\mathscr{Q}_{r}\cap\mathscr{Q}_{\mathsf{ad}}\to V_{r},\quad \mathscr{Q}_{r}\cap\mathscr{Q}_{\mathsf{ad}}\ni q\mapsto\mathcal{S}_{r}(q)=u_ {r}(q)\in V_{r}.\]
Furthermore, the RB forward operator \(\mathcal{F}_{r}\) and the RB discrepancy \(\hat{J}_{r}:\mathscr{Q}_{r}\cap\mathscr{Q}_{\mathsf{ad}}\to\mathbb{R}\) are introduced by
\[\hat{J}_{r}(q):=\frac{1}{2}\left\|\mathcal{F}_{r}(q)-y^{\delta}\right\|_{ \mathscr{H}}^{2}\quad\text{with }\mathcal{F}_{r}=\mathcal{C}\circ\mathcal{S}_{r} \tag{3.8}\]
so that we solve
\[\min\hat{J}_{r}(q)\quad\text{s.t.}\quad q\in\mathscr{C}\cap\mathscr{Q}_{r} \tag{3.9}\]
instead of (3.6). Similar to Assumption 3.1, we suppose that (3.9) possesses a unique (local) solution \(\bar{q}_{r}\) in the interior of \(\mathscr{C}\cap\mathscr{Q}_{r}\). Let us mention that later both \(\mathscr{Q}_{r}\) and \(V_{r}\) are enriched and therefore depend on the iteration counter \(k\).
Similar reduction schemes are used to compute the derivatives of \(\mathcal{F}_{r}\) and \(\hat{J}_{r}\). For instance, the adjoint \(p_{r}=p_{r}(q)\in V_{r}\) is given as the solution of
\[a(v,p_{r};q)=-\left\langle\mathcal{C}^{*}(\mathcal{C}u_{r}-y^{\delta}),v \right\rangle_{V_{r}}\quad\text{for all }v\in V_{r}\]
and the gradient of the reduced discrepancy can be expressed as
\[\nabla\hat{J}_{r}(q)=\mathcal{J}_{\mathscr{Q}}^{-1}\mathcal{B}^{\prime}_{u_{r}}p_{r}\in\mathscr{Q}_{r}. \tag{3.10}\]
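For illustration, the reduced solves and the coefficients of (3.10) in an orthonormal basis of \(\mathscr{Q}_{r}\) can be sketched as follows; the projected matrices `A_r`, `C_r`, the reduced data `y_r` and the components `B_r_list` (corresponding to \(a_{2}(\cdot\,,\cdot\,;\Phi_{i})\)) are assumed to be precomputed, and all names are illustrative.

```python
import numpy as np

def reduced_primal(A_r, f_r):
    # reduced state u_r from (3.7): A_r(q) u_r = f_r
    return np.linalg.solve(A_r, f_r)

def reduced_adjoint(A_r, C_r, u_r, y_r):
    # reduced adjoint p_r: A_r(q)^T p_r = -C_r^T (C_r u_r - y_r)
    return np.linalg.solve(A_r.T, -C_r.T @ (C_r @ u_r - y_r))

def reduced_gradient(B_r_list, u_r, p_r):
    # i-th coefficient of grad J_r w.r.t. an orthonormal basis of Q_r:
    # <grad J_r, Phi_i>_Q = d_q a(u_r, p_r; Phi_i) = p_r^T B_r_i u_r, cf. (2.9)
    return np.array([p_r @ (B_i @ u_r) for B_i in B_r_list])
```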
To control the RB error of \(\hat{J}_{r}\) (w.r.t. to the state space reduction), we need an a-posteriori error estimate. For fixed \(q\in\mathscr{Q}_{r}\), \(u\in V\) we define the primal residual \(r_{pr}(u;q)\in V^{\prime}\) as
\[r_{pr}(u;q)[v]\coloneqq\ell(v)-a(u,v;q)\quad\text{for all }v\in V,\]
whereas the adjoint residual is given for \(p\in V\) as
\[r_{du}(u,p;q)[v]\coloneqq-\left\langle\mathcal{C}^{*}(\mathcal{C}u-y^{\delta}),v\right\rangle_{V}-a(v,p;q)\quad\text{for all }v\in V.\]
With these definitions, we can formulate the error estimator for the discrepancy \(\hat{J}\).
**Proposition 3.4** (A-posteriori error estimate for \(\hat{J}\)).:
_Let \(q\in\mathscr{Q}_{r}\cap\mathscr{Q}_{\mathsf{ad}}\) and \(\underline{a}_{q}>0\) be the (\(q\)-dependent) coercivity constant for \(a(\cdot,\cdot\,;q)\). Then:_
\[|\hat{J}(q)-\hat{J}_{r}(q)|\leq\Delta_{\,\hat{J}}(q),\]
_with_
\[\Delta_{\,\hat{J}}(q) \coloneqq\frac{\left\|\mathcal{C}\right\|_{\mathscr{L}(V,\mathscr{H})}^{2}}{2}\Delta_{pr}(q)^{2}+\left\|r_{du}(u_{r},p_{r};q)\right\|_{V^{\prime}}\Delta_{pr}(q),\] \[\Delta_{pr}(q) \coloneqq\frac{1}{\underline{a}_{q}}\left\|r_{pr}(u_{r};q)\right\|_{V^{\prime}}.\]
Proof.: The proof is analogous to the error estimation in [13, Proposition 3.6].
The offline/online decomposition of the error estimator is heavily dependent on the RB sizes \(n_{\mathscr{Q}}\) and \(n_{V}\) since the number of needed Riesz-representatives is of the order of \(n_{\mathscr{Q}}n_{V}\). Furthermore, projecting the operators (3.7) onto \(V_{r}\) is also dependent on the number of affine coefficients. Hence, a low number of affine components \(n_{\mathscr{Q}}\) is crucial.
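Assuming the dual norms of the residuals and a lower bound for the coercivity constant are available online, the evaluation of the bound from Proposition 3.4 is inexpensive; the following Python sketch uses hypothetical callables for these quantities.

```python
def delta_J_hat(q, primal_res_norm, dual_res_norm, C_norm, coercivity_lb):
    """Evaluate the bound of Proposition 3.4 (sketch): primal_res_norm and
    dual_res_norm stand for the dual norms of r_pr and r_du, C_norm for
    ||C||_{L(V,H)} and coercivity_lb(q) for a lower bound of a_q."""
    d_pr = primal_res_norm(q) / coercivity_lb(q)     # Delta_pr(q)
    return 0.5 * C_norm**2 * d_pr**2 + dual_res_norm(q) * d_pr
```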
### Parameter and state space RB trust region IRGNM
The efficient error bound on \(\mathscr{Q}_{r}\) from Section 3.2 enables to develop a certification of the RB model of the state equation by an error-aware trust region strategy. In this section, we combine ideas from the trust region reduced basis (TRRB) method from [13, 14, 15] and the strategies presented in the last two subsections to (i) cope with the infinite-dimensional parameter space and (ii) solve the ill-posed problem (**IP**) with an adaptive IRGNM in a trust region (TR) setting. For (i) we adaptively enrich the reduced parameter space \(\mathscr{Q}_{r}\) with the gradient of the current iterate (as described in Section 3.1). For (ii), we use a parameter and state space reduced inner IRGNM instead of a state space reduced Newton method as in [13]. In particular, we only enforce an error-aware TR criterion on the state space.
Just as the IRGNM with pure parameter space reduction, the parameter and state space reduced TR-IRGNM (\(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM) consists of outer and multiple inner iterations. In every outer iteration \(k\), we enrich the reduced parameter space \(\mathscr{Q}_{r}^{k}\) as well as the reduced state space \(V_{r}^{k}\). The parameter space reduction \(\mathscr{Q}_{r}^{k}\) simplifies the affine decomposition, enabling the state space RB method to be efficient. Let Assumption 2.5 be valid and \(q^{0}\in\mathscr{C}\subset\mathscr{Q}_{\mathsf{ad}}\) be sufficiently close to the (unconstrained) solution \(\bar{q}\). The initial reduced state space \(V_{r}^{0}\) contains the primal and dual states \(u(q^{0}),p(q^{0})\in V\). Since the adjoint state is already computed, the gradient \(\nabla\hat{J}(q^{0})\) is cheaply available and is added to the initial reduced parameter space \(\mathscr{Q}_{r}^{0}\). As before, we suppose \(q^{0},q_{\circ}\in\mathscr{Q}_{r}^{0}\). If the iterate \(q^{k}\) does not satisfy the overall discrepancy principle (2.8), we perform the following steps to update the iterate.
Computation of the AGC point. Throughout we assume that the iterates \(q^{k}\) belong to \(\mathscr{C}\subset\mathscr{Q}_{\mathsf{ad}}\). First, we compute the approximated generalized Cauchy (AGC) point \(q^{k}_{{}_{\text{AGC}}}\in\mathscr{Q}_{r}^{k}\), which can be identified in the inverse problem setting as a Landweber-type update and is defined as follows
\[q^{k}_{{}_{\text{AGC}}}=q^{k}-t_{k}\nabla\hat{J}_{r}(q^{k})\in\mathscr{Q}_{r}^{k}. \tag{3.11}\]
In (3.11) the scalar \(t_{k}>0\) is a stepsize chosen to satisfy \(q^{k}_{{}_{\text{AGC}}}\in\mathscr{Q}_{\mathsf{ad}}\), a sufficient decrease condition in the reduced objective

\[\hat{J}_{r}(q^{k}_{{}_{\text{AGC}}})-\hat{J}_{r}(q^{k})\leq-\frac{\kappa_{{}_{\text{arm}}}}{t_{k}}\left\|q^{k}-q^{k}_{{}_{\text{AGC}}}\right\|_{\mathscr{Q}}^{2}, \tag{3.12}\]
for \(\kappa_{{}_{\text{arm}}}>0\) and the TR condition
\[\mathcal{R}_{\hat{J}}^{k}(q^{k}_{{}_{\text{AGC}}})\leq\eta^{(k)}. \tag{3.13}\]
Here \(\eta^{(k)}>0\) denotes the current TR radius and the relative error estimator is defined as
\[\mathcal{R}_{\hat{J}}^{k}(q)\coloneqq\frac{\Delta_{\hat{J}}(q)}{\hat{J}_{r}(q)}\quad\text{for }q\in\mathscr{Q}_{r}^{k}\cap\mathscr{Q}_{\mathsf{ad}}\]
with \(\Delta_{\hat{J}}(q)\) from Proposition 3.4. The Landweber step/AGC point ensures a sufficient decrease in every iteration and serves as an initial guess for the subproblem solver. Conditions (3.12) and (3.13) are found via a backtracking strategy, which is initialized for the computation of the AGC point as \(\bar{t}=0.5\|\nabla\hat{J}(q^{0})\|_{\mathscr{Q}}^{-1}\), which is the step length that was used in [11, 12] for the Landweber algorithm.
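A minimal Python sketch of this backtracking (with hypothetical callables for the reduced quantities, the admissibility test and the relative estimator \(\mathcal{R}_{\hat{J}}^{k}\)) could read:

```python
def agc_point(q_k, J_r, grad_J_r, R_k, in_Q_ad, norm_Q,
              t_init, eta_k, kappa_arm=1e-12, max_backtrack=50):
    """Backtracking for the AGC point (3.11) (sketch): halve t_k until the
    step is admissible and satisfies the sufficient decrease (3.12) and the
    TR condition (3.13)."""
    g = grad_J_r(q_k)
    t = t_init
    for _ in range(max_backtrack):
        q_agc = q_k - t * g
        if (in_Q_ad(q_agc)
                and J_r(q_agc) - J_r(q_k)
                    <= -(kappa_arm / t) * norm_Q(q_k - q_agc) ** 2
                and R_k(q_agc) <= eta_k):
            return q_agc
        t *= 0.5
    raise RuntimeError("no admissible AGC step found")
```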
The TR subproblem. We compute the trial step \(q_{{}_{\text{trial}}}\in\mathscr{C}\cap\mathscr{Q}_{r}^{k}\) by solving the parameter and state space-reduced problem

\[\min_{q\in\mathscr{C}\cap\mathscr{Q}_{r}^{k}}\hat{J}_{r}(q)\quad\text{s.t.}\quad\mathcal{R}_{\hat{J}}^{k}(q)\leq\eta^{(k)}.\quad(\mathbf{IP}_{r}^{k})\]
Analogously to Remark 2.6 we assume that (\(\mathbf{IP}_{r}^{k}\)) admits a locally unique (unconstrained) solution in \(\mathscr{Q}_{r}^{k}\) so that we can neglect the convexity constraint \(q\in\mathscr{C}\) in the numerical solution method. Thus, (\(\mathbf{IP}_{r}^{k}\)) can be solved using the IRGNM with Tikhonov regularization (see Algorithm 1). Again, the additional TR constraint is treated using an Armijo-type backtracking technique (see [16]), which means the error estimator \(\Delta_{\hat{J}}\) has to be cheaply available. For the initialization of the Armijo backtracking we use the initial step length \(\bar{t}=1\) in this case. As an initial guess, we use the AGC point \(q^{k}_{{}_{\text{AGC}}}\), which is defined in (3.11). The initial regularization parameter \(\alpha_{0}^{k}\) is chosen as in Section 3.1. Note that we only control the approximation quality of the state-space RB with \(\Delta_{\hat{J}}\), which also contains approximation information about the primal and the dual state. However, apart from the dual model, we do not control derivative information, which has proven advantageous in terms of computational efficiency
in former works (see e.g. [BKM\({}^{+}\)22]). As stopping criteria for the subproblem, we use on the one hand a reduced discrepancy principle
\[\|\mathcal{F}_{r}(q_{\text{\tiny trial}})-y^{\delta}\|_{\mathscr{H}}\leq\tilde{ \tau}\tilde{\delta}_{k} \tag{3.14}\]
with a (possibly) modified noise level \(\tilde{\delta}_{k}\geq\delta\), e.g. \(\tilde{\delta}_{k}=\hat{J}(q^{k})\) and \(\tilde{\tau}\geq 0\). In (3.14) we denote by \(q_{\text{\tiny trial}}\) the solution to \((\mathbf{IP}_{r}^{k})\) for given \(\eta^{(k)}\). On the other hand, we use the usual TR condition
\[\beta_{1}\eta^{(k)}\leq\mathcal{R}_{\hat{J}}^{k}(q_{\text{\tiny trial}})\leq \eta^{(k)}, \tag{3.15}\]
where \(\beta_{1}\in(0,1)\) is close to one to prevent the trial step from being near the boundary of the trust region, where the ROM is inaccurate.
Acceptance of the trial step and modification of the TR radius.We consider cheaply available sufficient and necessary optimality conditions for the acceptance of the trial step \(q_{\text{\tiny trial}}\) (see [QGVW17, BKM\({}^{+}\)22]). A sufficient condition for acceptance is the following
\[\hat{J}_{r}(q_{\text{\tiny trial}})+\Delta_{\hat{J}}(q_{\text{\tiny trial}}) <\hat{J}_{r}(q_{\text{\tiny AGC}}^{k}) \tag{3.16}\]
and a necessary condition for acceptance is

\[\hat{J}_{r}(q_{\text{\tiny trial}})-\Delta_{\hat{J}}(q_{\text{\tiny trial}})\leq\hat{J}_{r}(q_{\text{\tiny AGC}}^{k}). \tag{3.17}\]
If these cheap conditions do not give information about acceptance, we use the corresponding FOM condition:
\[\hat{J}(q_{\text{\tiny trial}})\leq\hat{J}_{r}(q_{\text{\tiny AGC}}^{k}). \tag{3.18}\]
In the case of acceptance, we set \(q^{k+1}=q_{\text{\tiny trial}}\) and enlarge the TR radius, if the RB objective predicts the actual reduction, i.e., we check if
\[\varrho^{(k)}\coloneqq\frac{\hat{J}(q^{k})-\hat{J}(q^{k+1})}{\hat{J}_{r}(q^{k })-\hat{J}_{r}(q^{k+1})}\geq\beta_{2}\in\bigg{[}\frac{3}{4},1\bigg{)}. \tag{3.19}\]
Note that in case of acceptance, we enrich the spaces \(\mathscr{Q}_{r}^{k}\) and \(V_{r}^{k}\), such that the FOM quantity \(\hat{J}(q^{k+1})\) is available anyway. In the case of rejection, we set \(q^{k+1}=q^{k}\) and diminish the TR radius.
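The complete accept/reject decision, including the radius update, can be sketched as follows; all callables are illustrative placeholders, the FOM discrepancy is evaluated only in the inconclusive case, and guarding against a zero denominator in (3.19) is omitted for brevity.

```python
def decide_step(q_trial, q_agc, q_k, J_fom, J_r, Delta_J,
                eta, beta_2=0.75, beta_3=0.5):
    """Acceptance test of the TR-IRGNM (sketch): the cheap bounds (3.16) and
    (3.17) are tried first, the FOM condition (3.18) only if inconclusive;
    the radius update follows (3.19)."""
    lower = J_r(q_trial) - Delta_J(q_trial)
    upper = J_r(q_trial) + Delta_J(q_trial)
    if upper < J_r(q_agc):           # sufficient condition (3.16): accept
        accepted = True
    elif lower > J_r(q_agc):         # necessary condition (3.17) fails: reject
        accepted = False
    else:                            # inconclusive: FOM check (3.18)
        accepted = J_fom(q_trial) <= J_r(q_agc)
    if accepted:
        rho = (J_fom(q_k) - J_fom(q_trial)) / (J_r(q_k) - J_r(q_trial))
        if rho >= beta_2:            # (3.19): ROM predicts the decrease well
            eta /= beta_3            # enlarge TR radius
        return q_trial, eta
    return q_k, beta_3 * eta         # reject: shrink TR radius
```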
Extending the reduced spaces \(\mathscr{Q}_{r}\) and \(\boldsymbol{V}_{r}\). In case of acceptance, we compute the state \(u(q^{k+1})\) and check the FOM discrepancy principle. If this is not fulfilled, we compute the adjoint state \(p(q^{k+1})\) and the FOM gradient \(\nabla\hat{J}(q^{k+1})\). Then, we enrich the space \(\mathscr{Q}_{r}^{k}\) by orthonormalization and enlarge the affine decomposition. Afterward, we update the RB space \(V_{r}^{k}\) using \(u(q^{k+1})\) and \(p(q^{k+1})\) by orthonormalization, and the error estimator is extended by the Riesz representatives of the new affine component and the new state space RB elements. For another combined parameter and state space enrichment strategy we refer to Remark 3.5. The method can furthermore be combined with the skip-basis-enrichment strategy from [BKM\({}^{+}\)22].
Assembly of the error estimator. The assembly of the error estimator is the most expensive part in terms of computation time and FOM solves (see Section 4), due to the computation of the Riesz representatives, especially if \(n_{\mathscr{Q}}\) and \(n_{V}\) are large. In this case, it may be more expensive to online/offline decompose the residual norms in Proposition 3.4 than to evaluate them online in the Armijo-type line search. If this is the case in iteration \(k>2\), we do not assemble the error estimator anymore but compute it online in the next iteration. Formally, we stop the assembly of the error estimator in iteration \(k+1\) if it holds
\[K^{k}_{\text{\tiny{ass}}}>K^{k}_{\text{\tiny{online}}},\]
where
\[K^{k}_{\text{\tiny{ass}}}\coloneqq\dim(V^{k}_{r})\big{(}\dim(\mathscr{Q}^{k}_{r})-\dim(\mathscr{Q}^{k-1}_{r})\big{)}+\dim(\mathscr{Q}^{k-1}_{r})\big{(}\dim(V^{k}_{r})-\dim(V^{k-1}_{r})\big{)}\]
is the number of Riesz representatives that are needed to update the error estimator. On the other hand, \(K^{k}_{\text{\tiny{online}}}\) is the number of FOM solves that would be needed to compute the error estimator online in iteration \(k\). In detail, this is twice the number of total Armijo iterations for the computation of the AGC point and the steplength in every inner iteration of the inner IRGNM at outer iteration \(k\), since in every Armijo iteration we have to evaluate the error estimator \(\Delta_{\hat{J}}\) online, which involves the computation of the primal and dual residual norm (cf. Proposition 3.4). The whole procedure of the parameter and state space reduced method is depicted in Algorithm 3.
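A Python sketch of this switching rule (with illustrative bookkeeping of the RB dimensions of the last two iterations) could read:

```python
def keep_assembling_estimator(dims_V, dims_Q, n_online_evals):
    """Switching rule (sketch): stop the offline assembly of the estimator
    once updating it costs more Riesz representatives (K_ass) than the
    FOM-sized solves an online evaluation needed (K_online); dims_V/dims_Q
    hold the RB dimensions of the last two iterations."""
    K_ass = (dims_V[-1] * (dims_Q[-1] - dims_Q[-2])
             + dims_Q[-2] * (dims_V[-1] - dims_V[-2]))
    K_online = 2 * n_online_evals    # primal + dual residual norm per evaluation
    return K_ass <= K_online
```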
**Remark 3.5** (Other enrichment strategies).: _One can also enrich the state RB space with the sensitivities \(\delta u\) and \(\delta p\) at \(q^{k}\) in a direction \(d\) (see (2.11)). Then curvature information in the form \(\nabla^{2}\tilde{J}_{\alpha}(q^{k})d\) is available for an enrichment of \(\mathscr{Q}^{k}_{r}\), where \(\tilde{J}_{\alpha}\) is the linearized discrepancy cost from \((\mathbf{IP}^{k}_{\alpha})\). We tested this approach numerically for \(d=\nabla\hat{J}(q^{k})\) but did not achieve a more efficient RB method for our examples._
**Remark 3.6** (Convergence).: _A convergence result for an error-aware TRRB method for PDE-constrained parameter optimization using only state space reduction was given in [3]. For the IRGNM, a convergence and regularization result for adaptive finite element approximations was given in [10]. Further, the authors in [11] proved convergence and a regularization property for ill-posed problems for a standard metric TR algorithm under similar conditions as the ones mentioned in Remark 2.8. Thus, we expect a convergence result also for our method under usual conditions, in particular, assuming more regularity for the primal and dual variables. However, this aspect is not addressed in the present paper and is subject to future work._
## 4 Numerical experiments
In this section, we compare the numerical performance of the proposed algorithms applied on the two scenarios from Example 2.4. For the sake of brevity, in this section, we call them as follows:
```
0: Initial guess \(q^{0}\), discrepancy parameters \(\tau,\tilde{\tau}>0\), noise level \(\delta\), TR radius \(\eta^{(0)}\), boundary parameter \(\beta_{1}\in(0,1)\), tolerance for enlargement of the radius \(\beta_{2}\in[3/4,1)\), shrinking factor \(\beta_{3}\in(0,1)\), initial regularization parameter \(\alpha_{0}^{0}\), Armijo parameter \(\kappa_{\rm{arm}}>0\), regularization center \(q_{\circ}\).
1: Set \(k=0\) and initialize the RB model at \(q^{0}\): create \(V_{r}^{0}=\mathrm{span}\{u(q^{0}),p(q^{0})\}\), \(\mathscr{Q}_{r}^{0}=\mathrm{span}\{q_{\circ},q^{0},\nabla\hat{J}(q^{0})\}\) by orthonormalization.
2:while\(\|\mathcal{F}(q^{k})-y^{\delta}\|_{\mathscr{H}}>\tau\delta\)do
3: Compute AGC point \(q^{k}_{\rm{AGC}}\) according to (3.11).
4: Solve (\(\mathbf{IP}_{r}^{k}\)) using IRGNM with stopping criteria (3.14), (3.15) for \(q_{\rm{trial}}\)
5: with initial regularization \(\alpha_{0}^{k}\).
6:if (3.16) then
7: Accept trial step \(q^{k+1}=q_{\rm{trial}}\), update \(V_{r}^{k}\) at \(u(q^{k+1}),\ p(q^{k+1})\) and
8:\(\mathscr{D}_{r}^{k}\) at \(\nabla\hat{J}(q^{k+1})\).
9: Compute \(\varrho^{(k)}\) from (3.19).
10:if\(\varrho^{(k)}>\beta_{2}\)then
11: Enlarge radius \(\eta^{(k+1)}=\beta_{3}^{-1}\eta^{(k+1)}\).
12:endif
13:elseif not (3.17) then
14: Reject trial step, set \(q^{k+1}=q^{k}\) and shrink radius \(\eta^{(k+1)}=\beta_{3}\eta^{(k+1)}\).
15:else
16: Compute the FOM discrepancy \(\hat{J}(q_{\rm{trial}})\).
17:if (3.18) then
18: Accept trial step \(q^{k+1}=q_{\rm{trial}}\), update \(V_{r}^{k}\) at \(u(q^{k+1}),p(q^{k+1})\) and
19:\(\mathscr{D}_{r}^{k}\) at \(\nabla\hat{J}(q^{k+1})\).
20:if\(\varrho^{(k)}>\beta_{2}\)then
21: Enlarge radius \(\eta^{(k+1)}=\beta_{3}^{-1}\eta^{(k+1)}\).
22:endif
23:else
24: Reject trial step, set \(q^{k+1}=q^{k}\), shrink radius \(\eta^{(k+1)}=\beta_{3}\eta^{(k+1)}\).
25:endif
26:endif
27: In case of acceptance update \(\tilde{\delta}_{k+1}\), \(\alpha_{0}^{k+1}\) and set \(k=k+1\).
28:endwhile
```
**Algorithm 3** (Parameter & state space reduced TR-IRGNM: \(\mathscr{D}_{r}\)-\(V_{r}\)-IRGNM)
1. FOM-IRGNM: the standard FOM IRGNM (cf. Algorithm 1).
2. \(\mathscr{Q}_{r}\)-IRGNM: the parameter space reduced IRGNM (cf. Algorithm 2).
3. \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM: the combined parameter and state space reduced TR-IRGNM (cf. Algorithm 3).
**Remark 4.1**.: _Let us recall that \(\mathscr{Q}_{r}\)-IRGNM is not considered as a solution method on its own, but rather as a necessary step to obtain a parameter space reduction that enables us to build an efficient \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM._
For the simulations we used a Python implementation, using pyMOR ([16]) for the built-in full-order discretization and the model reduction part. The source code to reproduce the results is available at [14]. Our numerical findings were obtained on a MacBook Pro 2020 with a 2.3 GHz Quad-Core i7 and 16 GB RAM.
### Computational details
For the following model problems, we choose the computational domain \(\Omega=(0,1)^{2}\subset\mathbb{R}^{2}\). The FOM discretization of the spaces \(\mathscr{Q}\), \(\mathscr{H}\), \(H\), and \(V\) is realized by the space \(Q_{h}\) spanned by piecewise bilinear finite element (FE) basis functions on quadrilateral cells with \(N_{Q}=90,601\) dofs. The noisy data \(y^{\delta}\) is generated as follows: First, the exact parameter \(q^{\boldsymbol{\mathsf{e}}}\) is interpolated on a very fine mesh with \(361,201\) dofs, then the state equation for the exact parameter is solved to obtain the exact state \(u^{\boldsymbol{\mathsf{e}}}\), which is interpolated on the grid used for the computations. Thereafter, we add uniformly distributed noise \(\xi\in\mathscr{H}\setminus\{0\}\) with noise level \(\delta>0\) such that
\[y^{\delta}=\mathcal{C}u^{\boldsymbol{\mathsf{e}}}+\delta\,\frac{\xi}{\|\xi\|_ {\mathscr{H}}}.\]
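A minimal sketch of this noise model is given below (names are ours; the inner product of \(\mathscr{H}\) is assumed to be realized by a Gram matrix on the FE coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_data(Cu_exact, delta, H_gram):
    # y^delta = C u^e + delta * xi / ||xi||_H with uniform noise xi != 0
    xi = rng.uniform(-1.0, 1.0, size=Cu_exact.shape)
    norm_xi = np.sqrt(xi @ (H_gram @ xi))   # discrete H-norm of xi
    return Cu_exact + delta * xi / norm_xi
```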
The noise level was set to \(\delta=10^{-5}\) and the parameters for the inner and outer IRGNM are chosen as
\[\theta=0.4,\quad\Theta=0.9,\quad\tau=3.5,\quad\tilde{\tau}=1,\quad\tilde{ \delta}_{k}=\delta.\]
The TR parameters are chosen as:
\[\beta_{1}=0.95,\quad\beta_{2}=0.75,\quad\beta_{3}=0.5,\quad\eta^{(0)}=0.1, \quad\kappa_{\text{\tiny{arm}}}=10^{-12}.\]
In all numerical experiments, the inequality constraints in \(\mathscr{Q}_{\mathsf{ad}}\) are not taken explicitly into account and the optimization problems are considered as unconstrained optimization problems in \(\mathscr{C}\) (cf. Assumption 2.5), which was also done, e.g., in [14, 15, 16]. Moreover, we use a discretize-before-optimize (DBO) approach. In particular, we solve the linear-quadratic subproblems (\(\mathbf{IP}_{\alpha}^{k}\)) using the conjugate gradient (CG) algorithm. For the \(\mathscr{Q}_{r}\)-IRGNM algorithm we solve the subproblems inexactly by limiting the maximum number of inner IRGNM iterations to \(k_{\text{max}}^{\text{inner}}=2\) (cf. Algorithm 2) since this is enough to obtain a new snapshot for the reduced parameter space, while
it avoids unnecessary FOM CG iterations. In the following numerical experiments, we consider the two scenarios from Example 2.4 for a right-hand side \(f\equiv 1\), where we reconstruct a reaction/diffusion coefficient starting from a background \(q^{0}=q_{\circ}\equiv 3\in\mathscr{Q}_{\mathsf{ad}}\). The lower bound in the definition of \(\mathscr{Q}_{\mathsf{ad}}\) in (2.2) is given as \(q_{\mathsf{a}}\equiv 0.001\). In all experiments, the observation operator is the canonical embedding and we have \(\|\mathcal{C}\|_{\mathcal{L}(V,\mathscr{H})}=1\) for the error estimator \(\Delta_{\hat{J}}\) in Proposition 3.4.
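For illustration, the CG iteration for the Tikhonov-regularized subproblems has the following generic form; this is a minimal sketch, assuming the SPD action \(q\mapsto F^{\prime}(q^{k})^{*}F^{\prime}(q^{k})q\) is available as a callback and suppressing the \(\mathscr{Q}\)-Gram matrix for brevity (it is not the pyMOR implementation at [14]):

```python
import numpy as np

def cg(apply_A, b, alpha, tol=1e-10, maxiter=200):
    # solve (A + alpha*I) x = b by conjugate gradients
    x = np.zeros_like(b)
    r = b.copy()                 # residual for x = 0
    p = r.copy()
    rs = r @ r
    if np.sqrt(rs) < tol:
        return x
    for _ in range(maxiter):
        Ap = apply_A(p) + alpha * p
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```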
We compare the three proposed algorithms in terms of FOM PDE solves to measure the impact of the state space reduction and the number of FOM \(B_{u}\) (and \(B_{u}^{\prime}\)) applications, which are needed to compute the gradient in (2.9) and in each conjugate gradient iteration (see the optimality system (2.11)) to measure the impact of the parameter space reduction. Moreover, we compare the algorithms in terms of total computational time and the quality of the reconstruction.
**Remark 4.2** (The operator \(\mathcal{B}_{u}\)).: _Let \(u\in V\) be fixed. We discuss the application of the operators \(\mathcal{B}_{u}:\mathscr{Q}\to V^{\prime}\) and \(\mathcal{B}_{u}^{\prime}:V\to\mathscr{Q}^{\prime}\) for the two cases from Example 2.4._
1. _For the reaction problem we have for_ \(q\in\mathscr{Q}\) _and_ \(v\in V\)__ \[\langle\mathcal{B}_{u}q,v\rangle_{V^{\prime},V}=\int_{\Omega}quv\ d\mathbf{x}\] _and for_ \(p\in V\) _and_ \(v\in\mathscr{Q}\)__ \[\langle\mathcal{B}_{u}^{\prime}p,v\rangle_{\mathscr{Q}^{\prime},\mathscr{Q} }=\int_{\Omega}vup\ d\mathbf{x}.\] _Since we discretize the spaces_ \(\mathscr{Q}\) _and_ \(V\) _using the same FE space_ \(Q_{h}\)_, we obtain a discrete symmetric operator_ \(B_{u}\in\mathbb{R}^{N_{Q}\times N_{Q}}\)_, which is the weighted mass matrix with_ \[(B_{u})_{i,j}=\int_{\Omega}uv_{i}v_{j}\ d\mathbf{x}\quad\text{for }i,j=1,\ldots,N_{Q}.\]
2. _For the diffusion problem we have for_ \(q\in\mathscr{Q}\) _and_ \(v\in V\)__ \[\langle\mathcal{B}_{u}q,v\rangle_{V^{\prime},V}=\int_{\Omega}q\nabla u\cdot \nabla v\ d\mathbf{x}\] _and for_ \(p\in V\) _and_ \(v\in\mathscr{Q}\)__ \[\langle\mathcal{B}_{u}^{\prime}p,v\rangle_{\mathscr{Q}^{\prime},\mathscr{Q} }=\int_{\Omega}v\nabla u\cdot\nabla p\ d\mathbf{x}.\] _In this case the operator is no longer symmetric; it can be discretized by a matrix_ \(B_{u}\in\mathbb{R}^{N_{Q}\times N_{Q}}\) _such that_ \[(B_{u})_{i,j}=\int_{\Omega}v_{j}\nabla u\cdot\nabla v_{i}\ d\mathbf{x}\quad\text{ for }i,j=1,\ldots,N_{Q},\] _and the discrete actions of_ \(\mathcal{B}_{u}\) _and_ \(\mathcal{B}_{u}^{\prime}\) _can be realized on the FE level by multiplication with_ \(B_{u}\) _and its transpose_ \(B_{u}^{T}\)_, respectively._
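The experiments use pyMOR's bilinear quadrilateral elements in 2D; purely as an illustration of the structure of \(B_{u}\) in case (i), the following is a simplified 1D analogue (our own sketch, with P1 hat functions on a uniform grid and one-point midpoint quadrature):

```python
import numpy as np
from scipy.sparse import lil_matrix

def weighted_mass_matrix_1d(u_nodal, h):
    # (B_u)_{ij} = \int u v_i v_j dx, approximated element-wise by
    # h * u(midpoint) * v_i(midpoint) * v_j(midpoint); both hat
    # functions of an element equal 1/2 at its midpoint.
    n = len(u_nodal)
    B = lil_matrix((n, n))
    for e in range(n - 1):               # element [x_e, x_{e+1}]
        u_mid = 0.5 * (u_nodal[e] + u_nodal[e + 1])
        for a in range(2):
            for b in range(2):
                B[e + a, e + b] += h * u_mid * 0.25
    return B.tocsr()
```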
### Run 1: Reconstruction of the reaction coefficient
We consider the situation in Example 2.4 (i) of reconstructing the reaction coefficient \(q\in\mathscr{Q}=L^{2}(\Omega)\); cf. also [10]. Thus, we study the parametrized problem
\[\begin{array}{ll}-\Delta u(\mathbf{x})+q(\mathbf{x})u(\mathbf{x})=f(\mathbf{x})&\text{for all }\mathbf{x}\in\Omega,\\ u(\mathbf{x})=0&\text{for all }\mathbf{x}\in\partial\Omega.\end{array} \tag{4.1}\]
We choose the \(H^{1}_{0}(\Omega)\)-norm on \(V\). The coercivity constant of the corresponding bilinear form for a parameter \(q\) is explicitly given as \(\underline{\alpha}_{q}=1\) and the initial Tikhonov regularization for all methods was chosen as \(\alpha_{0}=1\). Moreover, we choose the same exact parameter \(q^{\mathsf{e}}\) (see Figure 1) as in [10], but shifted by the background \(q_{\circ}\), i.e.
\[q^{\mathsf{e}}=q_{\circ}+q_{1}^{\mathsf{e}}+q_{2}^{\mathsf{e}},\]
where the two Gaussian distributions \(q_{1}^{\mathsf{e}}\), \(q_{2}^{\mathsf{e}}\) are given for \(\mathbf{x}=(x_{1},x_{2})\in\Omega\) as

\[q_{1}^{\mathbf{\mathsf{e}}}(\mathbf{x}) =\frac{1}{2\pi\sigma^{2}}\exp\Bigg{(}-\frac{1}{2}\Bigg{(}\bigg{(} \frac{2x_{1}-0.5}{0.1}\bigg{)}^{2}+\bigg{(}\frac{2x_{2}-0.5}{0.1}\bigg{)}^{2} \Bigg{)}\Bigg{)},\] \[q_{2}^{\mathbf{\mathsf{e}}}(\mathbf{x}) =\frac{1}{2\pi\sigma^{2}}\exp\Bigg{(}-\frac{1}{2}\Bigg{(}\bigg{(} \frac{0.8x_{1}-0.5}{0.1}\bigg{)}^{2}+\bigg{(}\frac{0.8x_{2}-0.5}{0.1}\bigg{)}^{ 2}\Bigg{)}\Bigg{)}.\]

Figure 1: Run 1: The exact parameter \(q^{\mathsf{e}}\) and its three reconstructions \(q^{\mathrm{FOM}}\), \(q^{\mathscr{Q}_{r}}\) and \(q^{\mathscr{Q}_{r}\text{-}V_{r}}\).
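The exact parameter is straightforward to evaluate on a tensor grid; a minimal sketch follows (the value \(\sigma=0.1\) is an assumption on our part, since the excerpt does not state it explicitly):

```python
import numpy as np

sigma, q_background = 0.1, 3.0   # sigma = 0.1 is assumed, q_o = 3

def gauss(x1, x2, s1, s2):
    # Gaussian bump of the form used for q1^e (s1=s2=2) and q2^e (0.8)
    return (1.0 / (2 * np.pi * sigma**2)) * np.exp(
        -0.5 * (((s1 * x1 - 0.5) / 0.1) ** 2 + ((s2 * x2 - 0.5) / 0.1) ** 2))

xs = np.linspace(0.0, 1.0, 301)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
q_exact = q_background + gauss(X1, X2, 2.0, 2.0) + gauss(X1, X2, 0.8, 0.8)
```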
A comparison of the algorithms in terms of computation time, FOM PDE solves and FOM \(\mathcal{B}_{u}/\mathcal{B}_{u}^{\prime}\) applications, reduced basis sizes, and iterations needed to reach the discrepancy threshold \(\tau\delta\) is given in Table 1.
We observe the impact of the reduction of the parameter space by comparing the FOM \(\mathcal{B}_{u}/\mathcal{B}_{u}^{\prime}\) applications of the FOM-IRGNM algorithm with those of the methods using the parameter space reduction, which reduce the number of FOM \(\mathcal{B}_{u}/\mathcal{B}_{u}^{\prime}\) applications by roughly two orders of magnitude (842 versus 13 and 8 applications). Additionally, the impact of the state space reduction is clearly visible, since the number of FOM PDE solves is significantly reduced by the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM algorithm compared to the FOM-IRGNM algorithm. Also, from Table 1 we deduce that the certification of the reduced order model by the error estimator is the most expensive part of the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM algorithm, since 112 of the total 136 FOM solves are needed to certify the reduced order model in this case. The \(\mathscr{Q}_{r}\)-IRGNM algorithm is slower than the FOM method, while the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM algorithm gives a speed-up of about 3.5. This is also depicted in the left plot in Figure 2, where the discrepancy is plotted against the CPU time. We conclude that the parameter-reduced algorithms can reconstruct the parameter up to the discrepancy threshold with only a small reduced parameter basis of size \(n_{\mathscr{Q}}=9\) (\(n_{\mathscr{Q}}=14\), respectively). The gradient norm at the reconstructed parameters is of the order of \(10^{-9}\). The reconstructed parameters are depicted in Figure 1.
All methods capture the most important characteristics of \(q^{\mathbf{\mathsf{e}}}\), while the reconstruction of the FOM has the best quality. The relative errors are very small and given as
\[\frac{\|q^{\text{FOM}}-q^{\mathscr{Q}_{r}}\|_{\mathscr{Q}}}{\|q^{\text{FOM}} \|_{\mathscr{Q}}}\approx 2\,\%\quad\text{and}\quad\frac{\|q^{\text{FOM}}-q^{ \mathscr{Q}_{r}-V_{r}}\|_{\mathscr{Q}}}{\|q^{\text{FOM}}\|_{\mathscr{Q}}} \approx 5\,\%.\]
| Algorithm | time [s] | FOM solves | FOM \(\mathcal{B}_{u}/\mathcal{B}_{u}^{\prime}\) | \(n_{\mathscr{Q}}\) | \(n_{V}\) | outer iter. |
| --- | --- | --- | --- | --- | --- | --- |
| FOM-IRGNM | 51 | 888 | 842 | – | – | 22 |
| \(\mathscr{Q}_{r}\)-IRGNM | 67 | 1131 | 13 | 14 | – | 12 |
| \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM | 14 | 24 (+112) | 8 | 9 | 16 | 7 |

Table 1: Run 1: Comparison of the performance of all algorithms for the reconstruction of the reaction coefficient. For the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM algorithm, the number of FOM solves needed for the error estimator is given in brackets.
### Run 2: Reconstruction of the diffusion coefficient
We consider the situation in Example 2.4 (ii), that is, we consider the parametrized problem
\[-\nabla\cdot\big{(}q(\mathbf{x})\nabla u(\mathbf{x})\big{)} =f(\mathbf{x})\quad\text{for all }\mathbf{x}\in\Omega, \tag{4.2}\] \[u(\mathbf{x}) =0\qquad\quad\text{for all }\mathbf{x}\in\partial\Omega.\]
We choose the \(H^{1}_{0}(\Omega)\)-norm on \(V\), which results in a coercivity constant of \(\underline{\alpha}_{q}=\operatorname{ess}\inf_{\Omega}q\). We use the same model problem as in [1], where the exact parameter is given as \(q^{\mathsf{e}}=q_{\circ}+c_{\text{\tiny cont}}\chi_{\Omega_{1}}-2\chi_{\Omega_{2}}\) (see Figure 3) with
\[\Omega_{1} =[5/30,9/30]\times[3/30,27/30]\] \[\quad\cup\big{(}[9/30,27/30]\times\big{(}[3/30,7/30]\cup[23/30,27/3 0]\big{)}\big{)},\] \[\Omega_{2} =\big{\{}x\in\Omega\,|\,\|x-(18/30,15/30)^{T}\|<4/30\big{\}},\]
and contrast parameter \(c_{\text{\tiny cont}}=2\). In our numerical experiments, we observed that regularizing with the \(L^{2}(\Omega)\)-norm resulted in small artifacts of high amplitude on some cells of the mesh for the FOM-IRGNM algorithm. This also relates to the fact that, from a theoretical point of view, the canonical Hilbert space for \(\mathscr{Q}\) is \(H^{2}(\Omega)\) (see Example 2.4 (ii)), and the canonical Banach space is \(W^{1,4}(\Omega)\) (see [13]). Since we neither want to use \(H^{2}(\Omega)\) FE spaces nor work in a Banach space setting, we simply choose \(\mathscr{Q}=H^{1}(\Omega)\) equipped with its canonical norm (cf. Remark 2.8). The initial Tikhonov regularization for all methods was chosen as \(\alpha_{0}=10^{-3}\). The enrichment of the parameter space is done as follows. The discrete gradient of the regularized cost function \(\hat{J}_{\alpha}\) for \(\alpha>0\) is given as
\[\nabla\hat{J}_{\alpha}(q)=\alpha S(q-q_{\circ})+B_{u}^{T}p\in\mathbb{R}^{N_{\mathscr{Q }}}, \tag{4.3}\]
where \(S\in\mathbb{R}^{N_{\mathscr{Q}}\times N_{\mathscr{Q}}}\) is the discretized Neumann operator and \(B_{u}^{T}\in\mathbb{R}^{N_{\mathscr{Q}}\times N_{\mathscr{Q}}}\) the discretization of the operator \(\mathcal{B}_{u}^{\prime}\). To improve the quality of the reconstruction for the parameter-reduced methods, we perform a smoothing of the gradient by choosing the snapshot to enrich the reduced parameter space as

\[q=S^{-1}B_{u}^{T}p, \tag{4.4}\]

which corresponds to the optimize-then-discretize gradient \(J_{\mathscr{Q}}^{-1}\mathcal{B}_{u}^{\prime}p\) from (2.9) and the optimality condition (2.11). Note that (4.4) also comes from setting (4.3) equal to zero, solving for \(q\), and neglecting the scaling by \(\alpha\) and the term \(q_{\circ}\), which is already in the basis.

Figure 2: The residuals \(\|\mathcal{F}(q^{k})-y^{\delta}\|_{\mathscr{H}}\) per computation time needed for iteration \(k\) for Run 1 (left plot) and for Run 2 (right plot).
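In practice the smoothing step (4.4) amounts to a single sparse solve; a minimal sketch is given below, assuming \(S\) and \(B_{u}\) are available as SciPy sparse matrices (the names are ours, not the pyMOR API):

```python
from scipy.sparse.linalg import spsolve

def smoothed_snapshot(S, B_u, p):
    # smoothed gradient snapshot q = S^{-1} B_u^T p from (4.4),
    # where S is the H^1(Omega) Gram (Neumann) matrix and p the
    # dual solution at the current parameter
    return spsolve(S.tocsc(), B_u.T @ p)
```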
A comparison of the algorithms is given in Table 2 and in the right plot in Figure 2. The forward FOM solves are reduced by a factor of 4 and 60 for the \(\mathscr{Q}_{r}\)-IRGNM and the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM, respectively. Correspondingly, in our numerical experiments we observe a speedup in computation time of about 3 and 10 for the \(\mathscr{Q}_{r}\)-IRGNM and the \(\mathscr{Q}_{r}\)-\(V_{r}\)-IRGNM, respectively.
As in the first example, we observe in Table 2 that the most expensive part of the TRRB algorithm is the certification by the error estimator. The FOM gradients at the reconstructed parameter are of order \(10^{-9}\). Further, the parameter reduced methods obtain a low-dimensional representation of the optimal parameter by only using 29 basis functions. The reconstructed parameters are
depicted in Figure 3. The relative \(L^{2}(\Omega)\) error norms are given as
\[\frac{\|q^{\mathrm{FOM}}-q^{\mathscr{Q}_{r}}\|_{L^{2}(\Omega)}}{\|q^{\mathrm{FOM} }\|_{L^{2}(\Omega)}}\approx 2\,\%\quad\text{and}\quad\frac{\|q^{\mathrm{FOM}}-q^{ \mathscr{Q}_{r}\text{-}V_{r}}\|_{L^{2}(\Omega)}}{\|q^{\mathrm{FOM}}\|_{L^{2}(\Omega)}} \approx 2\,\%\]
and the relative \(H^{1}(\Omega)\) error norms are given as
\[\frac{\|q^{\mathrm{FOM}}-q^{\mathscr{Q}_{r}}\|_{\mathscr{Q}}}{\|q^{\mathrm{FOM }}\|_{\mathscr{Q}}}\approx 20\,\%\quad\text{and}\quad\frac{\|q^{\mathrm{FOM}}-q^{ \mathscr{Q}_{r}\text{-}V_{r}}\|_{\mathscr{Q}}}{\|q^{\mathrm{FOM}}\|_{\mathscr{Q}}} \approx 20\,\%.\]
## 5 Conclusion
We introduced a new adaptive parameter- and state-reduced IRGNM for the solution of parameter-identification problems. The reduced parameter space is enriched with gradients, which are available anyway if the reduced state space is enriched using both the primal and the dual solutions. The low dimensionality of the reduced parameter spaces allows us to efficiently build and certify a state-space RB model on the reduced parameter space. This makes it possible to handle high- or even infinite-dimensional parameter spaces within an RB approximation. Numerical experiments with parameter spaces of the size of the FE discretization show the efficiency of the proposed approach for inverse parameter-identification problems with distributed reaction or diffusion coefficients.
|
2309.08514 | On the rna number of powers of cycles | A signed graph $(G,\sigma)$ on $n$ vertices is called a \textit{parity signed
graph} if there is a bijective mapping $f \colon V(G) \rightarrow
\{1,\ldots,n\}$ such that $f(u)$ and $f(v)$ have same parity if $\sigma(uv)=1$,
and opposite parities if $\sigma(uv)=-1$ for each edge $uv$ in $G$. The
\emph{rna} number $\sigma^{-}(G)$ of $G$ is the least number of negative edges
among all possible parity signed graphs over $G$. In other words,
$\sigma^{-}(G)$ is the smallest size of an edge-cut of $G$ such that the sizes
of two sides differ at most one.
Let $C_n^{d}$ be the $d\text{th}$ power of a cycle of order $n$. Recently,
Acharya, Kureethara and Zaslavsky proved that the \emph{rna} number of a cycle
$C_n$ on $n$ vertices is $2$. In this paper, we show for $2 \leq d < \lfloor
\frac{n}{2} \rfloor$ that $2d \leq \sigma^{-}(C_n^{d}) \leq d(d+1)$. Moreover,
we prove that the graphs $C_n^{2}$ and $C_n^{3}$ achieve the upper bound of
$d(d+1)$. | Deepak Sehrawat, Anil Kumar, Sweta Ahlawat | 2023-09-15T16:19:33Z | http://arxiv.org/abs/2309.08514v1 | # On the _rna_ number of powers of cycles
# On the _rna_ number of powers of cycles
Deepak Sehrawat\({}^{*}\), Anil Kumar, Sweta Ahlawat
Department of Mathematics
Pandit Neki Ram Sharma Government College Rohtak
Rohtak, India - 124001
\({}^{*}\)Email: [email protected]
**Abstract.** A signed graph \((G,\sigma)\) on \(n\) vertices is called a _parity signed graph_ if there is a bijective mapping \(f\colon V(G)\to\{1,\ldots,n\}\) such that \(f(u)\) and \(f(v)\) have the same parity if \(\sigma(uv)=1\), and opposite parities if \(\sigma(uv)=-1\), for each edge \(uv\) in \(G\). The _rna_ number \(\sigma^{-}(G)\) of \(G\) is the least number of negative edges among all possible parity signed graphs over \(G\). In other words, \(\sigma^{-}(G)\) is the smallest size of an edge-cut of \(G\) such that the sizes of the two sides differ by at most one.
Let \(C_{n}^{d}\) be the \(d\)th power of a cycle of order \(n\). Recently, Acharya, Kureethara and Zaslavsky proved that the _rna_ number of a cycle \(C_{n}\) on \(n\) vertices is \(2\). In this paper, we show for \(2\leq d<\lfloor\frac{n}{2}\rfloor\) that \(2d\leq\sigma^{-}(C_{n}^{d})\leq d(d+1)\). Moreover, we prove that the graphs \(C_{n}^{2}\) and \(C_{n}^{3}\) achieve the upper bound of \(d(d+1)\).
**AMS subject classifications.** 05C22, 05C38, 05C40, 05C78
**Key words.** power of a cycle, parity labeling, parity signed graph, _rna_ number, edge-cut
## 1 Introduction
In this paper, all the graphs and signed graphs are considered to be simple, connected and undirected. For any graph theoretic term that is used but not defined in this paper, we refer the reader to [3]. If \(G\) is a graph, then we denote the set of vertices and the set of edges of \(G\) by \(V(G)\) and \(E(G)\), respectively, and the distance between vertices \(u\) and \(v\) in \(G\) by \(\operatorname{dist}_{G}(u,v)\).
A _signed graph_\((G,\sigma)\) consists of a graph \(G\) together with a function \(\sigma\colon E(G)\to\{1,-1\}\). In \((G,\sigma)\), graph \(G\) is called the _underlying graph_ of \((G,\sigma)\) and \(\sigma\) a _signature_ of \((G,\sigma)\). We say an edge \(e\) in \((G,\sigma)\) is _positive_ if \(\sigma(e)=1\), and _negative_ otherwise. By \((G,+)\), we denote a signed graph \((G,\sigma)\) such that \(\sigma(e)=1\) for each \(e\in E(G)\).
A special type of signed graph, known as a _parity signed graph_ (see Definition 1), was introduced by Acharya and Kureethara in [1]. Acharya _et al._[2] further characterized some families of parity signed graphs. Parity labeling of a graph is equivalent to a partition of the vertex set of a graph into two subsets \(A\) and \(B\) such that \(||A|-|B||\leq 1\). Such a vertex partition is known as a _Harary partition_.
The _rna_ number of a graph \(G\), denoted \(\sigma^{-}(G)\), is the least number of negative edges among all the possible parity signed graphs over \(G\). In other words, it is the least size of a cut whose sides are nearly equal. The concept of the _rna_ number of a parity signed graph was also introduced by Acharya and Kureethara [1]. Due to its definition, it is more reasonable to speak of the _rna_ number of a graph. The _rna_ numbers of some families of graphs such as stars, wheels, paths, cycles and complete graphs are computed in [1, 2]. Kang et al. [4] proved that \(\sigma^{-}(G)\leq\lfloor\frac{2m+n}{4}\rfloor\), where \(G\) is a graph on \(n\) vertices with \(m\) edges. Further, they characterized all the parity signed graphs achieving the bound \(\lfloor\frac{2m+n}{4}\rfloor\). They also solved some open problems concerning the _rna_ number. Very recently, Sehrawat and Bhattacharjya [6] studied the _rna_ number for the class of generalized Petersen graphs.
The _dth power_ of a simple graph \(G\) is the graph \(G^{d}\) whose vertex set is \(V(G)\) and two distinct vertices \(u,v\) are adjacent in \(G^{d}\) if and only if \(1\leq\operatorname{dist}_{G}(u,v)\leq d\). It is important to note that if \(C_{n}\) is a cycle on \(n\) vertices, then for \(d\geq\lfloor\frac{n}{2}\rfloor\) the graph \(C_{n}^{d}\) is the complete graph \(K_{n}\). This holds true because \(\operatorname{dist}_{C_{n}}(u,v)\leq\lfloor\frac{n}{2}\rfloor\) for any two vertices \(u\) and \(v\) in \(C_{n}\). It is known from [1, Proposition 2.8] that \(\sigma^{-}(K_{n})=\lfloor\frac{n}{2}\rfloor\lceil\frac{n}{2}\rceil\), so \(\sigma^{-}(C_{n}^{d})=\lfloor\frac{n}{2}\rfloor\lceil\frac{n}{2}\rceil\) for all \(d\geq\lfloor\frac{n}{2}\rfloor\). Also it is known from [2, Theorem 3.2] that \(\sigma^{-}(C_{n}^{1})=2\). Thus it remains to compute the _rna_ number of \(C_{n}^{d}\) for \(2\leq d<\lfloor\frac{n}{2}\rfloor\).
In this paper, we show for \(2\leq d<\lfloor\frac{n}{2}\rfloor\) that \(2d\leq\sigma^{-}(C_{n}^{d})\leq d(d+1)\) improving the upper bound of Kang et al. [4] for the graph \(C_{n}^{d}\). Moreover, we show that the _rna_ numbers of \(C_{n}^{2}\) and \(C_{n}^{3}\) are \(6\) and \(12\), respectively, achieving the upper bound of \(d(d+1)\). For the remaining values of \(d\), we make a conjecture that \(\sigma^{-}(C_{n}^{d})=d(d+1)\).
## 2 Preliminaries
We start this section with some necessary definitions.
**Definition 1**.: _[_2_]_ _For a given graph \(G\) of order \(n\) and a bijective mapping \(f\colon V(G)\to\{1,\ldots,n\}\), define \(\sigma_{f}\colon E(G)\to\{1,-1\}\) such that \(\sigma_{f}(uv)=1\) if \(f(u)\) and \(f(v)\) are of the same parity and \(\sigma_{f}(uv)=-1\) if \(f(u)\) and \(f(v)\) are of different parity, where \(uv\in E(G)\). We define \(\Sigma_{f}\) to be the signed graph \((G,\sigma_{f})\), which is called a _parity signed graph_._
A cycle in a signed graph is said to be _positive_ if it has an even number of negative edges, and _negative_, otherwise. A signed graph is said to be _balanced_ if all of its cycles are positive. Every parity signed graph is balanced, see [2, Theorem 1].
**Definition 2**.: _[_1_]_ The _rna_ number of a graph \(G\), denoted \(\sigma^{-}(G)\), is the least number of negative edges among all possible parity signed graphs over \(G\).
If \((G,\sigma_{f})\) is a parity signed graph, then the sets defined by \(V_{1}\coloneqq\{v\in V(G)\mid f(v)=\text{odd}\}\) and \(V_{2}\coloneqq\{v\in V(G)\mid f(v)=\text{even}\}\) are called _parity sets_. Such a bipartition \(\{V_{1},V_{2}\}\) of \(V(G)\) is said to be _parity-partition_ and an edge cut \((V_{1},V_{2})\) is called an _equicut_ of \(G\). Observe that finding the _rna_ number of a graph \(G\) is equivalent to finding the smallest equicut size.
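Since the _rna_ number is the least size of an equicut, it can be computed by brute force for small graphs. The following Python sketch (an illustration we add here; it is not part of any proof in this paper) enumerates all Harary partitions:

```python
from itertools import combinations

def rna_number(n, edges):
    # sigma^-(G): minimum equicut size over all Harary partitions of
    # the vertex set {0, ..., n-1}; edges is a list of vertex pairs
    best = len(edges)
    for X in combinations(range(n), n // 2):
        Xset = set(X)
        cut = sum(1 for u, v in edges if (u in Xset) != (v in Xset))
        best = min(best, cut)
    return best

cycle = [(i, (i + 1) % 6) for i in range(6)]
print(rna_number(6, cycle))   # prints 2, matching sigma^-(C_6) = 2
```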
Switching is a way of turning one signed graph into another signed graph. If \((G,\sigma)\) is a signed graph and \(v\) is a vertex in \((G,\sigma)\), then _switching_\(v\) reverses the signs of all edges incident to \(v\). Further, two signed graphs \((G,\sigma)\) and \((G,\sigma^{\prime})\) are _switching-equivalent_ if one can be obtained from the other by a sequence of switchings.
For the study of parity signed graphs, a different type of switching is defined by Kang _et al._[4]. Let \((G,\sigma)\) be a parity signed graph. A _parity-switching_ in \((G,\sigma)\) is a switching of a pair of vertices locating in different parity sets.
The following proposition tells us how we can obtain all parity signed graphs over any given graph \(G\).
**Proposition 1**.: _[_4_]_ _A signed graph \((G,\sigma)\) on \(n\) vertices is a parity signed graph if and only if it can be obtained from \((G,+)\) by switching a set of vertices of cardinality \(\lfloor\frac{n}{2}\rfloor\)._
Kang _et al._ also proved that parity-switching is the exact way to describe the transformation between any two parity signed graphs having the same underlying graph. More precisely:
**Proposition 2**.: _[_4_]_ _Let \((G,\sigma)\) be a parity signed graph. Then a signed graph \((G,\sigma^{\prime})\) is a parity signed graph if and only if it can be obtained from \((G,\sigma)\) by a sequence of parity-switchings._
The following result, due to Sehrawat and Bhattacharjya, gives a lower bound for the _rna_ number of a graph in terms of its edge-connectivity.
**Proposition 3**.: _[_6_]_ _If \(G\) is a graph with edge-connectivity \(\lambda\), then \(\sigma^{-}(G)\geq\lambda\)._
## 3 Our results
A simple but important result is the following.
**Lemma 1**.: _Let \(H\) be a spanning subgraph of a graph \(G\) with \(\sigma^{-}(H)=r\). Then \(\sigma^{-}(G)\geq r\)._
Proof.: The size of each equicut of \(G\) is at least the size of equicut of \(H\). Thus the result follows.
Let \(S=\{a_{1},\ldots,a_{k}\}\) be a set of positive integers such that \(a_{1}<a_{2}<\cdots<a_{k}<\frac{p+1}{2}\) and let the vertices of a graph of order \(p\) be labelled with \(u_{0},u_{1},\ldots,u_{p-1}\). Then the _circulant graph_\(C(p,S)\) has its
vertex set as \(\{u_{0},u_{1},\ldots,u_{p-1}\}\) and the vertices \(u_{i\pm a_{1}},u_{i\pm a_{2}},\ldots,u_{i\pm a_{k}}\) are adjacent to each vertex \(u_{i}\) (where the sums in the indices are taken modulo \(p\)). In this definition, the sequence \((a_{i})\) is called the _jump sequence_ and the \(a_{i}\) are called _jumps_. Further, if \(a_{k}\neq\frac{p}{2}\), then \(C(p,S)\) is a regular graph of degree \(2k\) and if \(a_{k}=\frac{p}{2}\), then \(C(p,S)\) is a regular graph of degree \(2k-1\).
From here onwards, we assume that the vertex set of the cycle \(C_{n}\) is \(\{u_{0},u_{1},\ldots,u_{n-1}\}\). Observe that for any \(d\) such that \(2\leq d<\lfloor\frac{n}{2}\rfloor\) the graph \(C_{n}^{d}\) is equivalent to the circulant graph \(C(n,\{1,2,\ldots,d\})\). It is well-known that every circulant graph is vertex-transitive. Thus for each \(2\leq d<\lfloor\frac{n}{2}\rfloor\), the graph \(C_{n}^{d}\) is also vertex-transitive. Mader [5] has shown that for a connected vertex-transitive graph \(G\), the edge-connectivity of \(G\) is the same as its minimum degree. We will use this fact to obtain a lower bound for the _rna_ number of \(C_{n}^{d}\).
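For illustration, the edge set of \(C_{n}^{d}=C(n,\{1,\ldots,d\})\) is easy to generate, and combined with the brute-force `rna_number` above it reproduces the values established in Theorems 3 and 4 below:

```python
def power_cycle_edges(n, d):
    # edge set of C_n^d = C(n, {1,...,d}); no duplicates arise since
    # d < n/2, so each undirected edge is listed exactly once
    return [(i, (i + j) % n) for i in range(n) for j in range(1, d + 1)]

print(rna_number(8, power_cycle_edges(8, 2)))    # 6  = 2*(2+1), cf. Theorem 3
print(rna_number(10, power_cycle_edges(10, 3)))  # 12 = 3*(3+1), cf. Theorem 4
```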
If \(X\) is a set of \(k+1\) consecutive vertices \(u_{0},u_{1},\ldots,u_{k}\), then the vertex \(u_{\frac{k}{2}}\) is called the _mid-vertex_ of \(X\) when \(k\) is even, and the vertices \(u_{\lfloor\frac{k}{2}\rfloor}\) and \(u_{\lfloor\frac{k}{2}\rfloor+1}\) are called the _mid-vertices_ of \(X\) when \(k\) is odd. If \(v\) is a vertex in \(G\) and \(X\) is a subset of vertices of \(G\) such that \(v\notin X\), then for simplicity, we denote the edge-cut \(E(\{v\},X)\) and its size \(|E(\{v\},X)|\) by \((v,X)\) and \(|(v,X)|\), respectively.
Now we are ready to prove our main result.
**Theorem 2**.: _If \(2\leq d<\lfloor\frac{n}{2}\rfloor\), then \(2d\leq\sigma^{-}(C_{n}^{d})\leq d(d+1)\)._
Proof.: Since \(C_{n}^{d}\) is a connected vertex-transitive graph with minimum degree \(2d\), the edge-connectivity of \(C_{n}^{d}\) is \(2d\). Thus the lower bound follows by Proposition 3.
For the rest of the proof, \(X\) denotes the set \(\{u_{0},u_{1},\ldots,u_{\lfloor\frac{n}{2}\rfloor-1}\}\), where \(n\geq 5\) and \(|X|=\lfloor\frac{n}{2}\rfloor\). We distinguish two cases depending upon \(\lfloor\frac{n}{2}\rfloor\) is odd or even.
_Case 1._ Assume that \(\lfloor\frac{n}{2}\rfloor=2k+1\) for some \(k\). Here \(X=\{u_{0},u_{1},\ldots,u_{2k}\}\) with \(|X|=2k+1\), and the mid-vertex of \(X\) is \(u_{k}\). It is clearly observed that the graph \(C_{n}^{d}\) is symmetric about the line passing through the vertex \(u_{k}\). Consequently, the cardinality of two sets \((u_{k-i},X^{c})\) and \((u_{k+i},X^{c})\) is same for each \(1\leq i\leq k\). Therefore we conclude that
\[|(X,X^{c})|=2\sum_{j=0}^{k-1}|(u_{j},X^{c})|+|(u_{k},X^{c})|. \tag{1}\]
Now we find that
\[|(u_{k},X^{c})|=\left\{\begin{array}{ll}0&\mbox{if $2\leq d\leq k$,}\\ 2\ell&\mbox{if $d=k+\ell$ and $\ell=1,\ldots,k$}\end{array}\right. \tag{2}\]
and for \(0\leq j\leq k-1\) that
\[|(u_{j},X^{c})|=\left\{\begin{array}{ll}d-j&\mbox{if $2\leq d\leq k$ and $j\leq d$},\\ 0&\mbox{if $2\leq d\leq k$ and $j>d$},\\ d-j&\mbox{if $d=k+\ell$, $\ell=1,\ldots,k$ and $j+d\leq 2k$},\\ 2\ell&\mbox{if $d=k+\ell$, $\ell=1,\ldots,k$ and $j+d\geq 2k+1$}.\end{array}\right. \tag{3}\]
If \(2\leq d\leq k\), then from Equation (1) we have
\[|(X,X^{c})| =2\left[\sum_{j=0}^{d}|(u_{j},X^{c})|+\sum_{j=d+1}^{k-1}|(u_{j},X ^{c})|\right]+0\] \[=2\left[\sum_{j=0}^{d}(d-j)+0\right]+0\] \[=d(d+1),\]
where the second equality uses Formulas (2) and (3).
If \(d=k+\ell\) and \(\ell=1,\ldots,k\), then from Equation (1) we have
\[|(X,X^{c})| =2\left[\sum_{j=0}^{2k-d}|(u_{j},X^{c})|+\sum_{j=2k-d+1}^{k-1}|(u_{ j},X^{c})|\right]+|(u_{k},X^{c})|\] \[=2\left[\sum_{j=0}^{2k-d}(d-j)+\sum_{j=2k-d+1}^{k-1}(2\ell)\right] +2\ell\] \[=2\left[d(2k-d+1)-\frac{(2k-d)(2k-d+1)}{2}\right]+(4d-4k)(d-k-1) +(2d-2k)\] \[=[4dk-2d^{2}+2d-4k^{2}+4kd-d^{2}-2k+d]+[4d^{2}-8kd+4k^{2}-4d+4k]+(2d-2k)\] \[=d(d+1),\]
where the second equality uses Formulas (2) and (3), and the third equality uses the fact that \(2\ell=2d-2k\) and that the second sum has \(\ell-1\) terms. Thus for all \(2\leq d\leq 2k\), we conclude that \(|(X,X^{c})|=d(d+1)\).
_Case 2._ Assume that \(\lfloor\frac{n}{2}\rfloor=2k\) for some \(k\). Here \(X=\{u_{0},u_{1},\ldots,u_{2k-1}\}\) with \(|X|=2k\), and the mid-vertices of \(X\) are \(u_{k-1}\) and \(u_{k}\). Further, the graph \(C_{n}^{d}\) is symmetric about the line passing through the mid-point of \(u_{k-1}\) and \(u_{k}\). Consequently, the cardinality of two sets \((u_{(k-1)-j},X^{c})\) and \((u_{k+j},X^{c})\) is same for each \(0\leq j\leq k-1\). Therefore we conclude that
\[|(X,X^{c})|=2\sum_{j=0}^{k-1}|(u_{j},X^{c})|. \tag{4}\]
Now we find for \(0\leq j\leq k-1\) that
\[|(u_{j},X^{c})|=\left\{\begin{array}{ll}0&\text{ if }2\leq d\leq k-1\text{ and }j \geq d\\ d-j&\text{ if }2\leq d\leq k-1\text{ and }j<d\\ d-j&\text{ if }d=(k-1)+\ell,\,\ell=1,\ldots,k\text{ and }j+d\leq 2k-1\\ 2\ell-1&\text{ if }d=(k-1)+\ell,\,\ell=1,\ldots,k\text{ and }j+d\geq 2k.\end{array}\right. \tag{5}\]
If \(2\leq d\leq k-1\), then from Equation (4) we have
\[|(X,X^{c})| =2\sum_{j=0}^{k-1}|(u_{j},X^{c})|\] \[=2\sum_{j=0}^{d-1}|(u_{j},X^{c})|+2\sum_{j=d}^{k-1}|(u_{j},X^{c})|\] \[=2\sum_{j=0}^{d-1}(d-j)+0+0\] \[=d(d+1).\]
If \(d=(k-1)+\ell\) and \(\ell=1,\ldots,k\), then from Equation (4) we have
\[|(X,X^{c})| =2\sum_{j=0}^{k-1}|(u_{j},X^{c})|\] \[=2\sum_{j=0}^{2k-d-1}|(u_{j},X^{c})|+2\sum_{j=2k-d}^{k-1}|(u_{j},X^ {c})|\] \[=2\sum_{j=0}^{2k-d-1}(d-j)+2\sum_{j=2k-d}^{k-1}(2\ell-1)\] \[=d(d+1).\]
Hence for all \(2\leq d\leq 2k-1\), we have \(|(X,X^{c})|=d(d+1)\).
In both cases, we have shown that the size of \((X,X^{c})\) is \(d(d+1)\) for \(X=\{u_{0},u_{1},\ldots,u_{\lfloor\frac{n}{2}\rfloor-1}\}\). Thus we get \(\sigma^{-}(C_{n}^{d})\leq d(d+1)\). This completes the proof.
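The cut size computed in this proof can also be confirmed numerically; the following check (reusing `power_cycle_edges` from the earlier sketch, again purely as an illustration) verifies \(|(X,X^{c})|=d(d+1)\) for \(X\) consisting of the first \(\lfloor\frac{n}{2}\rfloor\) vertices:

```python
def consecutive_cut_size(n, d):
    X = set(range(n // 2))   # X = {u_0, ..., u_{floor(n/2)-1}}
    return sum(1 for u, v in power_cycle_edges(n, d) if (u in X) != (v in X))

# holds for all 2 <= d < floor(n/2) in the tested range
assert all(consecutive_cut_size(n, d) == d * (d + 1)
           for n in range(9, 30) for d in range(2, n // 2))
```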
### The _rna_ number of \(C_{n}^{2}\)
**Theorem 3**.: _If \(n\geq 6\), then \(\sigma^{-}(C_{n}^{2})=6\)._
Proof.: We discuss the two cases depending upon the parity of \(n\).
_Case 1._ Assume that \(n\) is even. In this case, \(C_{n}^{2}\) is the edge-disjoint union of one \(n\)-cycle (formed with 1 jumps) and two \(\frac{n}{2}\)-cycles (formed with 2 jumps). We denote the \(n\)-cycle by \(C_{n}^{\prime}\coloneqq u_{0}u_{1}\ldots u_{n-1}u_{0}\) and two \(\frac{n}{2}\)-cycles by \(C_{\frac{n}{2}}^{\prime}\coloneqq u_{1}u_{3}\ldots u_{n-1}u_{1}\) and \(C_{\frac{n}{2}}^{\prime\prime}\coloneqq u_{0}u_{2}\ldots u_{n-2}u_{0}\). So for any \(X\subset V(C_{n}^{2})\) such that \(|X|=\frac{n}{2}\), we discuss the following two cases.
_Case 1.1:_ Let us consider \(X\) contain vertices of both \(C^{\prime}_{\frac{n}{2}}\) as well as \(C^{\prime\prime}_{\frac{n}{2}}\). Thus the cut \((X,X^{c})\) must contain at least two edges of each of the cycles \(C^{\prime}_{\frac{n}{2}}\), \(C^{\prime\prime}_{\frac{n}{2}}\), and \(C^{\prime}_{n}\). Therefore, \(|(X,X^{c})|\geq 6\).
_Case 1.2:_ Let us consider \(X\) contain all the vertices of exactly one of \(C^{\prime}_{\frac{n}{2}}\) and \(C^{\prime\prime}_{\frac{n}{2}}\). Thus \((X,X^{c})=E(C_{n})\). Consequently, \(|(X,X^{c})|=n\geq 6\).
_Case 2:_ Assume that \(n\) is odd. In this case, \(C^{2}_{n}\) is the edge-disjoint union of two \(n\)-cycles (one formed with 1 jumps while the other is formed with 2 jumps). We denote these two \(n\)-cycles by \(C^{\prime}_{n}\) and \(C^{\prime\prime}_{n}\), where \(C^{\prime}_{n}\) and \(C^{\prime\prime}_{n}\) are formed with 1 jumps and 2 jumps, respectively. That is, \(C^{\prime}_{n}=u_{0}u_{1}\ldots u_{n-1}u_{0}\) and \(C^{\prime\prime}_{n}=u_{0}u_{2}\ldots u_{n-1}u_{1}u_{3}\ldots u_{n-2}u_{0}\).
Let \(X\) be a subset of \(V(C^{2}_{n})\) such that \(|X|=\lfloor\frac{n}{2}\rfloor\). Clearly, \((X,X^{c})\) must contain at least two edges of each of \(C^{\prime}_{n}\) and \(C^{\prime\prime}_{n}\). Further, if \((X,X^{c})\) contains exactly two edges of \(C^{\prime}_{n}\), then \(X\) must be of the form \(\{u_{j},u_{j+1},\ldots,u_{j+\lfloor\frac{n}{2}\rfloor-1}\}\) for some \(j\in\{0,1,\ldots,n-1\}\). Consequently, \(X^{c}=V(C_{n})-X\). Thus the following edges:
\[u_{j-2}u_{j},u_{j-1}u_{j+1},u_{j+\lfloor\frac{n}{2}\rfloor-2}u_{j+\lfloor\frac {n}{2}\rfloor},u_{j+\lfloor\frac{n}{2}\rfloor-1}u_{j+\lfloor\frac{n}{2} \rfloor+1}\]
of \(C^{\prime\prime}_{n}\), along with two edges of \(C^{\prime}_{n}\), have to belong to \((X,X^{c})\). Hence \(|(X,X^{c})|\geq 6\).
On the other hand, if \((X,X^{c})\) contains more than two edges of \(C^{\prime}_{n}\), then, since any edge-cut of a cycle has even size, \((X,X^{c})\) has to contain at least 4 edges of \(C^{\prime}_{n}\), and hence \(|(X,X^{c})|\geq 6\).
Thus from Cases 1 and 2, we conclude that any equicut of \(C^{2}_{n}\) has to be of size at least 6. Therefore, \(\sigma^{-}(C^{2}_{n})\geq 6\). Now consider \(X=\{u_{0},\ldots,u_{\lfloor\frac{n}{2}\rfloor-1}\}\). So we have
\[(X,X^{c})=\{u_{n-1}u_{0},u_{n-1}u_{1},u_{n-2}u_{0},u_{\lfloor\frac{n}{2} \rfloor-2}u_{\lfloor\frac{n}{2}\rfloor},u_{\lfloor\frac{n}{2}\rfloor-1}u_{ \lfloor\frac{n}{2}\rfloor},u_{\lfloor\frac{n}{2}\rfloor-1}u_{\lfloor\frac{n}{ 2}\rfloor+1}\}.\]
It is clear that \((X,X^{c})\) is an equicut of \(C^{2}_{n}\) and that \(|(X,X^{c})|=6\). Hence \(\sigma^{-}(C^{2}_{n})=6\), as desired.
### The _rna_ number of \(C^{3}_{n}\)
**Theorem 4**.: _If \(n\geq 8\), then \(\sigma^{-}(C^{3}_{n})=12\)._
Proof.: Let \(X\) be a subset of \(V(C^{3}_{n})\) such that \(|X|=\lfloor\frac{n}{2}\rfloor\). The number of cycles of different lengths (formed with 2 jumps and 3 jumps) in \(C^{3}_{n}\) will depend upon the values of \(n\). So, we discuss the following cases.
_Case 1:_ Let \(n\) be an even and a multiple of 3. In this case, \(n\geq 12\) and \(C^{3}_{n}\) is edge-disjoint union of one \(n\)-cycle (formed with 1 jumps), two \(\frac{n}{2}\)-cycles (formed with 2 jumps) and three \(\frac{n}{3}\)-cycles (formed with 3 jumps). We denote these \(\frac{n}{2}\)-cycles by \(C^{1}_{\frac{n}{2}}\) and \(C^{2}_{\frac{n}{2}}\), and \(\frac{n}{3}\)-cycles by \(C^{1}_{\frac{n}{3}}\), \(C^{2}_{\frac{n}{3}}\) and \(C^{3}_{\frac{n}{3}}\). It is important to note that for any \(i\in\{1,2,3\}\), each vertex of \(V(C_{n})\setminus V(C^{i}_{\frac{n}{3}})\) has exactly two neighbours in \(V(C^{i}_{\frac{n}{3}})\). So if \(V(C^{i}_{\frac{n}{3}})\subset X\), then each vertex of \(X^{c}\) contributes at least two edges in \((X,X^{c})\). Hence \(|(X,X^{c})|\geq 2\times\lceil\frac{n}{2}\rceil\geq n\). Thus \(|(X,X^{c})|\geq 12\).
If \(X=V(C^{i}_{\frac{n}{2}})\) for any \(i\in\{1,2\}\), then \(E(C_{n})\subseteq(X,X^{c})\). Thus \(|(X,X^{c})|\geq n\geq 12\).
Next, if \(X\) contains vertices of \(C^{i}_{\frac{n}{2}}\), and \(C^{j}_{\frac{n}{3}}\), for each \(i\in\{1,2\}\) and \(j\in\{1,2,3\}\), then \(X\) has to contain vertices of all 6 cycles. Consequently, \(|(X,X^{c})|\geq 12\).
_Case 2:_ Let \(n\) be an even and not a multiple of 3. That is, \(n=2k\) for some \(k\geq 4\) and \(k\) is not divisible by 3. In this case, \(n\geq 8\). For any \(X\subset V(C_{n})\) such that \(|X|=k\) there are only two choices: (i) \(X\) contains all the vertices of exactly one of the cycles (formed with 2 jumps) of length \(k\), or (ii) \(X\) contains vertices of both the cycles of length \(k\). We discuss these two cases separately.
_Case 2.1:_ Without loss of generality, we assume that \(X=\{u_{1},u_{3},\ldots,u_{2k-1}\}\). That is, \(X\) contains all the vertices of exactly one of the cycles of length \(k\). Note that
\[(X,X^{c})=\{u_{i}u_{i+1},u_{i}u_{i+3}\ |\ i\in\{0,1,\ldots,2k-1\}\}.\]
Consequently, \(|(X,X^{c})|=2n\geq 16\) because \(n\geq 8\).
_Case 2.2:_ If \(X\) contains vertices of both the cycles of length \(k\), then \((X,X^{c})\) contains at least 8 edges. This holds true because in this case, there are total four cycles in \(C^{3}_{n}\) and vertices of each of these four cycles belong to both \(X\) and \(X^{c}\).
Now it is important to observe that any cut \((X,X^{c})\) of a cycle on \(n\) vertices with \(|(X,X^{c})|=2\) must correspond to a vertex set \(X=\{u_{i},u_{i+1},\ldots,u_{i+\lfloor\frac{n}{2}\rfloor-1}\}\) for some \(i\in\{0,1,\ldots,n-1\}\). Therefore, if \((X,X^{c})\) contains exactly two edges of an \(n\)-cycle (formed with 1 jumps), then \(X\) must equal \(\{u_{i},u_{i+1},\ldots,u_{i+\lfloor\frac{n}{2}\rfloor-1}\}\). Hence \((X,X^{c})\) has to contain the edges \(u_{i-2}u_{i},u_{i-1}u_{i+1},u_{i+\lfloor\frac{n}{2}\rfloor-1}u_{i+\lfloor\frac{n}{2}\rfloor+1}\), \(u_{i+\lfloor\frac{n}{2}\rfloor-2}u_{i+\lfloor\frac{n}{2}\rfloor},u_{i-3}u_{i}\), \(u_{i-2}u_{i+1},u_{i-1}u_{i+2},u_{i+\lfloor\frac{n}{2}\rfloor-3}u_{i+\lfloor \frac{n}{2}\rfloor},u_{i+\lfloor\frac{n}{2}\rfloor-2}u_{i+\lfloor\frac{n}{2}\rfloor+1},u_{i+\lfloor\frac{n}{2}\rfloor-1}u_{i+\lfloor\frac{n}{2}\rfloor+2}\) along with those two edges of the \(n\)-cycle. Thus, \(|(X,X^{c})|\geq 12\).
Similarly, if \((X,X^{c})\) contains exactly two edges of an \(n\)-cycle (formed with 3 jumps), then \(|(X,X^{c})|\geq 12\). Thus, we conclude that if \((X,X^{c})\) contains exactly two edges of any one of \(n\)-cycles (formed with 1 jumps or 3 jumps), then \(|(X,X^{c})|\geq 12\).
On the other hand, if \((X,X^{c})\) contains at least 4 edges of both \(n\)-cycles, then \(|(X,X^{c})|\geq 12\) because both \(k\)-cycles contribute at least 4 edges in \((X,X^{c})\).
_Case 3:_ Let \(n\) be odd and divisible by 3. That is, \(n=2k+1\) for some \(k\geq 4\) and \(n\) is divisible by 3. Here \(C^{3}_{n}\) has 5 cycles in total. Out of these 5 cycles, two are \(n\)-cycles (formed with 1 jumps and 2 jumps) and three are \(\frac{n}{3}\)-cycles. Let \(X\) be any subset of \(V(C_{n})\) such that \(|X|=k\). As discussed in Case 1, if \(X\) contains all the vertices of any one of the \(\frac{n}{3}\)-cycles, then \(|(X,X^{c})|\geq 12\).
On the other hand, if \(X\) contains vertices of each of 5 cycles, then it is clear that \(|(X,X^{c})|\geq 10\). Moreover this equality holds only if \((X,X^{c})\) contains exactly two edges of each of these 5 cycles. Now we distinguish the following cases.
_Case 3.1:_ If \((X,X^{c})\) contains exactly two edges of the cycle \(u_{0}u_{1}\ldots u_{n-1}u_{0}\), then \(X\) must equals \(\{u_{i},u_{i+1},\ldots,u_{i+k-1}\}\) for some \(i\in\{0,1,\ldots,n-1\}\). Consequently, \((X,X^{c})=\{u_{i-1}u_{i},u_{i-1}u_{i+1},u_{i-1}u_{i+2},\)
\(u_{i-2}u_{i},u_{i-2}u_{i+1},u_{i-3}u_{i},u_{i+k-3}u_{i+k},u_{i+k-2}u_{i+k},u_{i+k-2}u_{ i+k+1},u_{i+k-1}u_{i+k},u_{i+k-1}u_{i+k+1},u_{i+k-1}u_{i+k+2}\}\). Hence \(|(X,X^{c})|=12\).
_Case 3.2:_ If \((X,X^{c})\) contains at least four edges of the cycle \(u_{0}u_{1}\ldots u_{n-1}u_{0}\), then \(|(X,X^{c})|\geq 12\). Therefore, we conclude that \(|(X,X^{c})|\geq 12\).
_Case 4:_ Let \(n\) be odd and not divisible by \(3\). Note that if \(n=2k+1\) is odd and not divisible by \(3\), then \(n\) must be of the form \(3\ell-1\) or \(3\ell-2\), according as \(\ell\) is even or odd, respectively.
In this case, \(C_{n}^{3}\) has \(3\)\(n\)-cycles in total. We denote these cycles by \(C_{n1}\), \(C_{n2}\), and \(C_{n3}\), where \(C_{ni}\) is formed with \(i\) jumps for \(i=1,2,3\). Clearly for any \(X\subset V(C_{n})\) with \(|X|=\lfloor\frac{n}{2}\rfloor\), the cut \((X,X^{c})\) contains at least \(2\) edges of each of \(C_{ni}\) for \(i=1,2,3\). Thus \(|(X,X^{c})|\geq 6\). But we claim that for each \(X\subset V(C_{n})\) with \(|X|=\lfloor\frac{n}{2}\rfloor\), the cut \((X,X^{c})\) contains at least \(4\) edges of each of \(C_{ni}\) for \(i=1,2,3\). This would prove that \(|(X,X^{c})|\geq 12\). To justify this claim, we discuss the following cases.
_Case 4.1:_ If \((X,X^{c})\) contains exactly \(2\) edges of \(C_{n1}\), then as discussed in Case 3.1, we have \(|(X,X^{c})|=12\), as desired.
_Case 4.2:_ If \((X,X^{c})\) contains exactly \(2\) edges of \(C_{n2}\), then \(X\) must equals \(\{u_{i},u_{i+2},\ldots,u_{i+2k-2}\}\) for some \(i\in\{0,1,\ldots,n-1\}\). So we have \((X,X^{c})=E(C_{n1})\cup\{u_{i-2}u_{i},u_{i+2k-2}u_{i+2k}\}\cup S-\{u_{i+(2k-1)} u_{i+2k}\}\), where \(S\subset E(C_{n3})\) and \(|S|\geq 0\). Thus \(|(X,X^{c})|\geq n+2-1=n+1\geq 12\), because \(n\geq 11\).
_Case 4.3:_ Let \((X,X^{c})\) contains exactly \(2\) edges of \(C_{n3}\). If \(n=3\ell-1\) and \(\ell\geq 4\) is even, then
\[X=\{u_{i},u_{i+3},\ldots,u_{i+3(\ell-1)},u_{i+1},\ldots,u_{i+3(\frac{\ell}{2}-1 )-2}\}\]
so that \(|X|=\ell+\frac{\ell}{2}-1=\frac{3\ell}{2}-1\). Consider a set
\[A=\begin{cases}\{u_{i+2},u_{i+5},u_{i+7},u_{i+8},u_{i+10}\}\ \ \text{if}\ \ell=4\\ \{u_{i+2},u_{i+5},u_{i+8},u_{i+3\ell-4},u_{i+3\ell-2}\}\ \ \text{if}\ \ell\geq 6\ \text{is even}.\end{cases}\]
Observe that \(A\subset X^{c}\). Hence, for \(\ell=4\), the edges \(u_{i}u_{i+2},u_{i}u_{i+8},u_{i}u_{i+10},u_{i+3}u_{i+5},u_{i+3}u_{i+2},u_{i+6}u_ {i+5}\), \(u_{i+6}u_{i+7},u_{i+6}u_{i+8},u_{i+9}u_{i+8},u_{i+9}u_{i+7},u_{i+9}u_{i+10},u_{i +1}u_{i+2},u_{i+1}u_{i+10}\) must belong to the cut \((X,X^{c})\). Thus \(|(X,X^{c})|>12\). For \(\ell\geq 6\), the edges \(u_{i}u_{i+2},u_{i}u_{i+3\ell-2},u_{i}u_{i+3\ell-4},u_{i+3}u_{i+2},u_{i+3}u_{i+5}, u_{i+6}u_{i+5}\), \(u_{i+6}u_{i+8},u_{i+9}u_{i+8},u_{i+3\ell-3}u_{i+3\ell-4},u_{i+3\ell-3}u_{i+3\ell-2}, u_{i+1}u_{i+2},u_{i+4}u_{i+2},u_{i+4}u_{i+5}\) must belong to \((X,X^{c})\). Thus \(|(X,X^{c})|>12\).
Now if \(n=3\ell-2\) and \(\ell\geq 5\) is odd, then
\[X=\{u_{i},u_{i+3},\ldots,u_{i+3(\ell-1)},u_{i+2},\ldots,u_{i+3(\lfloor\frac{\ell }{2}\rfloor-1)-1}\}\]
so that \(|X|=\ell+\lfloor\frac{\ell}{2}\rfloor-1=\lfloor\frac{3\ell}{2}\rfloor-1\). Here \(X^{c}\) must contain the vertices \(u_{i+1},u_{i+4},u_{i+7},u_{i+3\ell-5}\), and \(u_{i+3\ell-4}\). So the edges \(u_{i}u_{i+1},u_{i}u_{i+3\ell-4}\), \(u_{i+3}u_{i+1}\), \(u_{i+3}u_{i+4}\), \(u_{i+6}u_{i+4}\), \(u_{i+6}u_{i+7}\), \(u_{i+9}u_{i+7}\), \(u_{i+3\ell-6}u_{i+3\ell-5}\), \(u_{i+3\ell-6}u_{i+3\ell-4},u_{i+3\ell-3}u_{i+3\ell-5},u_{i+3\ell-3}u_{i+3\ell-4}, u_{i+3\ell-3}u_{i+1}\) must belong to \((X,X^{c})\). Thus \(|(X,X^{c})|>12\).
From all the above four cases, we conclude that for any \(X\subset V(C_{n}^{3})\) with \(|X|=\lfloor\frac{n}{2}\rfloor\), we have \(|(X,X^{c})|\geq 12\). Further, if \(X=\{u_{0},u_{1},\ldots,u_{\lfloor\frac{n}{2}\rfloor-1}\}\), then \((X,X^{c})=\{u_{n-1}u_{0},u_{n-1}u_{1},u_{n-1}u_{2},u_{n-2}u_{0},\)\(u_{n-2}u_{1},u_{n-3}u_{0},u_{\lfloor\frac{n}{2}\rfloor-3}u_{\lfloor\frac{n}{2} \rfloor},u_{\lfloor\frac{n}{2}\rfloor-2}u_{\lfloor\frac{n}{2}\rfloor+1},u_{ \lfloor\frac{n}{2}\rfloor-1}u_{\lfloor\frac{n}{2}\rfloor-1}u_{\lfloor\frac{n }{2}\rfloor-1}u_{\lfloor\frac{n}{2}\rfloor+1},u_{\lfloor\frac{n}{2}\rfloor-1 }u_{\lfloor\frac{n}{2}\rfloor+2}\}\). So we have \(|(X,X^{c})|=12\). Consequently, \(\sigma^{-}(C_{n}^{3})=12\). This completes the proof.
In this paper, we obtained an upper bound of \(d(d+1)\) for the _rna_ number of \(C_{n}^{d}\) for \(2\leq d<\lfloor\frac{n}{2}\rfloor\) and proved that the _rna_ numbers of \(C_{n}^{2}\) and \(C_{n}^{3}\) are \(6\) and \(12\), respectively. We were not able to calculate the _rna_ number of \(C_{n}^{d}\) for \(4\leq d<\lfloor\frac{n}{2}\rfloor\). So for these values of \(d\) we make the following conjecture.
**Conjecture 1**.: _If \(4\leq d<\lfloor\frac{n}{2}\rfloor\), then \(\sigma^{-}(C_{n}^{d})=d(d+1).\)_
|
2309.03441 | $G$-kernels of Kirchberg algebras | A $G$-kernel is a group homomorphism from a group $G$ to the outer
automorphism group of a C$^*$-algebra. Inspired by recent work of Evington and
Gir\'{o}n Pacheco in the stably finite case, we introduce a new invariant of a
$G$-kernel using $K$-theory, and deduce several new constraints of the
obstruction classes of $G$-kernels in the purely infinite case. We classify
$\mathbb{Z}^n$-kernels for strongly self-absorbing Kirchberg algebras in the
bootstrap category in terms of our new invariant and the Dadarlat-Pennig theory
of continuous fields of strongly self-absorbing C$^*$-algebras. | Masaki Izumi | 2023-09-07T01:50:01Z | http://arxiv.org/abs/2309.03441v3 | # \(G\)-kernels of Kirchberg algebras
###### Abstract
A \(G\)-kernel is a group homomorphism from a group \(G\) to the outer automorphism group of a C\({}^{*}\)-algebra. Inspired by recent work of Evington and Giron Pacheco in the stably finite case, we introduce a new invariant of a \(G\)-kernel using \(K\)-theory, and deduce several new constraints of the obstruction classes of \(G\)-kernels in the purely infinite case. We classify \(\mathbb{Z}^{n}\)-kernels for strongly self-absorbing Kirchberg algebras in the bootstrap category in terms of our new invariant and the Dadarlat-Pennig theory of continuous fields of strongly self-absorbing C\({}^{*}\)-algebras.
In memory of Eberhard Kirchberg
## 1 Introduction
A \(G\)-kernel is a group homomorphism from a group \(G\) into the outer automorphism group \(\mathrm{Out}(A)\) of an operator algebra \(A\). With a \(G\)-kernel \(\alpha:G\to\mathrm{Out}(A)\), we can associate the third cohomology obstruction \(\mathrm{ob}(\alpha)\in H^{3}(G,U(Z(A)))\) for \(\alpha\) to lift to a cocycle \(G\)-action on \(A\) (see [42]), which is the most significant invariant for \(G\)-kernels. Indeed, in the case of a countable discrete amenable group \(G\) and the hyperfinite II\({}_{1}\) factor \(A=\mathcal{R}\), this is known to be a complete invariant up to conjugacy due to Connes [4], [5], Jones [24], and Ocneanu [36] (see also [25], [26], [27], and [30] for the case of the other injective factors). Finite group \(G\)-kernels and their obstruction classes also play important roles in the conformal field theory models (see [9] for example).
Working on group actions on operator algebras, for a long time the present author expected an interesting interplay between K-theory and the third cohomology obstruction arising from \(G\)-kernels of C\({}^{*}\)-algebras. The first result in this direction
was obtained only recently by Evington and Giron Pacheco [10], where they showed that \(\mathrm{ob}(\alpha)\) is always trivial for the Jiang-Su algebra \(A=\mathcal{Z}\), and that a strong K-theoretical restriction occurs if \(G\) is finite and \(A\) is a UHF algebra. Their main technical tool is the de la Harpe-Skandalis determinant, algebraic K-theory in other words, which works only for stably finite C\({}^{*}\)-algebras. Therefore it is desirable to introduce an alternative invariant using only topological K-theory, which is applicable to a wider class of C\({}^{*}\)-algebras. One of the purposes of this paper is to accomplish this task.
For a unital simple C\({}^{*}\)-algebra \(A\) whose unitary group \(U(A)\) is connected, we introduce a new invariant \(\widetilde{\mathrm{ob}}(\alpha)\in H^{3}(G,K_{0}^{\#}(A))\) of a \(G\)-kernel \(\alpha\), where \(K_{0}^{\#}(A)\) is an extension of \(\mathbb{T}\) by \(K_{0}(A)\) for a large class of C\({}^{*}\)-algebras (see Definition 3.3). The usual obstruction \(\mathrm{ob}(\alpha)\) is the image of \(\widetilde{\mathrm{ob}}(\alpha)\) under the map induced by the surjection \(K_{0}^{\#}(A)\to\mathbb{T}\), and hence \(\widetilde{\mathrm{ob}}(\alpha)\) carries more information than \(\mathrm{ob}(\alpha)\). As we expected, our new invariant \(\widetilde{\mathrm{ob}}(\alpha)\) gives significantly strong restrictions on \(\mathrm{ob}(\alpha)\) in the purely infinite case. For example, we can show by using \(\widetilde{\mathrm{ob}}(\alpha)\) that \(\mathrm{ob}(\alpha)\) is trivial for the following two cases: (i) \(G=\mathbb{Z}_{2}\) and the odd Cuntz algebras \(A=\mathcal{O}_{2n+1}\) (Theorem 3.4), and (ii) any finite \(G\) and the infinite Cuntz algebra \(A=\mathcal{O}_{\infty}\) (Theorem 3.6). In particular, we see that \(\mathcal{Z}\) and \(\mathcal{O}_{\infty}\) behave in the same way as far as finite group \(G\)-kernels are concerned, which adds a new example to the many common features shared by \(\mathcal{Z}\) and \(\mathcal{O}_{\infty}\). However, when \(G\) is infinite, e.g. \(G=\mathbb{Z}^{n}\) with \(n\geq 3\), the situation is completely different and \(\mathcal{O}_{\infty}\) may have non-trivial \(\mathrm{ob}(\alpha)\).
The recent striking work of Gabe and Szabo [11] on the dynamical Kirchberg-Phillips theorem shows that the theory of group actions on Kirchberg algebras is sufficiently matured by now, and we have enough tools to start the classification of \(G\)-kernels too. Their result together with Meyer's work [33] solved a conjecture raised by the author [17],[20], which allows us to use continuous fields of C\({}^{*}\)-algebras to study group actions on them. This typically works very well for (stabilized) strongly self-absorbing Kirchberg algebras thanks to Dadarlat-Pennig's generalized Dixmier-Douady theory [7],[6]. Therefore it is reasonable to start the classification of \(G\)-kernels with strongly self-absorbing Kirchberg algebras, which is another purpose of this paper. In Theorem 4.14, we classify \(\mathbb{Z}^{n}\)-kernels for the strongly self-absorbing Kirchberg algebras in the bootstrap category, in terms of our new invariant and the Dadarlat-Pennig theory.
Looking back on my fledgling period, I really feel that it was fortunate for me to have attended Kirchberg's famous Geneva talk in the summer of 1994, and his serial lectures at the Fields Institute in the subsequent winter (his famous forever preprint [28] preserves the atmosphere of that time). Without these precious experiences, I would never have worked on C\({}^{*}\)-algebras seriously.
The author would like to thank Sergio Giron Pacheco, Ulrich Pennig, Hiroki Matui, and Yuhei Suzuki for useful discussions. He would like to thank Isaac Newton Institute for its hospitality.
Preliminaries
Throughout this paper, we assume that \(A\) is a simple unital \(\mathrm{C}^{*}\)-algebra. We denote by \(U(A)\) the unitary group of \(A\), and by \(\mathrm{Aut}(A)\) the automorphism group of \(A\). For \(u\in U(A)\), we denote by \(\mathrm{Ad}\,u\) the automorphism of \(A\) defined by \(\mathrm{Ad}\,u(x)=uxu^{*}\) for \(x\in A\). An automorphism of \(A\) is called inner if it is of the form \(\mathrm{Ad}\,u\). We denote by \(\mathrm{Inn}(A)\) the set of inner automorphisms, which is a normal subgroup of \(\mathrm{Aut}(A)\). The outer automorphism group \(\mathrm{Out}(A)\) of \(A\) is defined by the quotient group \(\mathrm{Aut}(A)/\mathrm{Inn}(A)\).
By a trace of a \(\mathrm{C}^{*}\)-algebra, we always mean a tracial state. We denote \(\widetilde{K}_{0}(A)=K_{0}(A)/\langle[1]_{0}\rangle\), where \([1]_{0}\) is the \(K_{0}\)-class of \(1_{A}\). We denote by \(\tau_{*}\) the homomorphism from \(K_{0}(A)\) to \(\mathbb{R}\) induced by a trace \(\tau\).
We denote by \(\mathbb{K}\) the set of compact operators on a separable infinite dimensional Hilbert space, and by \(A^{s}\) the stabilization \(A\otimes\mathbb{K}\) of \(A\). For an integer \(n\geq 2\), we denote by \(M_{n^{\infty}}\) the UHF algebra of type \(n^{\infty}\). For a set \(\mathfrak{P}\) of prime numbers, we denote
\[M_{\mathfrak{P}^{\infty}}=\bigotimes_{p\in\mathfrak{P}}M_{p^{\infty}},\]
with understanding \(M_{\emptyset}=\mathbb{C}\). If \(\mathfrak{P}\) is the set of all prime numbers, we denote \(M_{\mathbb{Q}}=M_{\mathfrak{P}^{\infty}}\). We denote by \(\mathcal{O}_{n}\) the Cuntz algebra, and by \(\mathcal{Z}\) the Jiang-Su algebra.
We always assume that \(G\) is a countable (or finite) discrete group. For a subgroup \(H\) of \(G\), the quotient map \(G\to G/H\) is denoted by \(q_{G\to G/H}\), or simply by \(q\) if no possibility of confusion. For \(n\in\mathbb{N}\), we denote \(\mathbb{Z}_{n}=\mathbb{Z}/n\mathbb{Z}\).
We denote \(\mathbb{T}=\{z\in\mathbb{C};\ |z|=1\}\), which is a multiplicative group. When we identify \(\mathbb{T}\) with \(\mathbb{R}/\mathbb{Z}\), we always use the surjection \(\mathbb{R}\ni r\mapsto e^{2\pi ir}\in\mathbb{T}\).
A \(G\)-kernel is a group homomorphism \(\alpha:G\to\mathrm{Out}(A)\), and we always assume that \(\alpha\) is _injective_ in this paper.
We recall the definition of the obstruction class \(\mathrm{ob}(\alpha)\in H^{3}(G,\mathbb{T})\) of a \(G\)-kernel \(\alpha:G\to\mathrm{Out}(A)\) now. We choose a set theoretical lifting \(\widetilde{\alpha}:G\to\mathrm{Aut}(A)\) of \(\alpha_{g}\) and unitaries \(u(g,h)\in U(A)\) satisfying
\[\widetilde{\alpha}_{g}\circ\widetilde{\alpha}_{h}=\mathrm{Ad}\,u(g,h)\circ \widetilde{\alpha}_{gh}.\]
We call such a pair \((\widetilde{\alpha},u)\) a lifting of \(\alpha\) (called an anomalous action in [22]). Associativity implies
\[\mathrm{Ad}\,(\widetilde{\alpha}_{g}(u(h,k))u(g,hk))\circ\widetilde{\alpha}_{ghk}=\mathrm{Ad}(u(g,h)u (gh,k))\circ\widetilde{\alpha}_{ghk},\]
and since the center of \(A\) is trivial, there exists \(\omega(g,h,k)\in\mathbb{T}\) satisfying
\[\widetilde{\alpha}_{g}(u(h,k))u(g,hk)=\omega(g,h,k)u(g,h)u(gh,k). \tag{2.1}\]
We can show that \(\omega\) satisfies the 3-cocycle relation. The definition of \(\omega\in Z^{3}(G,\mathbb{T})\) depends on the choice of the lifting \((\widetilde{\alpha},u)\). However, a different choice replaces
\(\widetilde{\alpha}_{g}\) with \(\operatorname{Ad}v_{g}\circ\widetilde{\alpha}_{g}\) and \(u(g,h)\) with \(\mu(g,h)v_{g}\widetilde{\alpha}_{g}(v_{h})u(g,h)v_{gh}^{*}\), where \(v_{g}\in U(A)\) and \(\mu(g,h)\in\mathbb{T}\). This ends up replacing \(\omega(g,h,k)\) with \(\partial\mu(g,h,k)\omega(g,h,k)\), and hence the cohomology class \([\omega]\in H^{3}(G,\mathbb{T})\) depends only on \(\alpha\).
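As a quick sanity check (our own illustration, not part of the argument), the 3-cocycle relation and the coboundary \(\partial\mu\) can be verified numerically for \(G=\mathbb{Z}_{n}\), writing \(\mathbb{T}=\mathbb{R}/\mathbb{Z}\) additively. The sketch below also tests the standard representative \(\omega(a,b,c)=a\lfloor(b+c)/n\rfloor/n\) of a generator of \(H^{3}(\mathbb{Z}_{n},\mathbb{T})\cong\mathbb{Z}_{n}\):

```python
from itertools import product
import random

n = 4  # G = Z_n, coefficients in T = R/Z written additively

def zero_mod1(x, tol=1e-9):
    f = x % 1.0
    return min(f, 1.0 - f) < tol

def is_3cocycle(w):
    # delta(w)(g,h,k,l) = w(h,k,l) - w(gh,k,l) + w(g,hk,l)
    #                     - w(g,h,kl) + w(g,h,k) = 0 mod Z
    return all(zero_mod1(w(h, k, l) + w(g, (h + k) % n, l) + w(g, h, k)
                         - w((g + h) % n, k, l) - w(g, h, (k + l) % n))
               for g, h, k, l in product(range(n), repeat=4))

def omega(a, b, c):               # standard generator of H^3(Z_n, T)
    return a * ((b + c) // n) / n

random.seed(1)
mu = {(g, h): random.random() for g in range(n) for h in range(n)}

def dmu(g, h, k):                 # coboundary of the 2-cochain mu
    return mu[(h, k)] - mu[((g + h) % n, k)] + mu[(g, (h + k) % n)] - mu[(g, h)]

assert is_3cocycle(omega) and is_3cocycle(dmu)
```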
**Definition 2.1**.: For a \(G\)-kernel \(\alpha\), we define the obstruction class \(\operatorname{ob}(\alpha)\in H^{3}(G,\mathbb{T})\) of \(\alpha\) by the cohomology class of the cocycle \(\omega\) in Eq.(2.1). The obstruction class \(\operatorname{ob}(\alpha)\) depends only on the conjugacy class of \(\alpha\) in \(\operatorname{Hom}(G,\operatorname{Out}(A))\).
For a given pair of \(A\) and \(G\), we can ask the following two fundamental problems about \(G\)-kernels of \(A\):
1. The realization problem: determining the possible values of \(\operatorname{ob}(\alpha)\).
2. The classification problem: seeking sufficiently many invariants to distinguish \(G\)-kernels up to conjugacy.
When \(c\in H^{3}(G,\mathbb{T})\) is realized as \(\operatorname{ob}(\alpha)\) of a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\), we say that \(c\) is realized in \(A\) for simplicity. We introduce several variants of the obstruction class in this paper, and we ask (1) for these variants too.
As mentioned in the Introduction, the two problems are completely solved for the hyperfinite II\({}_{1}\) factor \(\mathcal{R}\) and amenable \(G\).
**Theorem 2.2** (Connes, Jones, Ocneanu).: _Let \(G\) be a countable amenable group, and let \(\mathcal{R}\) be the hyperfinite II\({}_{1}\) factor. Then the third cohomology obstruction is a complete invariant for \(G\)-kernels up to conjugacy. Moreover, every class in \(H^{3}(G,\mathbb{T})\) can be realized in \(\mathcal{R}\)._
As pointed out in [10], the realization part of the above work has important implications in C\({}^{*}\)-algebras too. For example, Connes' construction in [5] shows that every class in \(H^{3}(\mathbb{Z}_{n},\mathbb{T})\) is realized in \(M_{n^{\infty}}\).
More generally, Jones' construction in [23], [24] shows the following (see [10, Theorem 4.3]):
**Theorem 2.3**.: _For every finite group \(G\), every class in \(H^{3}(G,\mathbb{T})\) is realized in \(M_{|G|^{\infty}}\)._
Combining Jones' construction with Kirchberg's \(\mathcal{O}_{2}\) theorem, we get the following:
**Theorem 2.4**.: _For every countable discrete group \(G\), every class in \(H^{3}(G,\mathbb{T})\) is realized in the Cuntz algebra \(\mathcal{O}_{2}\)._
Proof.: Let \(c\in H^{3}(G,\mathbb{T})\). Then [23, Lemma 2.3] shows that there exists a countable group \(\tilde{G}\) with a surjective homomorphism \(p:\tilde{G}\to G\) such that \(N=\ker p\) is abelian and \(p^{*}c=0\) in \(H^{3}(\tilde{G},\mathbb{T})\). We choose an outer action (say a faithful quasi-free action) \(\beta\) of \(\tilde{G}\) on \(\mathcal{O}_{\infty}\). Then [23, Theorem 2.5] shows that the class \(c\) is realized in the twisted crossed product \(A=\mathcal{O}_{\infty}\rtimes_{\beta,\mu}N\) with an appropriate \(2\)-cocycle \(\mu\in Z^{2}(N,\mathbb{T})\) (see [22, Section 3] too). Since \(\beta\) is outer and \(N\) is abelian, \(A\) is a Kirchberg algebra. Now as \(c\) is realized in \(A\otimes\mathcal{O}_{2}\) too, Kirchberg's \(\mathcal{O}_{2}\) theorem finishes the proof.
We recall Evington and Giron Pacheco's argument in [10]. Let \(\tau\) be a trace, and let \(u\in U(A)_{0}\). Then the de la Harpe-Skandalis determinant \(\Delta_{\tau}(u)\in\mathbb{R}/\tau_{*}(K_{0}(A))\) is defined as follows. We choose a smooth path \(\{\tilde{u}(t)\}_{t\in[0,1]}\) from 1 to \(u\) in \(U(A)_{0}\), and set
\[\Delta_{\tau}(u)=\frac{1}{2\pi i}\int_{0}^{1}\tau(\tilde{u}(t)^{-1}\tilde{u}^{ \prime}(t))dt+\tau_{*}(K_{0}(A)).\]
Then \(\Delta_{\tau}:U(A)_{0}\to\mathbb{R}/\tau_{*}(K_{0}(A))\) is a well-defined group homomorphism (see [14, Lemme 1, Proposition 2]).
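Implicit here, and essentially the content of [14, Lemme 1], is path-independence: if \(\tilde{u}_{1}\) and \(\tilde{u}_{2}\) are two smooth paths from \(1\) to \(u\) in \(U(A)_{0}\), the concatenation of \(\tilde{u}_{1}\) with the reversal of \(\tilde{u}_{2}\) is a based loop, and the difference
\[\frac{1}{2\pi i}\int_{0}^{1}\tau(\tilde{u}_{1}(t)^{-1}\tilde{u}_{1}^{\prime}(t))dt-\frac{1}{2\pi i}\int_{0}^{1}\tau(\tilde{u}_{2}(t)^{-1}\tilde{u}_{2}^{\prime}(t))dt\]
is the image of the class of this loop under \(\pi_{1}(U(A))\to K_{0}(A)\xrightarrow{\tau_{*}}\mathbb{R}\), hence lies in \(\tau_{*}(K_{0}(A))\).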
Assume now that \(U(A)\) is connected and \(A\) has a trace \(\tau\) preserved by a \(G\)-kernel \(\alpha\). Let \((\widetilde{\alpha},u)\) be a lifting of a \(G\)-kernel \(\alpha:G\to\mathrm{Out}(A)\). Then we have
\[\Delta_{\tau}(u(h,k))+\Delta_{\tau}(u(g,hk))=\Delta_{\tau}(\omega(g,h,k))+ \Delta_{\tau}(u(g,h))+\Delta_{\tau}(u(gh,k)).\]
This identity follows by applying the homomorphism \(\Delta_{\tau}\) to Eq.(2.1) and using \(\Delta_{\tau}\circ\widetilde{\alpha}_{g}=\Delta_{\tau}\), which holds because \(\tau\circ\widetilde{\alpha}_{g}=\tau\). Let \(s:\mathbb{T}\to\mathbb{R}/\tau_{*}(K_{0}(A))\) be the map given by \(e^{2\pi it}\mapsto t+\tau_{*}(K_{0}(A))\), well defined since \(1=\tau_{*}[1]_{0}\in\tau_{*}(K_{0}(A))\). Then the equality above means \(s_{*}(\mathrm{ob}(\alpha))=0\). This immediately implies the following theorem.
**Theorem 2.5** ([10, Theorem A]).: _For the Jiang-Su algebra \(\mathcal{Z}\), the obstruction \(\mathrm{ob}(\alpha)\) is always trivial for any discrete group \(G\) and any \(G\)-kernel \(\alpha:G\to\mathrm{Out}(\mathcal{Z})\)._
Using \(s_{*}(\mathrm{ob}(\alpha))=0\) together with the fact that a \(G\)-kernel restricts to corners by projections keeping the same obstruction, they also obtained the following theorem.
**Theorem 2.6** ([10, Theorem B]).: _Let \(A\) be a UHF algebra and let \(G\) be a finite group. Let \(\alpha:G\to\mathrm{Out}(A)\) be a \(G\)-kernel, and let \(r\) be the order of \(\mathrm{ob}(\alpha)\), which is finite since \(G\) is finite. Then \(A\cong A\otimes M_{r^{\infty}}\)._
The coefficient short exact sequence
\[0\to\tau_{*}(K_{0}(A))/\mathbb{Z}\to\mathbb{T}\to\mathbb{R}/\tau_{*}(K_{0}(A))\to 0 \tag{2.2}\]
induces a long exact sequence of cohomology, and \(s_{*}(\mathrm{ob}(\alpha))=0\) implies that \(\mathrm{ob}(\alpha)\) comes from a class in \(H^{3}(G,\tau_{*}(K_{0}(A))/\mathbb{Z})\), which was used in the second theorem. In fact, it is more convenient to introduce an invariant valued in the cohomology group \(H^{3}(G,\tau_{*}(K_{0}(A))/\mathbb{Z})\) directly.
**Definition 2.7**.: Assume that \(A\) has a trace \(\tau\) preserved by a \(G\)-kernel \(\alpha:G\to\mathrm{Out}(A)\) and \(U(A)\) is connected. We choose a lifting \((\widetilde{\alpha},u)\) of \(\alpha\) satisfying \(u(g,h)\in\ker\Delta_{\tau}\). Then the resulting cocycle \(\omega\) determined by Eq.(2.1) satisfies \(\omega(g,h,k)\in e^{2\pi i\tau_{*}(K_{0}(A))}\). With such a cocycle, we define
\[\mathrm{ob}_{\tau}(\alpha)=[\omega]\in H^{3}(G,\tau_{*}(K_{0}(A))/\mathbb{Z}).\]
When \(A\) has a unique trace and \(K_{0}(A)\) has no infinitesimal elements, we simply denote \(H^{3}(G,\tau_{*}(K_{0}(A))/\mathbb{Z})\) by \(H^{3}(G,\widetilde{K}_{0}(A))\) (recall \(\widetilde{K}_{0}(A)=K_{0}(A)/\mathbb{Z}[1]_{0}\)).
To see that \(\operatorname{ob}_{\tau}(\alpha)\) is well defined, let \((\widetilde{\alpha}^{\prime},u^{\prime})\) be another lifting of \(\alpha\) satisfying \(\Delta_{\tau}(u^{\prime}(g,h))=0\). Then there exist \(v_{g}\in U(A)\) and \(\mu(g,h)\in\mathbb{T}\) satisfying \(\widetilde{\alpha}^{\prime}_{g}=\operatorname{Ad}v_{g}\circ\widetilde{\alpha} _{g}\), \(u^{\prime}(g,h)=\mu(g,h)v_{g}\widetilde{\alpha}_{g}(v_{h})u(g,h)v_{gh}^{*}\), and
\[\Delta_{\tau}(\mu(g,h))=\Delta_{\tau}(v_{gh})-\Delta_{\tau}(v_{g})-\Delta_{ \tau}(v_{h}).\]
This means that there exist \(\eta(g)\in\mathbb{R}\) and \(\zeta(g,h)\in\tau_{*}K_{0}(A)\) satisfying
\[\mu(g,h)=e^{2\pi i\partial\eta(g,h)}e^{2\pi i\zeta(g,h)},\]
which shows that \(\operatorname{ob}_{\tau}(\alpha)\) is well-defined.
_Remark 2.8_.: The long exact sequence arising from the coefficient short exact sequence Eq.(2.2) shows that the map
\[H^{3}(G,\tau_{*}K_{0}(A)/\mathbb{Z})\to H^{3}(G,\mathbb{T})\]
is injective if and only if the connecting map
\[\partial_{A}:H^{2}(G,\mathbb{R}/\tau_{*}K_{0}(A))\to H^{3}(G,\tau_{*}K_{0}(A)/ \mathbb{Z})\]
is \(0\). The universal coefficient theorem shows that this is equivalent to
\[\operatorname{Ext}(H_{2}(G,\mathbb{Z}),\tau_{*}K_{0}(A)/\mathbb{Z})=\{0\}.\]
This condition is satisfied if either \(H_{2}(G,\mathbb{Z})\) is free (e.g. \(G=\mathbb{Z}^{n}\)), or \(\tau_{*}K_{0}(A)/\mathbb{Z}\) is divisible (e.g. \(A=M_{\mathfrak{P}^{\infty}}\)).
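Concretely, in these two cases one has
\[H_{2}(\mathbb{Z}^{n},\mathbb{Z})\cong{\textstyle\bigwedge^{2}}\mathbb{Z}^{n}\cong\mathbb{Z}^{n(n-1)/2},\qquad\tau_{*}K_{0}(M_{\mathfrak{P}^{\infty}})/\mathbb{Z}\cong\bigoplus_{p\in\mathfrak{P}}\mathbb{Z}[1/p]/\mathbb{Z},\]
the former being free and the latter a direct sum of Prüfer groups, hence divisible.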
The above two theorems inspire the following two conjectures.
**Conjecture 2.9**.: _Let \(G\) be a countable discrete group, and assume that \(c\in H^{3}(G,\mathbb{T})\) has a non-zero finite order \(r\). Then \(c\) is realized as \(\operatorname{ob}(\alpha)\) of a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(M_{r^{\infty}})\)._
Conjecture 2.9 is true for every finite abelian group thanks to Theorem 2.3, though it is not clear if it holds for general finite groups.
**Conjecture 2.10**.: _Let \(G\) be a countable discrete group with a reasonable finiteness condition for its cohomology, and let \(\mathfrak{P}\) be a set of prime numbers. Then every class in \(H^{3}(G,\widetilde{K}_{0}(M_{\mathfrak{P}^{\infty}}))\) can be realized as \(\operatorname{ob}_{\tau}(\alpha)\) of a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(M_{\mathfrak{P}^{\infty}})\)._
In view of
\[\widetilde{K}_{0}(M_{\mathfrak{P}^{\infty}})\cong\bigoplus_{p\in\mathfrak{P}}\mathbb{Z}[1/p]/\mathbb{Z}=\bigoplus_{p\in\mathfrak{P}}\varinjlim_{m}\mathbb{Z}_{p^{m}},\]
it is reasonable to tackle Conjecture 2.10 assuming a finiteness condition on \(G\) such as \(FP_{3}\) (see [3, p.193] for the definition). Under this assumption, we have
\[H^{3}(G,\widetilde{K}_{0}(M_{\mathfrak{P}^{\infty}}))=\bigoplus_{p\in\mathfrak{ P}}\varinjlim_{m}H^{3}(G,\mathbb{Z}_{p^{m}}),\]
(see [3, Chapter VIII, Proposition 4.6]), and Conjecture 2.10 is reduced to the case of \(M_{p^{\infty}}\), and is further reduced to Conjecture 2.9, as the subtle difference between \(\operatorname{ob}(\alpha)\) and \(\operatorname{ob}_{\tau}(\alpha)\) can be ignored in this case (see Remark 2.8 above). Indeed, this reduction argument works for \(\mathbb{Z}^{n}\).
**Theorem 2.11**.: _Conjecture 2.10 is true for \(G=\mathbb{Z}^{n}\)._
Proof.: As stated above, it suffices to verify the statement for the UHF algebra \(M_{p^{\infty}}\) for each prime \(p\). Furthermore, it suffices to show that every class in \(\iota_{m*}H^{3}(\mathbb{Z}^{n},\mathbb{Z}_{p^{m}})\) is realized in \(M_{p^{\infty}}\), where \(\iota_{m}:\mathbb{Z}_{p^{m}}\to\mathbb{Z}[1/p]/\mathbb{Z}\) is the inclusion map. We write \(H_{n}(G)=H_{n}(G,\mathbb{Z})\) and \(C=\mathbb{Z}_{p^{m}}\) for simplicity.
Let \(c\in H^{3}(\mathbb{Z}^{n},C)\). Since \(H_{*}(\mathbb{Z}^{n})\) is a finitely generated free abelian group, we can identify \(c\) with an element in \(\operatorname{Hom}(H_{3}(\mathbb{Z}^{n}),C)\) by the universal coefficient theorem. Let \(q:\mathbb{Z}^{n}\to C^{n}\) be the quotient map. We show that \(c\) is in the image of \(q^{*}:H^{3}(C^{n},C)\to H^{3}(\mathbb{Z}^{n},C)\).
The Kunneth formula implies that
\[H_{3}(\mathbb{Z}^{n})=\bigoplus_{i_{1}+i_{2}+\cdots+i_{n}=3,\ i_{j}=0,1}H_{i_ {1}}(\mathbb{Z})\otimes H_{i_{2}}(\mathbb{Z})\otimes\cdots\otimes H_{i_{n}}( \mathbb{Z})\cong\mathbb{Z}^{n(n-1)(n-2)/6},\]
and also that
\[L=\bigoplus_{i_{1}+i_{2}+\cdots+i_{n}=3}H_{i_{1}}(C)\otimes H_{i_{2}}(C) \otimes\cdots\otimes H_{i_{n}}(C)\]
is a direct summand of \(H_{3}(C^{n})\). Note that we have \(H_{0}(C)=\mathbb{Z}\) and \(H_{1}(C)=C\). This shows that the image of \(q_{*}:H_{3}(\mathbb{Z}^{n})\to H_{3}(C^{n})\) is a direct summand of \(L\) isomorphic to \(C^{n(n-1)(n-2)/6}\), and \(\ker q_{*}=p^{m}H_{3}(\mathbb{Z}^{n})\). Since the kernel of \(c\) as an element in \(\operatorname{Hom}(H_{3}(\mathbb{Z}^{n}),C)\) includes \(p^{m}H_{3}(\mathbb{Z}^{n})\), there exists \(c^{\prime}\in\operatorname{Hom}(H_{3}(C^{n}),C)\) satisfying \(c=c^{\prime}\circ q_{*}\). Now the universal coefficient theorem shows that there exists \(c_{1}\in H^{3}(C^{n},C)\) satisfying \(q^{*}(c_{1})=c\).
Let \(\iota:\mathbb{Z}[1/p]/\mathbb{Z}\to\mathbb{T}\) be the inclusion map. Thanks to Remark 2.8, it suffices to show the realization of \((\iota\circ\iota_{m})_{*}c\) as \(\operatorname{ob}(\alpha)\) instead of \(\iota_{m*}c\) as \(\operatorname{ob}_{\tau}(\alpha)\). Applying Theorem 2.3 to \(C^{n}\) and \((\iota\circ\iota_{m})_{*}c_{1}\in H^{3}(C^{n},\mathbb{T})\), we see that \((\iota\circ\iota_{m})_{*}c_{1}\) is realized as \(\operatorname{ob}(\alpha)\) of a \(C^{n}\)-kernel \(\alpha:C^{n}\to\operatorname{Out}(M_{p^{\infty}})\). Letting \(\beta\) be the composition of \(q\) and \(\alpha\), and taking tensor product of \(\beta\) and an outer action of \(\mathbb{Z}^{n}\) on \(M_{p^{\infty}}\), we get a desired \(\mathbb{Z}^{n}\)-kernel.
## 3 A new invariant
In this section, we assume that \(U(A)\) is connected. Note that we have a group homomorphism from \(\pi_{1}(U(A))\) to \(K_{1}(SA)=K_{0}(A)\) by the Bott periodicity. For simplicity, we assume that \(\pi_{1}(U(A))\cong K_{0}(A)\), which is the case, for example, if \(A\) is Jiang-Su absorbing.
### Invariant \(\kappa^{3}(\alpha,u)\) for a cocycle action \((\alpha,u)\).
Our definition of a new invariant \(\widetilde{\operatorname{ob}}(\alpha)\) for a \(G\)-kernel \(\alpha\) is a modification of that of an invariant
\[\kappa^{3}(\alpha,u)\in H^{3}(G,K_{0}(A))\]
for a cocycle action \((\alpha,u)\) introduced in [19, Section 8.1]. Since we need it in the next section, we recall its definition here before introducing \(\widetilde{\operatorname{ob}}(\alpha)\).
A cocycle action \((\alpha,u)\) of \(G\) on \(A\) is a pair of a map \(\alpha:G\to\operatorname{Aut}(A)\) and a family of unitaries \(u(g,h)\in U(A)\) satisfying
\[\alpha_{g}\circ\alpha_{h}=\operatorname{Ad}u(g,h)\circ\alpha_{gh},\]
\[\alpha_{g}(u(h,k))u(g,hk)=u(g,h)u(gh,k).\]
A cocycle action \((\alpha,u)\) is outer if \(\alpha_{g}\) is outer for every \(g\in G\setminus\{1\}\).
We say that two cocycle actions \((\alpha,u)\) and \((\beta,v)\) on \(A\) are equivalent if there exist \(w_{g}\in U(A)\) satisfying \(\beta_{g}=\operatorname{Ad}w_{g}\circ\alpha_{g}\), and
\[v(g,h)=w_{g}\alpha_{g}(w_{h})u(g,h)w_{gh}^{*}.\]
Up to equivalence, we may and do assume that \(\alpha_{e}=\operatorname{id}\) and \(u(g,e)=u(e,g)=1_{A}\). If there exists no non-trivial order \(2\) element in \(G\), we may further normalize \((\alpha,u)\) so that \(\alpha_{g^{-1}}=\alpha_{g}^{-1}\) and \(u(g,g^{-1})=1\) hold.
**Definition 3.1**.: For a cocycle \(G\)-action \((\alpha,u)\) on \(A\), we choose a continuous path \(\{\widetilde{u}(g,h)(t)\}_{t\in[0,1]}\) in \(U(A)\) from \(1\) to \(u(g,h)\) for each pair \(g,h\). Then
\[\partial\widetilde{u}(g,h,k)(t):=\alpha_{g}(\widetilde{u}(h,k)(t))\widetilde {u}(g,hk)(t)\widetilde{u}(gh,k)(t)^{-1}\widetilde{u}(g,h)(t)^{-1}\]
is a based loop in \(U(A)\), and \([\partial\widetilde{u}(g,h,k)]_{0}\in K_{0}(A)\) form a \(3\)-cocycle in \(Z^{3}(G,K_{0}(A))\). We define \(\kappa^{3}(\alpha,u)\) to be its cohomology class in \(H^{3}(G,K_{0}(A))\). The class \(\kappa^{3}(\alpha,u)\) does not depend on the choices of the paths.
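Note that \(\partial\widetilde{u}(g,h,k)\) is indeed a based loop: we have \(\partial\widetilde{u}(g,h,k)(0)=1\), and the second axiom of a cocycle action gives
\[\partial\widetilde{u}(g,h,k)(1)=\alpha_{g}(u(h,k))u(g,hk)u(gh,k)^{-1}u(g,h)^{-1}=1.\]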
If \((\alpha,u)\) and \((\beta,v)\) are equivalent, we have \(\kappa^{3}(\alpha,u)=\kappa^{3}(\beta,v)\).
We say that two cocycle actions \((\alpha,u)\) on \(A\) and \((\beta,v)\) on \(B\) are cocycle conjugate if there exists an isomorphism \(\theta:A\to B\) such that \((\theta\circ\alpha\circ\theta^{-1},\theta(u))\) and \((\beta,v)\) are equivalent. Under this relation, we have \(\theta_{*}\kappa^{3}(\alpha,u)=\kappa^{3}(\beta,v)\).
### Invariant \(\widetilde{\operatorname{ob}}(\alpha)\) for a \(G\)-kernel \(\alpha\).
Now we introduce a module \(K_{0}^{\#}(A)\) and an invariant \(\widetilde{\operatorname{ob}}(\alpha)\in H^{3}(G,K_{0}^{\#}(A))\) for a \(G\)-kernel \(\alpha\). We state some of their basic properties without giving detailed proofs, and the reader is referred to Giron Pacheco's thesis [12, Section 5] for details.
We set
\[K_{0}^{\#}(A)=\{f\in U(C([0,1],A));\ f(0)=1,\ f(1)\in\mathbb{T}\}/U(C_{0}((0,1 ),A))_{0},\]
where \(U(C_{0}((0,1),A))_{0}\) is the connected component of \(1\) in
\[\{f\in U(C([0,1],A));\ f(0)=f(1)=1\}.\]
Then \(K_{0}^{\#}(A)\) is an abelian group. Let \(\mathrm{ev}_{1}\) be the evaluation map at \(1\), and let
\[j_{A}:K_{0}(A)=\pi_{1}(U(A))\to K_{0}^{\#}(A)\]
be the inclusion map. Then we have a short exact sequence:
\[0\to K_{0}(A)\xrightarrow{j_{A}}K_{0}^{\#}(A)\xrightarrow{\mathrm{ev}_{1}} \mathbb{T}\to 0. \tag{3.1}\]
For \(r\in\mathbb{R}\), let \(e_{r}(t)=e^{2\pi irt}\). For two modules \(M_{1}\) and \(M_{2}\), we denote by \(\mathrm{pr}_{i}\) for \(i=1,2\) the projection from \(M_{1}\times M_{2}\) onto the \(i\)-th component. We abuse the notation and use the same symbol for the map
\[(M_{1}\times M_{2})/\langle(m_{1},m_{2})\rangle\to M_{i}/\langle m_{i}\rangle\]
induced by \(\mathrm{pr}_{i}\).
The following presentation of \(K_{0}^{\#}(A)\) is useful to identify \(K_{0}^{\#}(A)\) in many concrete examples.
**Lemma 3.2**.:
1. _There exists a unique isomorphism_ \[\varphi_{A}:K_{0}^{\#}(A)\to(K_{0}(A)\times\mathbb{R})/\mathbb{Z}([1]_{0},-1),\] _satisfying_ \[\varphi_{A}\circ j_{A}(x)=[(x,0)],\] \[\varphi_{A}([e_{r}])=[(0,r)],\] _for all_ \(x\in K_{0}(A)\) _and_ \(r\in\mathbb{R}\)_. We also have_ \(\mathrm{ev}_{1}=\mathrm{pr}_{2}\circ\varphi_{A}\)_._
2. _Assume that there exists_ \(\rho\in\mathrm{Hom}(K_{0}(A),\mathbb{R})\) _satisfying_ \(\rho([1]_{0})=1\)_. Then there exists a unique isomorphism_ \(\psi_{A,\rho}:K_{0}^{\#}(A)\to\widetilde{K}_{0}(A)\times\mathbb{R}\) _satisfying_ \[\psi_{A,\rho}\circ j_{A}(x)=([x],\rho(x)),\] \[\psi_{A,\rho}([e_{r}])=(0,r),\] _for all_ \(x\in K_{0}(A)\) _and_ \(r\in\mathbb{R}\)_. Moreover, we have_ \[\mathrm{ev}_{1}\circ\psi_{A,\rho}^{-1}([x],y)=e^{2\pi i(-\rho(x)+y)}.\]
Proof.: (1) The two maps \(j_{A}:K_{0}(A)\to K_{0}^{\#}(A)\) and \(\mathbb{R}\ni r\mapsto[e_{r}]\in K_{0}^{\#}(A)\) induce a surjective homomorphism from \(K_{0}(A)\times\mathbb{R}\) onto \(K_{0}^{\#}(A)\). Since its kernel is \(\mathbb{Z}([1]_{0},-1)\), we get the statement.
(2) The statement follows from the fact that there exists an isomorphism
\[(K_{0}(A)\times\mathbb{R})/\mathbb{Z}([1]_{0},-1)\ni[(x,r)]\mapsto([x],\rho( x)+r)\in\widetilde{K}_{0}(A)\times\mathbb{R}.\]
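For instance, for \(A=M_{p^{\infty}}\) we have \(K_{0}(A)=\mathbb{Z}[1/p]\subset\mathbb{R}\) with \([1]_{0}=1\), so \(\rho=\tau_{*}\) satisfies the hypothesis of (2), and Lemma 3.2 yields
\[K_{0}^{\#}(M_{p^{\infty}})\cong(\mathbb{Z}[1/p]/\mathbb{Z})\times\mathbb{R}.\]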
**Definition 3.3**.: For a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\) with a lifting \((\widetilde{\alpha},u)\), we choose a continuous path \(\{\widetilde{u}(g,h)(t)\}_{t\in[0,1]}\) in \(U(A)\) from \(1\) to \(u(g,h)\) for each pair \(g,h\in G\). Then
\[\widetilde{\omega}(g,h,k)(t):=\widetilde{\alpha}_{g}(\widetilde{u}(h,k)(t)) \widetilde{u}(g,hk)(t)\widetilde{u}(gh,k)(t)^{-1}\widetilde{u}(g,h)(t)^{-1}\]
is a continuous path from \(1\) to \(\omega(g,h,k)\in\mathbb{T}\) in \(U(A)\), where \(\omega\) is as in Eq.(2.1), and \([\widetilde{\omega}(g,h,k)]\in K_{0}^{\#}(A)\) form a \(3\)-cocycle. We define \(\widetilde{\operatorname{ob}}(\alpha)\) to be its cohomology class in \(H^{3}(G,K_{0}^{\#}(A))\). The class \(\widetilde{\operatorname{ob}}(\alpha)\) does not depend on any choices made for its definition, and depends only on \(\alpha\).
We define the reduced version of \(\widetilde{\operatorname{ob}}(\alpha)\) by
\[\widetilde{\operatorname{ob}}^{r}(\alpha)=-(\operatorname{pr}_{1}\circ \varphi_{A})_{*}\widetilde{\operatorname{ob}}(\alpha)\in H^{3}(G,\widetilde{K }_{0}(A)).\]
When \(\rho\) as in Lemma 3.2 is available, we have \(\widetilde{\operatorname{ob}}^{r}(\alpha)=-(\operatorname{pr}_{1}\circ \psi_{A,\rho})_{*}\widetilde{\operatorname{ob}}(\alpha)\) too.
By construction, we have \(\operatorname{ev}_{1*}(\widetilde{\operatorname{ob}}(\alpha))=\operatorname{ ob}(\alpha)\), and \(\widetilde{\operatorname{ob}}(\alpha)\) has more information than \(\operatorname{ob}(\alpha)\). When \(\operatorname{ob}(\alpha)=0\), we can adjust \((\widetilde{\alpha},u)\) so that it gives a cocycle action of \(G\) on \(A\). In this case, we have \(j_{A*}(\kappa^{3}(\widetilde{\alpha},u))=\widetilde{\operatorname{ob}}(\alpha)\), which is compatible with the cohomology long exact sequence arising from Eq.(3.1).
Applying Lemma 3.2,(1) to the finite Cuntz algebra \(A=\mathcal{O}_{n+1}\), for which \(K_{0}(\mathcal{O}_{n+1})\cong\mathbb{Z}_{n}\) with \([1]_{0}=1\), we obtain a commutative diagram identifying \(K_{0}^{\#}(\mathcal{O}_{n+1})\cong(\mathbb{Z}_{n}\times\mathbb{R})/\mathbb{Z}(1,-1)\cong\mathbb{R}/n\mathbb{Z}\), under which \(\mathrm{ev}_{1}\) becomes the quotient map \(\mathbb{R}/n\mathbb{Z}\to\mathbb{R}/\mathbb{Z}=\mathbb{T}\). This immediately implies
**Theorem 3.4**.: _For every \(G\)-kernel \(\alpha:G\to\operatorname{Out}(\mathcal{O}_{n+1})\), its obstruction \(\operatorname{ob}(\alpha)\) belongs to \(nH^{3}(G,\mathbb{T})\). In particular, for every prime number \(p\), every \(\mathbb{Z}_{p}\)-kernel \(\alpha:\mathbb{Z}_{p}\to\operatorname{Out}(\mathcal{O}_{np+1})\) has trivial obstruction._
If \(p\) does not divide \(n\in\mathbb{N}\), we have \(\mathcal{O}_{n+1}\cong\mathcal{O}_{n+1}\otimes M_{p^{\infty}}\), and every class in \(H^{3}(\mathbb{Z}_{p},\mathbb{T})\cong\mathbb{Z}_{p}\) is realized in \(\mathcal{O}_{n+1}\) by Connes' construction. The realization problem in the prime power order case, e.g. \(A=\mathcal{O}_{3}\) and \(G=\mathbb{Z}_{4}\), is much subtler, and nothing is known about it to the best of the author's knowledge. More generally, the following problem is very fundamental.
**Problem 3.5**.: _Decide the range of \(\widetilde{\operatorname{ob}}\) for general \(\mathbb{Z}_{m}\) and \(\mathcal{O}_{n+1}\)._
Applying Lemma 3.2,(2) to the infinite Cuntz algebra \(A=\mathcal{O}_{\infty}\), for which \(K_{0}(\mathcal{O}_{\infty})=\mathbb{Z}\) with \([1]_{0}=1\) and hence \(\widetilde{K}_{0}(\mathcal{O}_{\infty})=0\), we obtain a commutative diagram identifying \(K_{0}^{\#}(\mathcal{O}_{\infty})\cong\mathbb{R}\), under which \(\mathrm{ev}_{1}\) becomes the quotient map \(q_{\mathbb{R}\to\mathbb{T}}\). This implies
**Theorem 3.6**.: _For every countable discrete group \(G\) and every \(G\)-kernel \(\alpha:G\to\operatorname{Out}(\mathcal{O}_{\infty})\), its obstruction belongs to \((q_{\mathbb{R}\to\mathbb{T}})_{*}H^{3}(G,\mathbb{R})\). In particular, it is trivial for any finite \(G\), since \(H^{3}(G,\mathbb{R})\) vanishes in that case._
We identify \(K_{0}^{\#}(\mathcal{O}_{\infty})\) with \(\mathbb{R}\) and \(j_{A}([1]_{0})\in K_{0}^{\#}(\mathcal{O}_{\infty})\) with \(1\) in what follows.
**Conjecture 3.7**.: _Let \(G\) be a countable discrete group with a reasonable finiteness condition for its cohomology, and let \(\mathfrak{P}\) be a (possibly empty) set of primes. Then every class in_
\[H^{3}(G,K_{0}^{\#}(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}))\]
_is realized as \(\widetilde{\operatorname{ob}}(\alpha)\) of a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{ \infty})\)._
Note that we have
\[K_{0}^{\#}(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty})\cong\widetilde {K}_{0}(M_{\mathfrak{P}^{\infty}})\times\mathbb{R}\]
thanks to Lemma 3.2,(2).
### Stably finite case
In this subsection, we assume that \(A\) has a trace preserved by a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\), and show that \(\widetilde{\operatorname{ob}}(\alpha)\) and its reduced form \(\widetilde{\operatorname{ob}}^{r}(\alpha)\) have the same information.
We denote by \(C_{*}^{\infty}([0,1],U(A))\) the set of smooth maps \(f:[0,1]\to U(A)\) satisfying \(f(0)=1\). We define \(\tilde{\Delta}_{\tau}:C_{*}^{\infty}([0,1],U(A))\to\mathbb{R}\) by
\[\tilde{\Delta}_{\tau}(f)=\frac{1}{2\pi i}\int_{0}^{1}\tau(f(t)^{-1}f^{\prime} (t))dt.\]
Then we have \(\Delta_{\tau}(f(1))=\tilde{\Delta}_{\tau}(f)+\tau_{*}K_{0}(A)\) by definition, and \(\tilde{\Delta}_{\tau}(f)=\tau_{*}[f]_{0}\) if \(f(1)=1\).
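For example, for the path \(e_{r}\) from \(1\) to \(e^{2\pi ir}\) we have \(e_{r}(t)^{-1}e_{r}^{\prime}(t)=2\pi ir\), so
\[\tilde{\Delta}_{\tau}(e_{r})=\frac{1}{2\pi i}\int_{0}^{1}\tau(2\pi ir)dt=r.\]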
**Lemma 3.8**.: _If \(f\in C_{*}^{\infty}([0,1],U(A))\) satisfies \(f(1)\in e^{2\pi i\tau_{*}K_{0}(A)}\) and \(\tilde{\Delta}_{\tau}(f)=0\), we have \(\psi_{A,\tau_{*}}([f])\in\widetilde{K}_{0}(A)\times\{0\}\)._
Proof.: We take \(x\in K_{0}(A)\) satisfying \(f(1)=e^{2\pi i\tau_{*}x}\), and set \(g(t)=f(t)e^{-2\pi i\tau_{*}xt}\). Then \(g\) is a based loop in \(U(A)\) giving an element of \(K_{0}(A)\), and
\[\tau_{*}[g]_{0}=\tilde{\Delta}_{\tau}(g)=\tilde{\Delta}_{\tau}(f)-\tau_{*}x=-\tau_{*}x.\]
Since \(f=ge_{\tau_{*}x}\), we get
\[\psi_{A,\tau_{*}}([f])=([[g]_{0}]\,,\tau_{*}[g]_{0}+\tau_{*}x)=([[g]_{0}]\,,0),\]
showing the statement.
Let \(\tau_{*,q}:\widetilde{K}_{0}(A)\to\mathbb{T}\) be the map defined by
\[\tau_{*,q}([x])=e^{2\pi i\tau_{*}x}.\]
**Theorem 3.9**.: _We have_
\[(\operatorname{pr}_{2}\circ\psi_{A,\tau_{*}})_{*}\widetilde{\operatorname{ob}} (\alpha)=0,\]
\[(\tau_{*,q})_{*}\widetilde{\operatorname{ob}}^{r}(\alpha)=\operatorname{ob}_{ \tau}(\alpha).\]
Proof.: Let \((\widetilde{\alpha},u)\) be a lifting of the \(G\)-kernel \(\alpha\) satisfying \(\Delta_{\tau}(u(g,h))=0\), so that \(\operatorname{ob}_{\tau}(\alpha)\) is given by the cohomology class of
\[\omega(g,h,k)=\widetilde{\alpha}_{g}(u(h,k))u(g,hk)u(gh,k)^{-1}u(g,h)^{-1}.\]
We can choose a smooth path \(\widetilde{u}(g,h)\in C_{*}^{\infty}([0,1],U(A))\) from \(1\) to \(u(g,h)\) satisfying \(\tilde{\Delta}_{\tau}(\widetilde{u}(g,h))=0\) for each pair \(g,h\in G\). With this choice of paths, the path
\[\widetilde{\omega}(g,h,k)(t)=\widetilde{\alpha}_{g}(\widetilde{u}(h,k)(t)) \widetilde{u}(g,hk)(t)\widetilde{u}(gh,k)(t)^{-1}\widetilde{u}(g,h)(t)^{-1}\]
satisfies \(\widetilde{\Delta}_{\tau}(\widetilde{\omega}(g,h,k))=0\). Thus Lemma 3.8 and Lemma 3.2 imply
\[\operatorname{pr}_{2}\circ\psi_{A,\tau_{*}}([\widetilde{\omega}(g,h,k)]) =0,\]
\[\tau_{*,q}\circ\operatorname{pr}_{1}\circ\psi_{A,\tau_{*}}([\widetilde{\omega}(g,h,k)])=\omega(g,h,k)^{-1}.\]
The first equality gives the first statement, and the second, combined with \(\widetilde{\operatorname{ob}}^{r}(\alpha)=-(\operatorname{pr}_{1}\circ\psi_{A,\tau_{*}})_{*}\widetilde{\operatorname{ob}}(\alpha)\), gives the second.
Since \(\mathcal{O}_{\infty}\) is \(KK\)-equivalent to \(\mathbb{C}\), we identify \(K_{0}(A\otimes\mathcal{O}_{\infty})\) with \(K_{0}(A)\). Note that although \(A\otimes\mathcal{O}_{\infty}\) has no trace, the homomorphism \(\tau_{*}:K_{0}(A\otimes\mathcal{O}_{\infty})\to\mathbb{R}\) makes sense.
**Lemma 3.10**.: _Let \(\alpha:G\to\operatorname{Out}(A)\) be a \(G\)-kernel with an invariant trace, and let \(\beta:G\to\operatorname{Out}(\mathcal{O}_{\infty})\) be a \(G\)-kernel. Then we have_
\[(\psi_{A\otimes\mathcal{O}_{\infty},\tau_{*}})_{*}\widetilde{\operatorname{ob} }(\alpha\otimes\beta)=(-\widetilde{\operatorname{ob}}^{r}(\alpha),\widetilde{ \operatorname{ob}}(\beta))\in H^{3}(G,\widetilde{K}_{0}(A))\times H^{3}(G, \mathbb{R}).\]
Proof.: Let \(j_{l}:A\to A\otimes\mathcal{O}_{\infty}\) and \(j_{r}:\mathcal{O}_{\infty}\to A\otimes\mathcal{O}_{\infty}\) be the maps given by \(x\mapsto x\otimes 1\) and \(x\mapsto 1\otimes x\) respectively. Then the statement follows from a commutative diagram comparing \(K_{0}^{\#}(A)\) and \(K_{0}^{\#}(\mathcal{O}_{\infty})\) with \(K_{0}^{\#}(A\otimes\mathcal{O}_{\infty})\) via \(j_{l*}\) and \(j_{r*}\), where \(\iota:K_{0}(\mathcal{O}_{\infty})\to\mathbb{R}\) is the map given by \(\iota([1]_{0})=1\).
The lemma shows that the realization problem of the invariant \(\widetilde{\mathrm{ob}}\) for \(A\otimes\mathcal{O}_{\infty}\) is reduced to that for \(A\) and for \(\mathcal{O}_{\infty}\). In particular, we get
**Corollary 3.11**.: _If Conjecture 2.10 is true for \((G,M_{\mathfrak{q}^{\infty}})\) and Conjecture 3.7 is true for \((G,\mathcal{O}_{\infty})\), then Conjecture 3.7 is true for \((G,M_{\mathfrak{q}^{\infty}}\otimes\mathcal{O}_{\infty})\)._
Before finishing this section, we discuss the case where a \(G\)-kernel comes from a cocycle action. Assume that an outer cocycle action \((\alpha,u)\) has an invariant trace \(\tau\). We abuse notation and denote the \(G\)-kernel arising from \(\alpha\) by the same symbol \(\alpha\). Then of course we have \(\mathrm{ob}(\alpha)=0\). However, \(\mathrm{ob}_{\tau}(\alpha)\) may not be trivial. By construction, we have \((q_{K_{0}(A)\to\widetilde{K}_{0}(A)})_{*}\kappa^{3}(\alpha,u)=-\,\widetilde{\mathrm{ob}}^{r}(\alpha)\), and so
\[\mathrm{ob}_{\tau}(\alpha)=-(q_{\tau_{*}K_{0}(A)\to\tau_{*}K_{0}(A)/\mathbb{ Z}}\circ\tau_{*})_{*}\kappa^{3}(\alpha,u).\]
**Proposition 3.12**.: _Let \((\alpha,u)\) be an outer cocycle action of \(G\) on \(A\) with an invariant trace \(\tau\)._
1. _The class_ \((\tau_{*})_{*}\kappa^{3}(\alpha,u)\in H^{3}(G,\tau_{*}K_{0}(A))\) _is the image of the class_ \[[\Delta_{\tau}(u(g,h))]\in H^{2}(G,\mathbb{R}/\tau_{*}K_{0}(A))\] _under the connecting map of the cohomology long exact sequence arising from the coefficient short exact sequence_ \[0\to\tau_{*}K_{0}(A)\to\mathbb{R}\to\mathbb{R}/\tau_{*}K_{0}(A)\to 0.\]
2. _The class_ \(\mathrm{ob}_{\tau}(\alpha)\in H^{3}(G,\tau_{*}K_{0}(A)/\mathbb{Z})\) _is the image of the class_ \[-[\Delta_{\tau}(u(g,h))]\in H^{2}(G,\mathbb{R}/\tau_{*}K_{0}(A))\] _under the connecting map of the cohomology long exact sequence arising from the coefficient short exact sequence Eq.(_2.2_)._
Proof.: For each pair \(g,h\in G\), we choose \(\widetilde{u}(g,h)\in C_{*}^{\infty}([0,1],U(A))\) satisfying \(\widetilde{u}(g,h)(1)=u(g,h)\), and define \(\partial\widetilde{u}(g,h,k)\) as in the definition of \(\kappa^{3}(\alpha,u)\). Let \(\mu(g,h)=\tilde{\Delta}_{\tau}(\widetilde{u}(g,h))\). Then by definition, \((\tau_{*})_{*}\kappa^{3}(\alpha,u)\) is the cohomology class given by \(\tau_{*}[\partial\widetilde{u}(g,h,k)]_{0}\). On the other hand, we have
\[\tau_{*}[\partial\widetilde{u}(g,h,k)]_{0}=\tilde{\Delta}_{\tau}(\partial\widetilde{u}(g,h,k))=\partial\mu(g,h,k),\]
which shows (1) and (2).
_Remark 3.13_.: Part (1) above, together with the universal coefficient theorem, shows
\[(\tau_{*})_{*}\kappa^{3}(\alpha,u)\in\mathrm{Ext}(H_{2}(G),\tau_{*}K_{0}(A)) \subset H^{3}(G,\tau_{*}K_{0}(A)).\]
Note that \(\mathrm{Ext}(H_{2}(G),\tau_{*}K_{0}(A))\) is quite often a small subgroup of \(H^{3}(G,\tau_{*}K_{0}(A))\). For example, it is trivial if either \(H_{2}(G)\) is free, e.g. \(G=\mathbb{Z}^{n}\), or \(\tau_{*}K_{0}(A)\) is divisible, e.g. \(A=M_{\mathbb{Q}}\). This is in sharp contrast to the case of Kirchberg algebras, for which we are going to show in the next section that the invariant \(\kappa^{3}\) has rich range.
## 4 Strongly self-absorbing Kirchberg algebras
### Semigroup \(\mathcal{F}_{A}(G)\)
Recall that a unital separable C\({}^{*}\)-algebra \(A\) is strongly self-absorbing if there exists an isomorphism \(\psi:A\to A\otimes A\) such that \(\psi\) is approximately unitarily equivalent to the map \(l:A\to A\otimes A\), \(l(x)=x\otimes 1_{A}\). The notion of strongly self-absorbing C\({}^{*}\)-algebras was introduced in [48], and plays a very important role in the classification theory of amenable C\({}^{*}\)-algebras. The reader is referred to [7, Section 2] for their basic properties. Following [6], we denote by \(\mathcal{D}_{pi}\) the class of strongly self-absorbing Kirchberg algebras in the bootstrap category. Throughout this section, we assume \(A\in\mathcal{D}_{pi}\) unless otherwise stated. Thus \(A\) is isomorphic to either \(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\), with possibly empty \(\mathfrak{P}\), or \(\mathcal{O}_{2}\). We identify \(K_{0}(A)\) with a subring of \(\mathbb{R}\).
Let \(G\) be a countable discrete group. We denote by \(\mathcal{F}_{A}(G)\) the set of conjugacy classes of \(G\)-kernels \(\alpha:G\to\mathrm{Out}(A)\). Using the fact that \(A\otimes A\) is isomorphic to \(A\), we can introduce a commutative semigroup structure into \(\mathcal{F}_{A}(G)\) by
\[[\alpha]+[\beta]=[\alpha\otimes\beta].\]
Then \(\widetilde{\mathrm{ob}}\) induces a semigroup homomorphism from \(\mathcal{F}_{A}(G)\) into \(H^{3}(G,K_{0}^{\#}(A))\), which we denote by the same symbol \(\widetilde{\mathrm{ob}}\) by abusing notation.
The purpose of this section is to show that \(\mathcal{F}_{A}(G)\) is a group in a good situation, and determine its group structure up to extension. We warn the reader that \(\mathcal{F}_{A}(G)\) is not necessarily a group in general. For example, the semigroup \(\mathcal{F}_{\mathcal{O}_{2}}(G)\) cannot be a group for finite non-trivial \(G\) because a \(G\)-action with the Rohlin property absorbs all the other actions by tensor product (see [16, Theorem 4.2]).
From now on, we assume that \(G\) is infinite.
### Semigroup \(\mathcal{E}_{A}(G)\)
Before working on \(\mathcal{F}_{A}(G)\), we first need to determine the structure of its cocycle action analogue \(\mathcal{E}_{A}(G)\). We denote by \(\mathcal{E}_{A}(G)\) the set of cocycle conjugacy classes of outer cocycle actions \((\alpha,u)\) of \(G\) on \(A\). In a similar way as above, we can introduce a semigroup structure into \(\mathcal{E}_{A}(G)\) by tensor product so that the forgetful map gives a semigroup homomorphism \(f:\mathcal{E}_{A}(G)\to\mathcal{F}_{A}(G)\). The purpose of this subsection is to give a topological interpretation of the invariant \(\kappa^{3}(\alpha,u)\) defined on \(\mathcal{E}_{A}(G)\).
Recall that we denote \(A^{s}=A\otimes\mathbb{K}\). Let \(\mathrm{Aut}_{0}(A^{s})\) be the connected component of \(\mathrm{id}\) in \(\mathrm{Aut}(A^{s})\). For \(\gamma\in\mathrm{Aut}(A^{s})\), we see that it is in \(\mathrm{Aut}_{0}(A^{s})\) if and only if \(\gamma_{*}[1_{A}]_{0}=[1_{A}]_{0}\in K_{0}(A^{s})\). If a \(G\)-action \(\gamma\) on \(A^{s}\) satisfies \(\gamma_{g}\in\mathrm{Aut}_{0}(A^{s})\) for every \(g\in G\), we say that \(\gamma\) is a \(G\)-action via \(\mathrm{Aut}_{0}(A^{s})\). We say that two such actions \(\gamma_{1}\) and \(\gamma_{2}\) are \(KK\)-trivially cocycle conjugate if they are cocycle conjugate and the conjugation map \(\theta\) can be taken from \(\mathrm{Aut}_{0}(A^{s})\). We need to generalize this notion to the case where \(\gamma_{1}\) is a \(G\)-action on \(A^{s\otimes m}\) and \(\gamma_{2}\) is a \(G\)-action on \(A^{s\otimes n}\). In this
case, we say that \(\gamma_{1}\) and \(\gamma_{2}\) are \(KK\)-trivially cocycle conjugate if they are cocycle conjugate via a conjugation map \(\theta:A^{s\otimes m}\to A^{s\otimes n}\) satisfying \(\theta_{*}[1_{A^{\otimes m}}]_{0}=[1_{A^{\otimes n}}]_{0}\) in \(K_{0}(A^{s\otimes n})\). We denote by \(\mathcal{E}^{\prime}_{A}(G)\) the set of \(KK\)-trivially cocycle conjugacy classes of outer \(G\)-actions via \(\operatorname{Aut}_{0}(A^{s})\). We introduce a semigroup structure into \(\mathcal{E}^{\prime}_{A}(G)\) by tensor product as before.
We first show that \(\mathcal{E}_{A}(G)\) and \(\mathcal{E}^{\prime}_{A}(G)\) are naturally isomorphic. For this purpose, first note that we may allow outer cocycle actions \((\beta,V)\) of \(G\) on \(A^{s}\) with \(\beta_{g}\in\operatorname{Aut}_{0}(A^{s})\) and \(V_{g,h}\in U(M(A^{s}))\) for all \(g,h\in G\) in the definition of \(\mathcal{E}^{\prime}_{A}(G)\) because such cocycle actions are always equivalent to genuine actions (this essentially follows from the proof of [42, part II, Theorem 4.1.3] in the von Neumann algebra case). Thus we can define a semigroup homomorphism from \(\mathcal{E}_{A}(G)\) to \(\mathcal{E}^{\prime}_{A}(G)\) sending \([(\alpha,u)]\) to \([(\alpha\otimes\operatorname{id}_{\mathbb{K}},u\otimes 1)]\).
**Lemma 4.1**.: _The map \(\mathcal{E}_{A}(G)\to\mathcal{E}^{\prime}_{A}(G)\) sending \([(\alpha,u)]\) to \([(\alpha\otimes\operatorname{id}_{\mathbb{K}},u\otimes 1)]\) is a semigroup isomorphism._
Proof.: First we show that the map is a surjection. Let \((\gamma,V)\) be a cocycle \(G\)-action on \(A^{s}\) satisfying \(\gamma_{g}\in\operatorname{Aut}_{0}(A^{s})\). We choose a system of matrix units \(\{E_{ij}\}_{i,j\in\mathbb{N}}\) in \(\mathbb{K}\), with \(E_{11}\) a minimal projection in \(\mathbb{K}\), such that
\[\sum_{i=1}^{\infty}E_{ii}=1\]
holds, the sum converging in the strong operator topology (or the strict topology in \(M(\mathbb{K})\)). For each \(g\in G\), we can choose a unitary \(W_{g}\in U(M(A^{s}))\) satisfying \(W_{g}\gamma_{g}(1_{A}\otimes E_{ij})W_{g}^{*}=1_{A}\otimes E_{ij}\) for all \(i,j\). Such a unitary \(W_{g}\) exists because there exists a partial isometry \(w_{g}\in A^{s}\) satisfying \(w_{g}^{*}w_{g}=\gamma_{g}(1_{A}\otimes E_{11})\) and \(w_{g}w_{g}^{*}=1\otimes E_{11}\) thanks to \(\gamma_{g*}[1_{A}\otimes E_{11}]_{0}=[1_{A}\otimes E_{11}]_{0}\), and
\[W_{g}=\sum_{i=1}^{\infty}(1\otimes E_{i1})w_{g}\gamma_{g}(1\otimes E_{1i})\]
converges in the strict topology of \(M(A^{s})\). Then \(\operatorname{Ad}W_{g}\circ\gamma_{g}\) leaves \(1_{A}\otimes E_{ij}\) invariant for all \(i,j\), and it is of the form \(\alpha_{g}\otimes\operatorname{id}_{\mathbb{K}}\). Since \(W_{g}\gamma_{g}(W_{h})V_{g,h}W_{gh}^{*}\) commutes with \(1_{A}\otimes E_{ij}\) for all \(i,j\), it is of the form \(u_{g,h}\otimes 1\). Thus \((\gamma,V)\) is equivalent to \((\alpha\otimes\operatorname{id}_{\mathbb{K}},u\otimes 1)\).
Next we show that the map is injective. Assume that \((\alpha,u)\) and \((\beta,v)\) are cocycle actions of \(G\) such that \((\alpha\otimes\operatorname{id}_{\mathbb{K}},u\otimes 1)\) and \((\beta\otimes\operatorname{id}_{\mathbb{K}},v\otimes 1)\) are \(KK\)-trivially cocycle conjugate. Then there exists \(\theta\in\operatorname{Aut}_{0}(A^{s})\) such that \((\theta\circ(\alpha\otimes\operatorname{id})\circ\theta^{-1},\theta(u\otimes 1))\) and \((\beta\otimes\operatorname{id}_{\mathbb{K}},v\otimes 1)\) are equivalent. As above we may assume that \(\theta\) is of the form \(\theta=\theta_{0}\otimes\operatorname{id}_{\mathbb{K}}\) by perturbing \(\theta\) by an inner automorphism. Thus we see that \((\theta_{0}\circ\alpha\circ\theta_{0}^{-1}\otimes\operatorname{id}_{\mathbb{K}},\theta_{0}(u)\otimes 1)\) and \((\beta\otimes\operatorname{id}_{\mathbb{K}},v\otimes 1)\) are equivalent, and there exist unitaries \(W_{g}\in U(M(A^{s}))\) satisfying
\[\operatorname{Ad}W_{g}\circ(\beta_{g}\otimes\operatorname{id}_{\mathbb{K}})= \theta_{0}\circ\alpha_{g}\circ\theta_{0}^{-1}\otimes\operatorname{id}_{ \mathbb{K}},\]
\[W_{g}(\beta_{g}\otimes\operatorname{id}_{\mathbb{K}})(W_{h})(v(g,h)\otimes 1)W_{gh }^{*}=\theta_{0}(u(g,h))\otimes 1.\]
The first equation implies that \(W_{g}\) commutes with \(1_{A}\otimes\mathbb{K}\) and it is of the form \(W_{g}=w_{g}\otimes 1\). Thus \((\alpha,u)\) and \((\beta,v)\) are cocycle conjugate.
For our purpose, it is more convenient to have an explicit formula for a genuine action equivalent to \((\alpha\otimes\operatorname{id}_{\mathbb{K}},u\otimes 1)\), which is given by the second dual action in the Takesaki-Takai type duality for cocycle actions (see [38]). Although traditionally the second dual action is described in terms of the right regular representation of \(G\), here we give an action inner conjugate to it using the left regular representation \(\lambda\) in order to simplify the notation in the proof of Theorem 4.2. Let \(\{E_{g,h}\}_{g,h}\) be the canonical system of matrix units in \(\mathbb{K}(\ell^{2}(G))\). We set
\[V_{g}=(\sum_{s\in G}\alpha_{s^{-1}}^{-1}(u(s^{-1},g)^{-1})\otimes E_{s,s})(1_{ A}\otimes\lambda_{g})\in U(M(A\otimes\mathbb{K}(\ell^{2}(G)))). \tag{4.1}\]
Then
\[V_{g}(\alpha_{g}\otimes\operatorname{id})(V_{h})(u_{g,h}\otimes 1)V_{gh}^{-1}=1,\]
and we get a genuine action of \(G\) on \(A^{s}\) given by
\[\hat{\hat{\alpha}}_{g}=\operatorname{Ad}V_{g}\circ(\alpha_{g}\otimes \operatorname{id}_{\mathbb{K}}).\]
For a \(G\)-action \(\alpha\) on \(A^{s}\), we define a principal \(\operatorname{Aut}(A^{s})\)-bundle \(\mathcal{P}_{\alpha}\) over the classifying space \(BG\) by
\[\mathcal{P}_{\alpha}=(EG\times\operatorname{Aut}(A^{s}))/G,\]
where the \(G\)-action above is given by \(g\cdot(x,\gamma)=(g\cdot x,\alpha_{g}\circ\gamma)\). If moreover \(\alpha\) is via \(\operatorname{Aut}_{0}(A^{s})\), we define a principal \(\operatorname{Aut}_{0}(A^{s})\)-bundle \(\mathcal{P}_{\alpha}^{(0)}\) over \(BG\) by
\[\mathcal{P}_{\alpha}^{(0)}=(EG\times\operatorname{Aut}_{0}(A^{s}))/G.\]
Since the first non-trivial homotopy group of \(\operatorname{Aut}(A^{s})\) is \(\pi_{0}(\operatorname{Aut}(A^{s}))\cong K_{0}(A)^{\times}\), the primary obstruction to a continuous section of \(\mathcal{P}_{\alpha}\to BG\) is in
\[H^{1}(BG,\pi_{0}(\operatorname{Aut}(A^{s})))\cong\operatorname{Hom}(G,\pi_{0}( \operatorname{Aut}(A^{s}))),\]
which is naturally identified with the composition of \(\alpha\) with the quotient map from \(\operatorname{Aut}(A^{s})\) to \(\pi_{0}(\operatorname{Aut}(A^{s}))\).
Since the first non-trivial homotopy group of \(\operatorname{Aut}_{0}(A^{s})\) is \(\pi_{2}(\operatorname{Aut}_{0}(A^{s}))\cong K_{0}(A)\), the primary obstruction to a continuous section of \(\mathcal{P}_{\alpha}^{(0)}\to BG\) is in
\[H^{3}(BG,\pi_{2}(\operatorname{Aut}_{0}(A^{s})))\cong H^{3}(G,K_{0}(A)).\]
To compute the primary obstruction in terms of a cocycle action, we adopt Milgram's geometric bar construction [34] as a model of \(EG\). Let \(\Delta^{n}\) be the geometric \(n\)-simplex
\[\Delta^{n}=\{(t_{0},t_{1},\cdots,t_{n})\in\mathbb{R}^{n+1};\;\sum_{i=0}^{n}t_{i }=1,\;t_{i}\geq 0\}.\]
We define \(d^{i}:\Delta^{n-1}\rightarrow\Delta^{n}\) for \(0\leq i\leq n\), and \(s^{i}:\Delta^{n+1}\rightarrow\Delta^{n}\) for \(0\leq i\leq n\) by
\[d^{i}(t_{0},\cdots,t_{n-1})=(t_{0},\cdots,t_{i-1},0,t_{i},\cdots,t_{n-1}),\]
\[s^{i}(t_{0},\cdots,t_{n+1})=(t_{0},\cdots,t_{i-1},t_{i}+t_{i+1},t_{i+2},\cdots, t_{n+1}).\]
Then
\[EG=(\coprod_{k=0}^{\infty}G\times\Delta^{k}\times G^{k})/\sim,\]
where the equivalence relation \(\sim\) is generated by
\[(g_{0};d^{i}(t);g_{1},\cdots,g_{n})\sim\left\{\begin{array}{ll}(g_{0}g_{1};t ;g_{2},\cdots,g_{n}),&i=0\\ (g_{0};t;g_{1},\cdots,g_{i}g_{i+1},\cdots,g_{n}),&1\leq i\leq n-1\\ (g_{0};t;g_{1},\cdots,g_{n-1}),&i=n\end{array}\right.\]
\[(g_{0};t;g_{1},\cdots,g_{i-1},e,g_{i+1},\cdots,g_{n})\sim(g_{0};s^{i}(t);g_{1},\cdots,g_{i-1},g_{i+1},\cdots,g_{n}),\]
and a \(G\)-action is given by \(g\cdot(g_{0};t;g_{1},\cdots,g_{n})=(gg_{0};t;g_{1},\cdots,g_{n})\). The \(n\)-skeleton of \(EG\) is
\[E_{n}G=(\coprod_{k=0}^{n}G\times\Delta^{k}\times G^{k})/\sim,\]
and we set \(B_{n}G=E_{n}G/G\).
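For example, \(B_{0}G\) is a single point, and \(B_{1}G\) is a wedge of circles indexed by \(G\setminus\{e\}\): each 1-cell \((e:\Delta^{1}:g)\) with \(g\neq e\) has both endpoints attached to the unique 0-cell, while the cells with \(g=e\) are degenerate.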
We can regard \((g_{0}:\Delta^{n}:g_{1},g_{2},\cdots,g_{n})\) as an \(n\)-simplex whose vertices are labeled by
\[(g_{0},g_{0}g_{1},g_{0}g_{1}g_{2},\cdots,g_{0}g_{1}\cdots g_{n})\]
if \(g_{i}\neq e\) for \(i=1,2,\cdots,n\).
**Theorem 4.2**.: _Let \(A\) be a strongly self-absorbing C\({}^{*}\)-algebra with trivial \(K_{1}(A)\) and let \(G\) be a countably infinite discrete group. Let \((\alpha,u)\) be a cocycle action of \(G\) on \(A\). Then the primary obstruction to a continuous section of \(\mathcal{P}^{(0)}_{\hat{\hat{\alpha}}}\to BG\) is \(\kappa^{3}(\alpha,u)\). (We do not assume that \(A\) is either a Kirchberg algebra or in the bootstrap category. We do not assume that the cocycle action \((\alpha,u)\) is outer.)_
Proof.: Note that a partial section of the fiber-bundle \(\mathcal{P}^{0}_{\hat{\hat{\alpha}}}\to BG\) defined on \(B_{n}G\) is identified with a continuous map \(\varphi:E_{n}G\to\operatorname{Aut}_{0}(A^{s})\) satisfying \(\varphi(g\cdot x)=\hat{\hat{\alpha}}_{g}\circ\varphi(x)\) because every partial section is of the form \(x\mapsto[(x,\varphi(x))]\). To compute the primary obstruction class to a section of \(\mathcal{P}^{0}_{\hat{\hat{\alpha}}}\to BG\), our task is to choose a partial section \(\varphi:E_{2}G\to\operatorname{Aut}_{0}(A^{s})\), and compute the element in \(\pi_{2}(\operatorname{Aut}(A^{s}))\) arising from the restriction of \(\varphi\) to \(\partial(e:\Delta^{3}:g_{1},g_{2},g_{3})\) (see [8, Section 7.6]).
Let \(Pr(A^{s})_{[1_{A}]_{0}}\) be the set of projections in \(A^{s}\) whose \(K_{0}\)-classes are equal to \([1_{A}]_{0}\). Since the map
\[\operatorname{Aut}_{0}(A^{s})\ni\gamma\to\gamma(1\otimes E_{e,e})\in Pr(A^{s} )_{[1_{A}]_{0}}\]
gives a homotopy equivalence between \(\operatorname{Aut}_{0}(A^{s})\) and \(Pr(A^{s})_{[1_{A}]_{0}}\) (see [7, Corollary 2.8]), we may and do consider a \(G\)-equivariant map from \(E_{2}G\) to \(Pr(A^{s})_{[1_{A}]_{0}}\) instead of a partial section \(E_{2}G\to\operatorname{Aut}_{0}(A^{s})\). Here the \(G\)-action on \(Pr(A^{s})_{[1_{A}]_{0}}\) is given by \(\hat{\hat{\alpha}}\).
Since \(G\) is torsion-free, we may and do normalize \((\alpha,u)\) so that \(\alpha_{e}=\operatorname{id}\), \(\alpha_{g^{-1}}=\alpha_{g}^{-1}\), and \(u(e,g)=u(g,e)=u(g,g^{-1})=1\) hold. Thus we have \(\alpha_{s^{-1}}^{-1}(u(s^{-1},g)^{-1})=u(s,s^{-1}g)\). Let
\[W_{g}=\sum_{s\in G}u(s,s^{-1}g)\otimes E_{s,s}.\]
Then \(\hat{\hat{\alpha}}_{g}=\operatorname{Ad}W_{g}\circ(\alpha_{g}\otimes \operatorname{Ad}\lambda_{g})\). We often use the equality \(u(g,h)=u(gh,h^{-1})^{*}\).
For \(\eta\in\ell^{2}(G)\setminus\{0\}\), we denote by \(P_{\eta}\in\mathbb{K}(\ell^{2}(G))\) the projection onto \(\mathbb{C}\eta\). Let \(\{\delta_{g}\}_{g\in G}\) be the canonical orthonormal basis of \(\ell^{2}(G)\). Then \(E_{g,g}=P_{\delta_{g}}\).
Now we construct a \(G\)-equivariant map \(\varphi:E_{2}G\to Pr(A^{s})_{[1_{A}]_{0}}\). First we define \(\varphi((g:\Delta^{0}))=1_{A}\otimes E_{g,g}\), which is an equivariant map from \(E_{0}G\) to \(Pr(A^{s})_{[1_{A}]_{0}}\). Next we extend \(\varphi\) to \((e:\Delta^{1}:g)\) by
\[\varphi((e:(t_{0},t_{1}):g))=1_{A}\otimes P_{t_{0}\delta_{e}+t_{1}\delta_{g}}.\]
To make an equivariant extension of \(\varphi\) to \(E_{1}G\), we set
\[\varphi((g_{0}:(t_{0},t_{1}):g_{1}))=\hat{\hat{\alpha}}_{g_{0}}(\varphi((e:(t_ {0},t_{1}):g_{1})))=\operatorname{Ad}W_{g_{0}}(1\otimes P_{t_{0}\delta_{g_{0} }+t_{1}\delta_{g_{0}g_{1}}}).\]
Now we extend \(\varphi\) to \((e:\Delta^{2}:g_{1},g_{2})\). On the boundary of \((e:\Delta^{2}:g_{1},g_{2})\), we already have
\[\varphi((e:(t_{0},t_{1},0):g_{1},g_{2}))=1_{A}\otimes P_{t_{0}\delta_{e}+t_{1 }\delta_{g_{1}}},\]
\[\varphi((e:(t_{0},0,t_{2}):g_{1},g_{2}))=1_{A}\otimes P_{t_{0}\delta_{e}+t_{2 }\delta_{g_{1}g_{2}}},\]
\[\varphi((e:(0,t_{1},t_{2}):g_{1},g_{2}))=W_{g_{1}}(1_{A}\otimes P_{t_{1}\delta _{g_{1}}+t_{2}\delta_{g_{1}g_{2}}})W_{g_{1}}^{*}.\]
Let
\[\psi_{g_{1},g_{2}}((e:(t_{0},t_{1},t_{2}):g_{1},g_{2}))=1_{A}\otimes P_{t_{0} \delta_{e}+t_{1}\delta_{g_{1}}+t_{2}\delta_{g_{1}g_{2}}},\]
which is a continuous map from \((e:\Delta^{2}:g_{1},g_{2})\) to \(Pr(A^{s})_{[1_{A}]_{0}}\). We need to deform \(\psi_{g_{1},g_{2}}\) on a neighborhood of \((g_{1}:\Delta^{1}:g_{2})\) to extend \(\varphi\). We choose a continuous path \(\{\widetilde{u}(g_{1},g_{2})(t)\}_{t\in[0,1]}\) from \(1\) to \(u(g_{1},g_{2})\) in \(U(A)\), and set
\[\widetilde{W}_{g}(t)=\sum_{s\in G}\widetilde{u}(s,s^{-1}g)(t)\otimes E_{s,s}.\]
We may and do assume \(\widetilde{u}(g,h)=\widetilde{u}(gh,h^{-1})^{*}\).
We choose a smooth convex curve \(C\) connecting \(g_{1}\) and \(g_{1}g_{2}\) inside \((e:\Delta^{2}:g_{1},g_{2})\) as in Figure 2.
Let \(a\in(g_{1}:\Delta^{1}:g_{2})\) and let \(b\) be the intersection of \(C\) and the line segment \(ea\). We extend \(\varphi\) to the region \(D_{1}\) (including the boundary) below \(C\) and the region \(D_{2}\) above \(C\) separately. We first take a homeomorphism \(h_{1}:D_{1}\to(e:\Delta^{2}:g_{1},g_{2})\) whose restriction to the line segment \(eb\) is an affine map from \(eb\) to \(ea\). Then we set \(\varphi((e:t:g_{1},g_{2}))=\psi_{g_{1},g_{2}}\circ h_{1}(t)\) for \(t\in D_{1}\).
For \(D_{2}\), we define \(\varphi\) as follows. Let \(c\) be the point that internally divides the line segment \(ba\) in the ratio \(r:1-r\). Then we set \(\varphi(c)=\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(\psi_{g_{1},g_{2}}(a))\). In other words, we parametrize \(D_{2}\) by \((t,r)\in[0,1]\times[0,1]\) with each of \(\{0\}\times[0,1]\) and \(\{1\}\times[0,1]\) collapsed to one point respectively, and define \(\varphi\) by
\[\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(1_{A}\otimes P_{(1-t)\delta_{g_{1}} +t\delta_{g_{1}g_{2}}}).\]
Then we get a continuous extension of \(\varphi\) to \((e:\Delta^{2}:g_{1},g_{2})\). We extend \(\varphi\) to the whole \(E_{2}G\) by setting
\[\varphi((g_{0}:t:g_{1},g_{2}))=\hat{\hat{\alpha}}_{g_{0}}(\varphi((e:t:g_{1},g _{2}))).\]
Our task is to compute the element of \(\pi_{2}(Pr(A^{s})_{[1_{A}]_{0}})\) determined by the restriction of \(\varphi\) to \(\partial(e:\Delta^{3}:g_{1},g_{2},g_{3})\). For this, it suffices to compute
\[[\varphi|_{\partial(e:\Delta^{3}:g_{1},g_{2},g_{3})}]_{0}-[1_{A}]_{0}\in K_{0} (S^{2}A^{s})\cong K_{0}(A),\]
by the Bott periodicity.
We first deform \(\varphi\) on \((e:\Delta^{2}:g_{1}g_{2},g_{3})\cup(g_{1}:\Delta^{2}:g_{2},g_{3})\). We choose a homeomorphism \(h_{2}\) of it as in Figure 3, leaving the boundary invariant, such that \(h_{2}(D_{i})=D_{i}^{\prime}\), \(i=3,4,5\).
We can deform \(\varphi\) into \(\varphi_{1}=\varphi\circ h_{2}\) on \((e:\Delta^{2}:g_{1}g_{2},g_{3})\cup(g_{1}:\Delta^{2}:g_{2},g_{3})\) so that we have the following description of \(\varphi_{1}\) on each region. On \((g_{1}:\Delta^{2}:g_{2},g_{3})\) we have \(\varphi_{1}=\hat{\hat{\alpha}}_{g_{1}}\circ\psi_{g_{2},g_{3}}\). On \(D_{3}\), there exists a homeomorphism \(h_{3}:D_{3}\to(e:\Delta^{2}:g_{1}g_{2},g_{3})\) satisfying \(\varphi_{1}|_{D_{3}}=\psi_{g_{1}g_{2},g_{3}}\circ h_{3}\). On \(D_{4}\), the map \(\varphi_{1}\) is described by
\[\operatorname{Ad}\widetilde{W}_{g_{1}g_{2}}(r)(1_{A}\otimes P_{(1-t)\delta_{ g_{1}g_{2}}+t\delta_{g_{1}g_{2}g_{3}}}),\]
as in the case of \(D_{2}\). We have similar description of \(\varphi_{1}\) on \(D_{5}\). We put \(\varphi_{1}=\varphi\) on \((e:\Delta^{2}:g_{1},g_{2})\cup(e:\Delta^{2}:g_{1},g_{2}g_{3})\).
Secondly we deform \(\varphi_{1}\) on
\[(e:\Delta^{2}:g_{1},g_{2})\cup(e:\Delta^{2}:g_{1},g_{2}g_{3})\cup(g_{1}:\Delta ^{2}:g_{2},g_{3}).\]
Recall that \(\varphi_{1}\) on \(D_{1}\) and \(D_{6}\) as in Figure 4 are compositions of suitable homeomorphisms and \(\psi_{g_{1},g_{2}}\) and \(\psi_{g_{1},g_{2}g_{3}}\) respectively, and \(\varphi_{1}\) on \(D_{2}\) and \(D_{7}\) are described by
\[\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(1_{A}\otimes P_{(1-t)\delta_{g_{1}}+ t\delta_{g_{1}g_{2}}}),\]
\[\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(1_{A}\otimes P_{(1-t)\delta_{g_{1}}+ t\delta_{g_{1}g_{2}g_{3}}}).\]
On \((g_{1}:\Delta^{2}:g_{2},g_{3})\), we have \(\varphi_{1}=\hat{\hat{\alpha}}_{g_{1}}\circ\psi_{g_{2},g_{3}}\). We can deform \(\varphi_{1}\) into \(\varphi_{2}\) so that the same description is possible for \(D_{1}^{\prime}\), \(D_{2}^{\prime}\), \(D_{6}^{\prime}\), \(D_{7}^{\prime}\) as in Figure 5.
We further deform \(\varphi_{2}\) into \(\varphi_{3}\) by applying the following deformation:
\[P_{t_{0}\delta_{e}+t_{1}\delta_{g_{1}}+t_{2}\delta_{g_{1}g_{2}}+t_{3}\delta_{ g_{1}g_{2}g_{3}}}\mapsto P_{t_{0}\delta_{e}+(1-s)t_{1}\delta_{g_{1}}+(t_{2}+ \frac{st_{1}}{2})\delta_{g_{1}g_{2}}+(t_{3}+\frac{st_{1}}{2})\delta_{g_{1}g_{2 }g_{3}}},\]
where \(0\leq s\leq 1\) is a deformation parameter. Note that this does not deform \(\varphi_{2}\) on the boundary.
We further deform \(\varphi_{3}\) into \(\varphi_{4}\) so that \(\varphi_{4}\) is described as follows. There exists a homeomorphism \(h_{4}:D_{8}\to(e:\Delta^{2}:g_{1}g_{2},g_{3})\) such that \(\varphi_{4}=\psi_{g_{1}g_{2},g_{3}}\circ h_{4}\) on \(D_{8}\). On \(D_{9}\), the map \(\varphi_{4}\) is described by
\[\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(1_{A}\otimes P_{(1-t)\delta_{g_{1}g _{2}}+t\delta_{g_{1}g_{2}g_{3}}}).\]
We put \(\varphi_{4}=\varphi_{1}\) on \((e:\Delta^{2}:g_{1}g_{2},g_{3})\).
Now we deform \(\varphi_{4}\) on \(D_{9}\cup D_{5}\cup D_{4}\). In the following argument, homotopy of unitaries is understood after they are cut by the projection \(1_{A}\otimes P_{\delta_{g_{1}g_{2}}}+1_{A}\otimes P_{\delta_{g_{1}g_{2}g_{3}}}\), and no issue of the topology of \(U(M(A^{s}))\) occurs. On \(D_{9}\), the projection path \(p(t)=1_{A}\otimes P_{(1-t)\delta_{g_{1}g_{2}}+t\delta_{g_{1}g_{2}g_{3}}}\) is deformed as \(\operatorname{Ad}\widetilde{W}_{g_{1}}(r)(p(t))\). On \(D_{5}\), the projection path \(W_{g_{1}}p(t)W_{g_{1}}^{*}\) is deformed as
\[\operatorname{Ad}(W_{g_{1}}(\alpha_{g_{1}}\otimes\operatorname{Ad}(\lambda_{ g_{1}}))(\widetilde{W}_{g_{2}})(r))(p(t)).\]
Note that the concatenation of the two unitary paths \(\{\widetilde{W}_{g_{1}}(r)\}_{r\in[0,1]}\) and
\[\{W_{g_{1}}(\alpha_{g_{1}}\otimes\operatorname{Ad}(\lambda_{g_{1}}))( \widetilde{W}_{g_{2}})(r)\}_{r\in[0,1]}\]
is homotopic to
\[\{\widetilde{W}_{g_{1}}(r)(\alpha_{g_{1}}\otimes\operatorname{Ad}(\lambda_{ g_{1}}))(\widetilde{W}_{g_{2}})(r))\}_{r\in[0,1]},\]
and its endpoint is \(W_{g_{1}g_{2}}(u_{g_{1},g_{2}}^{*}\otimes 1)\). On \(D_{4}\), the projection path \(p(t)\) is deformed as \(\operatorname{Ad}\widetilde{W}_{g_{1}g_{2}}(r)(p(t))\) (in the reversed direction), but we may replace it with
\[\operatorname{Ad}(\widetilde{W}_{g_{1}g_{2}}(r)(\widetilde{u}_{g_{1},g_{2}}(r )^{*}\otimes 1))(p(t)).\]
Now concatenation of the previous unitary path with the unitary path
\[\{\widetilde{W}_{g_{1}g_{2}}(1-r)(\widetilde{u}_{g_{1},g_{2}}(1-r)^{*}\otimes 1 )\}_{r\in[0,1]}\]
is homotopic to the unitary loop
\[\{(\widetilde{u}_{g_{1},g_{2}}(r)\otimes 1)\widetilde{W}_{g_{1}g_{2}}(r)^{*} \widetilde{W}_{g_{1}}(r)(\alpha_{g_{1}}\otimes\operatorname{Ad}(\lambda_{g_{1 }}))(\widetilde{W}_{g_{2}})(r))\}_{r\in[0,1]}.\]
Cutting this by the projection \(1_{A}\otimes P_{\delta_{g_{1}g_{2}}}+1_{A}\otimes P_{\delta_{g_{1}g_{2}g_{3}}}\), we get
\[1_{A}\otimes P_{\delta_{g_{1}g_{2}}}+\partial\widetilde{u}(g_{1},g_{2},g_{3}) (r)^{*}\otimes P_{\delta_{g_{1}g_{2}g_{3}}}.\]
Now we deform \(\varphi_{4}\) into \(\varphi_{5}\) so that its restriction to \(D_{9}\cup D_{5}\cup D_{4}\) is given by
\[\operatorname{Ad}\left(1_{A}\otimes P_{\delta_{g_{1}g_{2}}}+\partial\widetilde{u}(g_{1},g_{2},g_{3})(r)^{*}\otimes P_{\delta_{g_{1}g_{2}g_{3}}}\right)(p(t)),\]
and leave \(\varphi_{4}|_{D_{3}\cup D_{8}}\) undeformed. Since \(\varphi_{5}\) on \(D_{3}\) and on \(D_{8}\) are essentially the same, the homotopy group element we are looking for is determined by the restriction of \(\varphi_{5}\) to \(D_{9}\cup D_{5}\cup D_{4}\) by identifying the lower side of \(D_{9}\) and the lower side of \(D_{4}\).
From now on we identify \(\delta_{g_{1}g_{2}}\) and \(\delta_{g_{1}g_{2}g_{3}}\) with the canonical basis \(e_{1}\), \(e_{2}\) of \(\mathbb{C}^{2}\), and identify the corner of \(A^{s}\) by the projection \(1_{A}\otimes P_{e_{1}}+1_{A}\otimes P_{e_{2}}\) with \(M_{2}(A)\). Then the above computation shows that \(\varphi_{5}\) restricted to \(D_{9}\cup D_{5}\cup D_{4}\) is given by
\[\left(\begin{array}{cc}c(t)^{2}&c(t)s(t)\partial\widetilde{u}(g_{1},g_{2},g_{3})(r)\\ c(t)s(t)\partial\widetilde{u}(g_{1},g_{2},g_{3})(r)^{*}&s(t)^{2}\end{array}\right),\]
where \(c(t)=\frac{1-t}{\sqrt{(1-t)^{2}+t^{2}}}\), \(s(t)=\frac{t}{\sqrt{(1-t)^{2}+t^{2}}}\). Let
\[R(t)=\left(\begin{array}{cc}c(t)&-s(t)\\ s(t)&c(t)\end{array}\right),\]
\[Z(r)=\left(\begin{array}{cc}\partial\widetilde{u}(g_{1},g_{2},g_{3})(r)&0\\ 0&1_{A}\end{array}\right).\]
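Here one checks directly that
\[R(t)\left(\begin{array}{cc}1_{A}&0\\ 0&0\end{array}\right)R(t)^{*}=\left(\begin{array}{cc}c(t)^{2}&c(t)s(t)\\ c(t)s(t)&s(t)^{2}\end{array}\right),\]
so conjugating by \(Z(r)\) recovers the matrix displayed above.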
We have to decide the homotopy group element given by
\[\Phi(t,r)=Z(r)R(t)\left(\begin{array}{cc}1_{A}&0\\ 0&0\end{array}\right)R(t)^{*}Z(r)^{*}.\]
Since \(\Phi(0,r)=1_{A}\otimes P_{e_{1}}\) and \(\Phi(1,r)=1_{A}\otimes P_{e_{2}}\), it is more convenient for us to extend \(\Phi\) to \([0,2]\times[0,1]\) by setting
\[\Phi(t,r)=R(t-1)^{*}(1_{A}\otimes P_{e_{2}})R(t-1),\]
for \(1\leq t\leq 2\). This map is homotopic to
\[\Psi(t,r)=\left\{\begin{array}{ll}\widetilde{Z}(t,r)(1_{A}\otimes P_{e_{1}})\widetilde{Z}(t,r)^{*},&0\leq t\leq 1\\ (1_{A}\otimes P_{e_{1}}),&1\leq t\leq 2,\end{array}\right.\]
where
\[\widetilde{Z}(t,r)=R(t)^{*}\left(\begin{array}{cc}Z(r)&0\\ 0&1\end{array}\right)R(t)\left(\begin{array}{cc}1&0\\ 0&Z(r)^{*}\end{array}\right).\]
Note that \(\{\widetilde{Z}(t,\cdot)\}_{t\in[0,1]}\) is a unitary path in \(M_{2}(M_{2}(A))\) from \(\left(\begin{array}{cc}Z&0\\ 0&Z^{*}\end{array}\right)\) to the identity. Comparing the orientation of our \([0,2]\times[0,1]/\sim\) with the usual isomorphism \(\theta_{SA}:K_{1}(SA)\to K_{0}(A)\) (see [2, Theorem 8.2.2]), we finish the proof.
_Remark 4.3_.: Theorem 4.2 together with Remark 3.13 shows that Meyer's result [33, Theorem 3.10] does not hold for \(M_{\mathfrak{P}^{\infty}}\) or \(\mathcal{Z}\).
### Relationship with the Dadarlat-Pennig theory
From now on, we assume that \(G\) is amenable and there exists a finite CW-complex model of the classifying space \(BG\). We show under this condition that the semigroup \(\mathcal{E}^{\prime}_{A}(G)\) is naturally isomorphic to Dadarlat-Pennig's group \(\bar{E}^{1}_{A}(BG)\), and in particular \(\mathcal{E}_{A}(G)\) is a group.
Let \(X\) be a compact metric space. Dadarlat-Pennig's cohomology \(E^{1}_{A}(X)\) is the set of isomorphism classes of locally trivial continuous fields of \(A^{s}\) over \(X\), which is a group under tensor product over \(C(X)\). Equivalently, it can be defined to be the set of isomorphism classes of principal \(\operatorname{Aut}(A^{s})\)-bundles over \(X\), which is further identified with the homotopy set \([X,B\operatorname{Aut}(A^{s})]\). The reduced version \(\bar{E}^{1}_{A}(X)\) is defined by the set of isomorphism classes of principal \(\operatorname{Aut}_{0}(A^{s})\)-bundles over \(X\), or equivalently by \([X,B\operatorname{Aut}_{0}(A^{s})]\).
Let \(\mathcal{P}\) be a principal \(\operatorname{Aut}_{0}(A^{s})\)-bundle over \(X\). Then \(\mathcal{P}\times_{\operatorname{Aut}_{0}(A^{s})}\operatorname{Aut}(A^{s})\) is a principal \(\operatorname{Aut}(A^{s})\)-bundle, which gives a natural map from \(\bar{E}^{1}_{A}(X)\) to \(E^{1}_{A}(X)\). It was shown in [6, Proposition 2.3] that this map is injective and its image is the kernel of the primary obstruction map
\[\delta_{0}:E^{1}_{A}(X)\to H^{1}(X,\pi_{0}(\operatorname{Aut}(A^{s})))=H^{1}(X,K_{0}(A)^{\times}).\]
In particular, we can regard \(\bar{E}^{1}_{A}(X)\) as a subgroup of \(E^{1}_{A}(X)\) inheriting a group structure from \(E^{1}_{A}(X)\).
Let \(\alpha\) be a \(G\)-action on \(A^{s}\), and let \(\mathcal{P}_{\alpha}\) be the corresponding principal \(\operatorname{Aut}(A^{s})\)-bundle over \(BG\). The section algebra of the associated \(A^{s}\)-bundle \(\mathcal{P}_{\alpha}\times_{\operatorname{Aut}(A^{s})}A^{s}\) over \(BG\) is identified with
\[M_{\alpha}=\{f\in C^{b}(EG,A^{s});\;f(g\cdot x)=\alpha_{g}(f(x)),\;\forall x \in EG,\;\forall g\in G\},\]
which is a locally trivial continuous field of \(A^{s}\) over \(BG\).
**Lemma 4.4**.: _For two \(G\)-actions \(\alpha\) and \(\beta\) on \(A^{s}\), we have_
\[[\mathcal{P}_{\alpha}]+[\mathcal{P}_{\beta}]=[\mathcal{P}_{\alpha\otimes\beta}]\]
_in \(E^{1}_{A}(BG)\)._
Proof.: It suffices to show \(M_{\alpha}\otimes_{C(BG)}M_{\beta}=M_{\alpha\otimes\beta}\). The left-hand side is a subalgebra of the right-hand side, and equality can be shown by using a partition of unity.
Let \(\alpha\) and \(\beta\) be \(G\)-actions on \(A^{s}\) via \(\operatorname{Aut}_{0}(A^{s})\). Since \(\bar{E}^{1}_{A}(BG)\) can be regarded as a subgroup of \(E^{1}_{A}(BG)\), we also have
\[[\mathcal{P}^{0}_{\alpha}]+[\mathcal{P}^{0}_{\beta}]=[\mathcal{P}^{0}_{\alpha \otimes\beta}]\]
in \(\bar{E}^{1}_{A}(BG)\).
The author gave a conjecture in [17] whose special case says that the map \([\alpha]\mapsto[\mathcal{P}_{\alpha}]\) gives a bijection between the set of cocycle conjugacy classes of outer \(G\)-actions on \(A^{s}\) and \(E^{1}_{A}(BG)\). After partial results [18], [19], [20], the conjecture was recently solved affirmatively by combining Meyer's result [33, Theorem 3.10] for surjectivity and Gabe-Szabo's result [11, Theorem 6.2] for injectivity.
Since the primary obstruction \(\delta_{0}([\mathcal{P}_{\alpha}])\) is identified with the composition of \(\alpha\) and the quotient map from \(\operatorname{Aut}(A^{s})\) to \(\pi_{0}(\operatorname{Aut}(A^{s}))\), we have \([\mathcal{P}_{\alpha}]\in\ker\delta_{0}\) if and only if \(\alpha\) is via \(\operatorname{Aut}_{0}(A^{s})\). Thus every element in \(\bar{E}^{1}_{A}(BG)\) is given by \(\mathcal{P}^{0}_{\alpha}\) with an outer action \(\alpha\) via \(\operatorname{Aut}_{0}(A^{s})\). This gives a surjective semigroup homomorphism from \(\mathcal{E}^{\prime}_{A}(G)\) onto \(\bar{E}^{1}_{A}(BG)\).
**Theorem 4.5**.: _Let \(A\) be a strongly self-absorbing Kirchberg algebra, and let \(G\) be a countable discrete amenable group with a finite CW-complex model of the classifying space \(BG\). Let \(\alpha\) and \(\beta\) be outer \(G\)-actions on \(A^{s}\) via \(\operatorname{Aut}_{0}(A^{s})\). Then the following conditions are equivalent:_
1. \(\alpha\) _and_ \(\beta\) _are_ \(KK\)_-trivially cocycle conjugate._
2. \(\alpha\) _and_ \(\beta\) _are cocycle conjugate._
3. \([\mathcal{P}^{0}_{\alpha}]=[\mathcal{P}^{0}_{\beta}]\) _in_ \(\bar{E}^{1}_{A}(BG)\)_._
4. \([\mathcal{P}_{\alpha}]=[\mathcal{P}_{\beta}]\) _in_ \(E^{1}_{A}(BG)\)_._
Proof.: The implication from (1) to (2) is trivial. We already mentioned the equivalence of (2) and (4). The equivalence of (3) and (4) follows from [6, Proposition 2.3].
We assume (3) and show (1) now. From our definition of \(\mathcal{P}^{0}_{\alpha}\) and \(\mathcal{P}^{0}_{\beta}\), an isomorphism from \(\mathcal{P}^{0}_{\alpha}\) to \(\mathcal{P}^{0}_{\beta}\) is given by a continuous map \(\Phi:EG\to\operatorname{Aut}_{0}(A^{s})\) satisfying \(\Phi(g\cdot x)=\beta_{g}\circ\Phi(x)\circ\alpha_{g}^{-1}\) for all \(x\in EG\) and all \(g\in G\). Indeed, with such a map, we can define an isomorphism \(\mathcal{P}^{0}_{\alpha}\to\mathcal{P}^{0}_{\beta}\) by
\[[(x,\gamma)]\mapsto[(x,\Phi(x)\circ\gamma)].\]
On the other hand, we can show that every isomorphism is of this form by using local trivialization. Pointwise application of \(\Phi\) gives an isomorphism from \(M_{\alpha}\) to \(M_{\beta}\). Now the proof of [33, Theorem 3.10] shows that this isomorphism gives rise to a \(KK^{G}\)-equivalence from \((A^{s},\alpha)\) to \((A^{s},\beta)\) whose underlying \(KK\)-equivalence is [id], and [11, Theorem 6.2] shows that \(\alpha\) and \(\beta\) are \(KK\)-trivially cocycle conjugate.
**Corollary 4.6**.: _Let the assumptions on \(A\) and \(G\) be as above. The map \([(\alpha,u)]\mapsto[\mathcal{P}^{0}_{\hat{\alpha}}]\) gives a group isomorphism from \(\mathcal{E}_{A}(G)\) onto \(\bar{E}_{A}(BG)\)._
Note that if \(\gamma\) is an outer \(G\)-action on \(A\), then \(\mathcal{P}^{0}_{\gamma\otimes\operatorname{id}_{\mathbb{K}}}\) is a trivial bundle as \(\operatorname{Aut}(A)\) is contractible. Thus \([(\gamma,1)]\) is the unit of \(\mathcal{E}_{A}(G)\).
We denote by \(\delta_{1}\) the primary obstruction map \(\delta_{1}:\bar{E}_{A}(BG)\to H^{3}(BG,K_{0}(A))\) (this definition is a little different from \(\delta_{1}\) in [7, Definition 4.6]). Theorem 4.2 says that \(\delta_{1}([\mathcal{P}^{0}_{\hat{\alpha}}])=\kappa^{3}((\alpha,u))\) for \([(\alpha,u)]\in\mathcal{E}_{A}(G)\).
_Remark 4.7_.: Let \(\mu\in Z^{2}(G,\mathbb{T})\), and let \((\mathrm{id},\mu)\) be a cocycle action of \(G\) on \(\mathbb{C}\). As \(\mathbb{C}\) is strongly self-absorbing, we can apply Theorem 4.2 to this cocycle action. Then \(V_{g}\) in Eq.(4.1) is a projective unitary representation with the cocycle \(\mu\), and \(\mathrm{Ad}\,V_{g}\) gives a \(G\)-action on \(\mathbb{K}\). We denote by \(C_{\mu}\) the corresponding locally trivial continuous field of \(\mathbb{K}\). Theorem 4.2 shows that its Dixmier-Douady class is \(\kappa^{3}((\mathrm{id},\mu))=\partial[\mu]\), where \(\partial:H^{2}(G,\mathbb{T})\to H^{3}(G,\mathbb{Z})\) is the connecting map of the cohomology long exact sequence arising from the coefficient short exact sequence
\[0\to\mathbb{Z}\to\mathbb{R}\to\mathbb{T}\to 0. \tag{4.2}\]
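Concretely, \(\partial\) admits the usual cocycle-level description (recorded here for convenience, with the trivial \(G\)-action on the coefficients): choose any set-theoretic lift \(c:G^{2}\to\mathbb{R}\) of \(\mu\), so that \(e^{2\pi ic}=\mu\); then the coboundary of \(c\) takes values in \(\mathbb{Z}\) and

\[\partial[\mu]=\bigl[(g,h,k)\mapsto c(h,k)-c(gh,k)+c(g,hk)-c(g,h)\bigr]\in H^{3}(G,\mathbb{Z}),\]

independently of the chosen lift \(c\).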
**Lemma 4.8**.: _Let \(\mu\in Z^{2}(G,\mathbb{T})\) with \(j_{*}\partial[\mu]=0\) in \(H^{3}(G,K_{0}(A))\), where \(j:\mathbb{Z}\to K_{0}(A)\) is the inclusion map. Then for any \([(\alpha,u)]\in\mathcal{E}_{A}(G)\), we have \([(\alpha,u)]=[(\alpha,\mu u)]\)._
Proof.: If \(A=\mathcal{O}_{2}\), we have nothing to show.
We assume \(A=M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\) with possibly empty \(\mathfrak{P}\). We denote by \(C_{(\alpha,u)}\) the continuous field of \(A^{s}\) over \(BG\) corresponding to \((\alpha,u)\). Then we have
\[C_{(\alpha,\mu u)}\cong C_{(\alpha,u)}\otimes_{C(BG)}C_{\mu}.\]
If \(A=\mathcal{O}_{\infty}\), we have \(\partial[\mu]=0\), and the Dixmier-Douady class of \(C_{\mu}\) is trivial. Thus \(C_{(\alpha,\mu u)}\cong C_{(\alpha,u)}\).
Assume \(\mathfrak{P}\neq\emptyset\). Note that the condition \(j_{*}\partial[\mu]=0\) implies that \([\mu]\) comes from an element in \(H^{2}(G,\widetilde{K}_{0}(A))\). Since \(BG\) is a finite CW-complex, we have
\[H^{2}(BG,\widetilde{K}_{0}(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{ \infty}))=\bigoplus_{p\in\mathfrak{P}}\varinjlim_{m}H^{2}(BG,\mathbb{Z}_{p^{m} }).\]
Thus there exist primes \(p_{1},p_{2},\cdots,p_{l}\in\mathfrak{P}\) and natural numbers \(m_{1},m_{2},\cdots,m_{l}\) satisfying
\[p_{1}^{m_{1}}p_{2}^{m_{2}}\cdots p_{l}^{m_{l}}\partial[\mu]=0.\]
Now [6, Theorem 2.11] shows \(C_{\mu}\otimes M_{\mathfrak{P}^{\infty}}\cong C(BG)\otimes M_{\mathfrak{P}^{\infty}}\); hence \(C_{(\alpha,\mu u)}\cong C_{(\alpha,u)}\) and \([(\alpha,\mu u)]=[(\alpha,u)]\).
### Structure of \(\mathcal{F}_{A}(G)\)
Recall that \(\widetilde{\mathrm{ob}}:\mathcal{F}_{A}(G)\to H^{3}(G,K_{0}^{\#}(A))\) is a semigroup homomorphism, and \(f:\mathcal{E}_{A}(G)\to\mathcal{F}_{A}(G)\) is the forgetful map. We identify \(\mathcal{E}_{A}(G)\) with \(\bar{E}_{A}(BG)\) and we also write \(f:\bar{E}_{A}(BG)\to\mathcal{F}_{A}(G)\). Recall that \(\delta_{1}:\bar{E}_{A}(BG)\to H^{3}(BG,K_{0}(A))\) is the primary obstruction map.
Our main concern in this subsection is the following conjecture.
**Conjecture 4.9**.: _Let \(A\in\mathcal{D}_{pi}\), and let \(G\) be a countable discrete amenable group with a finite CW-complex model of the classifying space \(BG\). Then the following hold:_
1. \(\mathcal{F}_{A}(G)\) _is a group._
2. _The following sequence is exact:_ \[0\to\ker\delta_{1}\xrightarrow{\ f\ }\mathcal{F}_{A}(G)\xrightarrow{\ \widetilde{\mathrm{ob}}\ }H^{3}(G,K_{0}^{\#}(A))\to 0.\]
We first establish the exactness at \(\ker\delta_{1}\) and \(\mathcal{F}_{A}(G)\) in (2) in full generality.
**Lemma 4.10**.: _The restriction of \(f\) to \(\ker\delta_{1}\) is injective, and_
\[\ker\widetilde{\rm ob}=f(\ker\delta_{1}).\]
Proof.: If \(A=\mathcal{O}_{2}\), we have \(\mathcal{E}_{\mathcal{O}_{2}}(G)=\{0\}\) and \(\widetilde{\rm ob}=\rm ob\). Thus there is nothing to show.
Assume \(A=M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\) possibly with empty \(\mathfrak{P}\). We treat \(K_{0}(A)\) as a subring of \(\mathbb{R}\). We denote by \(j\) the inclusion map \(j:\mathbb{Z}\to K_{0}(A)\), and by \(\rho\) the inclusion map \(\rho:K_{0}(A)\to\mathbb{R}\).
Let \([\alpha]\in\ker\widetilde{\rm ob}\). Since \(\rm ob(\alpha)=0\), there exists a lifting \((\widetilde{\alpha},u)\) of \(\alpha\) such that \((\widetilde{\alpha},u)\) is a cocycle action. Thus \(j_{A_{*}}\kappa^{3}((\widetilde{\alpha},u))=\widetilde{\rm ob}(\alpha)=0\). Lemma 3.2 shows that \(q_{K_{0}(A)\to\widetilde{K}_{0}(A)*}\kappa^{3}((\widetilde{\alpha},u))=0\) and \(\rho_{*}\kappa^{3}((\widetilde{\alpha},u))=0\). The former implies that there exists \(x\in H^{3}(G,\mathbb{Z})\) with \(j_{*}x=\kappa^{3}((\widetilde{\alpha},u))\), and the latter shows \((\rho\circ j)_{*}x=0\). This implies that there exists \(\mu\in Z^{2}(G,\mathbb{T})\) satisfying \(x=\partial[\mu]\). Thus \(\kappa^{3}((\widetilde{\alpha},\mu^{-1}u))=0\), and \([\alpha]\in f(\ker\delta_{1})\), which shows \(\ker\widetilde{\rm ob}\subset f(\ker\delta_{1})\). The other inclusion follows from \(j_{A_{*}}\kappa^{3}((\beta,v))=\widetilde{\rm ob}\circ f([(\beta,v)])\).
Now we show that \(f\) restricted to \(\ker\delta_{1}\) is injective. Let \([(\alpha^{(1)},u^{(1)})],[(\alpha^{(2)},u^{(2)})]\in\ker\delta_{1}\), and assume \(f([(\alpha^{(1)},u^{(1)})])=f([(\alpha^{(2)},u^{(2)})])\). Then we may assume \(\alpha^{(1)}=\alpha^{(2)}\) and \(u^{(2)}(g,h)=\mu(g,h)u^{(1)}(g,h)\) with \(\mu\in Z^{2}(G,\mathbb{T})\). This shows
\[\kappa^{3}((\alpha^{(2)},u^{(2)}))=\kappa^{3}((\alpha^{(1)},u^{(1)}))+j_{*} \partial[\mu],\]
and we get \(j_{*}\partial[\mu]=0\). Thus from Lemma 4.8 we obtain \([(\alpha^{(1)},u^{(1)})]=[(\alpha^{(2)},u^{(2)})]\).
A typical example of a group \(G\) satisfying our assumption is a poly-\(\mathbb{Z}\) group, and we recall its definition here. A discrete group \(G\) is said to be _poly-\(\mathbb{Z}\)_ if there exists a subnormal series
\[\{e\}=G_{0}\leq G_{1}\leq G_{2}\leq\cdots\leq G_{n}=G,\]
such that \(G_{i}/G_{i-1}\cong\mathbb{Z}\) for any \(1\leq i\leq n\). The number \(n\) in the above definition is called the Hirsch length of \(G\) and denoted by \(h(G)\). It does not depend on the choice of the subnormal series as above, and coincides with the cohomological dimension of \(G\).
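For example, \(\mathbb{Z}^{n}\) is poly-\(\mathbb{Z}\) with \(h(\mathbb{Z}^{n})=n\), as witnessed by the subnormal series

\[\{0\}\leq\mathbb{Z}\leq\mathbb{Z}^{2}\leq\cdots\leq\mathbb{Z}^{n},\]

and the Klein bottle group \(\mathbb{Z}\rtimes\mathbb{Z}\), with the generator of the acting copy of \(\mathbb{Z}\) acting by \(-1\), is poly-\(\mathbb{Z}\) with \(h=2\). These standard examples are recorded only to illustrate the definition.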
**Proposition 4.11**.: _Let \(A\) be a strongly self-absorbing Kirchberg algebra, and let \(G\) be a countable torsion-free discrete amenable group. Let \(\gamma\) be an outer \(G\)-action on \(\mathcal{O}_{\infty}\). Then_
1. _For any_ \(G\)_-kernel_ \(\alpha:G\to\operatorname{Out}(A)\)_, we have_ \([\alpha\otimes\gamma]=[\alpha]\)_._
2. _Assume moreover that_ \(G\) _is a poly-_\(\mathbb{Z}\) _group. Then for any_ \(G\)_-kernel_ \(\alpha:G\to\operatorname{Out}(A)\)_, we have_ \([\alpha\otimes\operatorname{id}_{A}\otimes\gamma]=[\alpha]\)_._
_Here we slightly abuse the notation and we denote by the same symbol \(\gamma\) the \(G\)-kernel induced by \(\gamma\)._
Proof.: Note that Szabo's result [45, Theorem 2.6] holds for a \(G\)-kernel without essential change of the proof. Therefore our task is to construct an equivariant embedding of relevant \(G\)-C\({}^{*}\)-algebras into the central sequence algebra of \(A\).
(1) Since \(\gamma\) is unique up to very strong cocycle conjugacy thanks to Gabe-Szabo's classification theorem [11, Corollary 6.11], we may assume that \(\gamma\) is a quasi-free \(G\)-action on \(\mathcal{O}_{\infty}\) given by \(\gamma_{g}(S_{h})=S_{gh}\), where \(\{S_{g}\}_{g\in G}\) are the canonical generators of \(\mathcal{O}_{\infty}\). Moreover, it is strongly self-absorbing as an action.
For a free ultrafilter \(\omega\in\beta\mathbb{N}\setminus\mathbb{N}\), we set
\[A^{\omega}=\ell^{\infty}(\mathbb{N},A)/\{(x_{n})\in\ell^{\infty}(\mathbb{N},A );\ \lim_{n\to\omega}\|x_{n}\|=0\},\]
and \(A_{\omega}=A^{\omega}\cap A^{\prime}\), which is purely infinite by [29, Proposition 3.4]. Then \(\alpha\) induces an outer action on \(A_{\omega}\), which we still denote by \(\alpha\) (see the proof of [35, Lemma 2]), and there exists a unital equivariant embedding of \((\mathcal{O}_{\infty},\gamma)\) into \((A_{\omega},\alpha)\) (see the proof of [13, Theorem 5.1], or alternatively [46, Corollary 2.10]). Thus we get \([\alpha\otimes\gamma]=[\alpha]\).
(2) All we have to do is give an equivariant embedding of \((A\otimes\mathcal{O}_{\infty},\operatorname{id}_{A}\otimes\gamma)\) into \((A_{\omega},\alpha)\). As we have already given an embedding of \((\mathcal{O}_{\infty},\gamma)\), it suffices to embed \(A\) into \((A_{\omega})^{G}\), where \((A_{\omega})^{G}\) is the fixed point subalgebra of the \(G\)-action \(\alpha\) on \(A_{\omega}\). We prove it by induction on the Hirsch length of \(G\).
We see that the statement is correct if the Hirsch length is zero, that is \(G=\{e\}\), as there exists a unital embedding of \(A\) into \(A_{\omega}\). Assume that the statement is correct for all poly-\(\mathbb{Z}\) groups whose Hirsch length is less than or equal to \(n-1\), and assume that the Hirsch length of \(G\) is \(n\). We choose a normal subgroup \(N\) of \(G\) and \(\xi\in G\) such that \(G=N\rtimes\langle\xi\rangle\). Note that \(B:=(A_{\omega})^{N}\) is purely infinite (see [18, Corollary]), and the restriction of \(\alpha_{\xi}\) to \(B\), which we denote by \(\beta\) for simplicity, is an aperiodic automorphism with the Rohlin property [18, Theorem 3.6]. By the induction hypothesis, there exists a unital embedding \(\varphi:A\to B\).
The following argument is essentially in [47]. We choose another embedding \(\psi:A\to B\) such that \(\psi(A)\) commutes with \(\cup_{n\in\mathbb{Z}}\beta^{n}(\varphi(A))\). For \(n\in\mathbb{N}\), we define \(f_{n}:\mathbb{Z}\to\mathbb{R}\) by
\[f_{n}(x)=\left\{\begin{array}{ll}1-\frac{2}{n}|x|,&\quad|x|\leq\frac{n}{2}, \\ 0,&\quad|x|>\frac{n}{2},\end{array}\right.\]
and set
\[\Theta_{n}^{(0)}(a)(x)=\sum_{k\in\mathbb{Z}}f_{n}(x+kn)\beta^{x+kn}(\varphi(a)),\]
\[\Theta_{n}^{(1)}(a)(x)=\sum_{k\in\mathbb{Z}}f_{n}(x+\frac{n}{2}+kn)\beta^{x+kn}(\psi(a)).\]
Note that for a given \(x\in\mathbb{Z}\), we have \(f_{n}(x+kn)=0\) except for only one \(k\in\mathbb{Z}\), and the same statement is true for \(f_{n}(x+\frac{n}{2}+kn)\) too. We have
\[\Theta_{n}^{(0)}(1_{A})(x)+\Theta_{n}^{(1)}(1_{A})(x)=1_{B},\]
and the following estimate holds:
\[\|\Theta_{n}^{(j)}(a)(x+1)-\beta(\Theta_{n}^{(j)}(a)(x))\|\leq\frac{2}{n}\|a\|.\]
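To see this (a short verification, recorded here for convenience), note that \(\beta(\beta^{x+kn}(\varphi(a)))=\beta^{x+1+kn}(\varphi(a))\), so for \(j=0\)

\[\Theta_{n}^{(0)}(a)(x+1)-\beta(\Theta_{n}^{(0)}(a)(x))=\sum_{k\in\mathbb{Z}}\bigl(f_{n}(x+1+kn)-f_{n}(x+kn)\bigr)\beta^{x+1+kn}(\varphi(a)).\]

Since \(f_{n}\) is piecewise linear with slope \(\pm\frac{2}{n}\), at most two consecutive \(k\) give nonzero summands and \(\sum_{k\in\mathbb{Z}}|f_{n}(x+1+kn)-f_{n}(x+kn)|\leq\frac{2}{n}\), so the triangle inequality yields the estimate; the case \(j=1\) is identical with the half-period shift.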
By construction \(\Theta_{n}^{(j)}(a)\) has period \(n\), and it gives rise to an order \(0\) completely positive map from \(A\) to \(C(\mathbb{Z}/n\mathbb{Z})\otimes B\).
We choose a Rohlin tower \(\{e_{x}^{(0)}\}_{x=0}^{n-1}\cup\{e_{x}^{(1)}\}_{x=0}^{n}\) commuting with
\[\bigcup_{n\in\mathbb{Z}}\beta^{n}(\varphi(A))\cup\bigcup_{n\in\mathbb{Z}} \beta^{n}(\psi(A)),\]
and set
\[\Phi_{n}^{(j)}(a)=\sum_{x=0}^{n-1}e_{x}^{(0)}\Theta_{n}^{(j)}(a)(x)+\sum_{x=0 }^{n}e_{x}^{(1)}\Theta_{n+1}^{(j)}(a)(x).\]
Then \(\Phi_{n}^{(0)}\) and \(\Phi_{n}^{(1)}\) are order \(0\) completely positive maps from \(A\) to \(B\) with mutually commuting ranges, and they satisfy \(\Phi_{n}^{(0)}(1_{A})+\Phi_{n}^{(1)}(1_{A})=1_{B}\). Moreover, we have
\[\|\Phi_{n}^{(j)}(a)-\beta(\Phi_{n}^{(j)}(a))\|\leq\frac{2}{n}\|a\|.\]
Let
\[\mathcal{E}(A,A)=\{f\in C([0,1],A\otimes A);\ f(0)\in A\otimes\mathbb{C},\ f(1) \in\mathbb{C}\otimes A\}.\]
Then there exists a unital homomorphism \(\Psi_{n}:\mathcal{E}(A,A)\to B\) satisfying \(\Phi_{n}^{(0)}(a)=\Psi_{n}((1-t)(a\otimes 1))\) and \(\Phi_{n}^{(1)}(a)=\Psi_{n}(t(1\otimes a))\) for all \(a\in A\), where \(t\) is the coordinate function of \([0,1]\) (see [47, Proposition 3.2], [15, Lemma 6.6]). Since
\[\lim_{n\to\infty}\|\beta\circ\Psi_{n}(x)-\Psi_{n}(x)\|=0\]
holds for all \(x\) in a generating set, it holds for general \(x\in\mathcal{E}(A,A)\) too. Since \(A\) is strongly self-absorbing, there exists an embedding \(A\to\mathcal{E}(A,A)\). Thus the usual diagonal sequence argument shows that we have an embedding of \(A\) into \(B^{\beta}=(A_{\omega})^{G}\), and the induction argument is finished.
Note that \(\operatorname{id}_{A}\otimes\gamma\) is the unique outer \(G\)-action on \(A\otimes\mathcal{O}_{\infty}\cong A\) up to cocycle conjugacy, and it gives a unit of \(\mathcal{F}_{A}(G)\) if the assumption of Proposition 4.11 is satisfied.
**Theorem 4.12**.: _Let \(G\) be a countably infinite amenable discrete group such that a compact model of the classifying space \(BG\) exists. Then_
1. \(\mathcal{F}_{\mathcal{O}_{\infty}}(G)\) _is a group._
2. _Assume moreover that_ \(G\) _is poly-_\(\mathbb{Z}\)_. Then_ \(\mathcal{F}_{A}(G)\) _is a group for_ \(A=M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\) _and_ \(A=\mathcal{O}_{2}\)_._
Proof.: Let \(A\in\mathcal{D}_{pi}\). For a pair \((A,G)\) satisfying the assumption of (1) or (2), we already know from Proposition 4.11 that \(\mathcal{F}_{A}(G)\) has a unit element. Thus it suffices to show that each element has its inverse.
Let \(\alpha:G\to\operatorname{Out}(A)\) be a \(G\)-kernel, and let \((\widetilde{\alpha},u)\) be a lifting of it. Note that \(A\) is isomorphic to its opposite algebra \(A^{\operatorname{op}}\). Let \(J:A\to A^{\operatorname{op}}\) be the canonical bijection. Then \(J\circ\widetilde{\alpha}_{g}\circ J^{-1}\) induces the opposite \(G\)-kernel \(\alpha^{\operatorname{op}}:G\to\operatorname{Out}(A^{\operatorname{op}})\), and
\[(\widetilde{\alpha}_{g}\otimes J\circ\widetilde{\alpha}_{g}\circ J^{-1},u(g, h)\otimes J(u(g,h))^{*}),\]
is a cocycle action. Thus there exists its inverse \([(\beta,v)]\in\mathcal{E}_{A}(G)\). This means that \([\alpha^{\operatorname{op}}\otimes\beta]\) is the inverse of \([\alpha]\).
**Corollary 4.13**.: _If \(G\) is poly-\(\mathbb{Z}\),_
\[\operatorname{ob}:\mathcal{F}_{\mathcal{O}_{2}}(G)\to H^{3}(G,\mathbb{T})\]
_is an isomorphism._
Our final goal in this section is the following theorem.
**Theorem 4.14**.: _Let \(A=M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\) with possibly empty \(\mathfrak{P}\). Then Conjecture 4.9 holds in the following two cases._
1. \(G=\mathbb{Z}^{n}\)_._
2. \(G\) _is a poly-_\(\mathbb{Z}\) _group with_ \(h(G)\leq 5\)_, where_ \(h(G)\) _is the Hirsch length of_ \(G\)_._
We need some preparation for the proof.
Let \(\partial:H^{2}(G,\mathbb{T})\to H^{3}(G,\mathbb{Z})\) and \(\partial_{A}:H^{2}(G,\mathbb{T})\to H^{3}(G,K_{0}(A))\) be the connecting maps of the cohomology long exact sequences arising from Eq.(4.2) and Eq.(3.1) respectively. Then direct computation using Lemma 3.2 shows \(\partial_{A}=j_{*}\circ\partial\), where \(j:\mathbb{Z}\to K_{0}(A)\) is the inclusion map.
We choose a normal subgroup \(N\) of \(G\) and \(\xi\in G\) such that \(G=N\rtimes\langle\xi\rangle\). We denote \(\xi(n)=\xi n\xi^{-1}\) for \(n\in N\). We fix an outer \(G\)-action \(\gamma\) on \(A\).
**Lemma 4.15**.: _For \(\mu\in Z^{2}(G,\mathbb{T})\) satisfying \(\partial_{A}[\mu]=0\), there exists a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\) with a lifting \((\widetilde{\alpha},u)\) such that \(u(n_{1},n_{2})=1\) and_
\[u(\xi,n_{1})\widetilde{\alpha}_{n_{1}}(u(\xi,n_{2}))=\mu(\xi(n_{1}),\xi(n_{2}) )u(\xi,n_{1}n_{2})\]
_for all \(n_{1},n_{2}\in N\)._
Proof.: We define an \(N\)-action \(\beta\) on \(A\) by \(\beta_{n}=\gamma_{\xi^{-1}(n)}\). Note that \(\beta\) and \(\gamma|_{N}\) are cocycle conjugate. Since \(j_{*}\partial[\mu]=0\), Lemma 4.8 implies \([(\gamma|_{N},1)]=[(\beta,\mu)]\) in \(\mathcal{E}_{A}(N)\), and there exist \(\theta\) and \(v_{n}\in U(A)\) satisfying
\[\theta\circ\beta_{n}\circ\theta^{-1}=\operatorname{Ad}v_{n}\circ\gamma_{n},\]
\[v_{n_{1}}\beta_{n_{1}}(v_{n_{2}})=\mu(n_{1},n_{2})v_{n_{1}n_{2}}.\]
Let \(\widetilde{\alpha}_{n\xi^{l}}=\gamma_{n}\theta^{l}\otimes\gamma_{n\xi^{l}}\) for \(n\in N\) and \(l\in\mathbb{Z}\), which gives rise to a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\). Since \(\widetilde{\alpha}_{n_{1}}\circ\widetilde{\alpha}_{n_{2}}=\widetilde{\alpha}_{n_{1}n_{2}}\) for \(n_{1},n_{2}\in N\), and \(\widetilde{\alpha}_{\xi}\circ\widetilde{\alpha}_{n}=\operatorname{Ad}(v_{\xi(n)}\otimes 1)\circ\widetilde{\alpha}_{\xi(n)\xi}\) for \(n\in N\), we can choose a representative \((\widetilde{\alpha},u)\) of \(\alpha\) with \(u(n_{1},n_{2})=1\) and \(u(\xi,n)=v_{\xi(n)}\otimes 1\). They have the desired property.
We recall an exact sequence arising from the Lyndon-Hochschild-Serre spectral sequence [3, Chapter VII, Section 6]: \[0\to H^{1}(\mathbb{Z},H^{2}(N,M))\xrightarrow{\ \iota_{M}\ }H^{3}(G,M)\xrightarrow{\ \mathrm{res}\ }H^{3}(N,M)^{\mathbb{Z}}\to 0.\]
The map "res" is the restriction map from \(G\) to \(N\), and \(\mathbb{Z}\) is generated by \(\xi\). We recall a detailed description of the injective map \(\iota_{M}:H^{1}(\mathbb{Z},H^{2}(N,M))\to H^{3}(G,M)\) from [19, Section 7]. For simplicity, we assume that \(M\) is a trivial \(G\)-module.
The \(\xi\)-action on \(Z^{2}(N,M)\) is given by \(\xi\cdot\mu(n_{1},n_{2})=\mu(\xi^{-1}\cdot n_{1},\xi^{-1}\cdot n_{2})\), and
\[H^{1}(\mathbb{Z},H^{2}(N,M))=H^{2}(N,M)/(1-\xi_{*})H^{2}(N,M).\]
We denote by \(Q_{M}\) the quotient map from \(H^{2}(N,M)\) onto \(H^{1}(\mathbb{Z},H^{2}(N,M))\). Let \(\rho\in Z^{1}(\mathbb{Z},H^{2}(N,M))\). Then \(\rho_{m}\) is determined by \(\rho_{1}=[\mu]\) and the cocycle relation \(\rho_{m+n}=\rho_{m}+\xi_{*}^{m}\rho_{n}\). Now \(\iota_{M}[\rho]\in H^{3}(G,M)\) is the cohomology class of the following cocycle:
\[\omega(n_{1}\xi^{l_{1}},n_{2}\xi^{l_{2}},n_{3}\xi^{l_{3}})=-\rho_{l_{1}}(\xi^ {l_{1}}(n_{2}),\xi^{l_{1}+l_{2}}(n_{3})).\]
**Lemma 4.16**.: _Let \(\alpha\) be the \(G\)-kernel constructed in the previous lemma. Then_
\[\operatorname{ob}(\alpha)=\iota_{\mathbb{T}}\circ Q_{\mathbb{T}}([\mu]).\]
Proof.: As it would be silly to use the symbol \(\widetilde{\alpha}\) throughout our computation, we slightly abuse the notation and simply write \(\alpha\) instead. Since the restrictions of \(\alpha\) to \(N\) and to \(\langle\xi\rangle\) are actions, we may choose \(u\) satisfying
\[u(n_{1}\xi^{l_{1}},n_{2}\xi^{l_{2}})=\alpha_{n_{1}}(u(\xi^{l_{1}},n_{2})). \tag{4.3}\]
Let \(w_{n}=v_{n}\otimes 1\), which satisfies
\[\alpha_{\xi}\circ\alpha_{\xi^{-1}(n)}\circ\alpha_{\xi}^{-1}=\operatorname{Ad}w_{ n}\circ\alpha_{n},\]
\[w_{n_{1}}\alpha_{n_{1}}(w_{n_{2}})=\mu(n_{1},n_{2})w_{n_{1}n_{2}}.\]
We can put
\[u(\xi^{l},n)=\left\{\begin{array}{ll}\alpha_{\xi}^{l-1}(w_{\xi(n)})\alpha_{ \xi}^{l-2}(w_{\xi^{2}(n)})\cdots w_{\xi^{l}(n)},&l>0\\ \alpha_{\xi}^{l}(w_{n}^{*})\alpha_{\xi}^{l+1}(w_{\xi^{-1}(n)}^{*})\cdots\alpha_ {\xi}^{-1}(w_{\xi^{l+1}(n)}^{*}),&l<0\end{array}\right.\]
Let \(\rho\in Z^{1}(\mathbb{Z},Z^{2}(N,\mathbb{T}))\) be a cocycle determined by \(\rho_{1}=\mu\). Then by induction of \(l\), we can show
\[u(\xi^{l},n_{1})\alpha_{\xi^{l}(n_{1})}(u(\xi^{l},n_{2}))=\rho_{l}(\xi^{l}(n_ {1}),\xi^{l}(n_{2}))u(\xi^{l},n_{1}n_{2}).\]
Let \(\omega\in Z^{3}(G,\mathbb{T})\) be as in Eq.(2.1). Note that the condition Eq.(4.3) implies
\[\omega(n_{1}\xi^{l_{1}},n_{2}\xi^{l_{2}},n_{3}\xi^{l_{3}})=\omega(\xi^{l_{1}}, n_{2}\xi^{l_{2}},n_{3}),\]
and the 3-cocycle relation implies
\[\omega(\xi^{l_{1}},n_{2}\xi^{l_{2}},n_{3})\] \[=\omega(n_{2},\xi^{l_{2}},n_{3})^{-1}\omega(\xi^{l_{1}}n_{2},\xi^ {l_{2}},n_{3})\omega(\xi^{l_{1}},n_{2},\xi^{l_{2}}n_{3})\omega(\xi^{l_{1}},n_{ 2},\xi^{l_{2}})^{-1}\] \[=\omega(\xi^{l_{1}},\xi^{l_{2}},n_{3})\omega(\xi^{l_{1}},n_{2}, \xi^{l_{2}}(n_{3})).\]
The first term on the right-hand side is
\[\omega(\xi^{l_{1}},\xi^{l_{2}},n_{3})\] \[=\alpha_{\xi}^{l_{1}}(u(\xi^{l_{2}},n_{3}))u(\xi^{l_{1}},\xi^{l_{ 2}}(n_{3})\xi^{l_{2}})u(\xi^{l_{1}+l_{2}},n_{3})^{*}u(\xi^{l_{1}},\xi^{l_{2}}) ^{*}\] \[=\alpha_{\xi}^{l_{1}}(u(\xi^{l_{2}},n_{3}))u(\xi^{l_{1}},\xi^{l_{ 2}}(n_{3}))u(\xi^{l_{1}+l_{2}},n_{3})^{*}\] \[=1.\]
The second term is
\[\omega(\xi^{l_{1}},n_{2},\xi^{l_{2}}(n_{3}))\] \[=\alpha_{\xi}^{l_{1}}(u(n_{2},\xi^{l_{2}}(n_{3})))u(\xi^{l_{1}}, n_{2}\xi^{l_{2}}(n_{3}))u(\xi^{l_{1}}n_{2},\xi^{l_{2}}(n_{3}))^{*}u(\xi^{l_{1}},n_{ 2})^{*}\] \[=u(\xi^{l_{1}},n_{2}\xi^{l_{2}}(n_{3}))\alpha_{\xi^{l_{1}}(n_{2}) }(u(\xi^{l_{1}},\xi^{l_{2}}(n_{3}))^{*})u(\xi^{l_{1}},n_{2})^{*}\] \[=\rho_{l_{1}}(\xi^{l_{1}}(n_{2}),\xi^{l_{1}+l_{2}}(n_{3}))^{-1},\]
which shows the statement.
Proof of Theorem 4.14.: Note that the only issue is the surjectivity of \(\widetilde{\operatorname{ob}}\). We keep using the decomposition \(G=N\rtimes\langle\xi\rangle\) as before. Then we have the following commutative diagram with exact rows and the middle column:
where \(M=K_{0}^{\#}(A)\).
We first show \(\operatorname{Im}\iota_{K_{0}^{\#}(A)}\subset\operatorname{Im}\widetilde{ \operatorname{ob}}\). Let \([\mu]\in H^{2}(N,K_{0}^{\#}(A))\). Then since \(\partial_{A}\mathrm{ev}_{1*}[\mu]=0\), the previous lemma shows that there exists a \(G\)-kernel \(\alpha\) satisfying
\[\operatorname{ob}(\alpha)=\iota_{\mathbb{T}}\circ\mathrm{ev}_{1*}(Q([\mu]))= \mathrm{ev}_{1*}\circ\iota_{K_{0}^{\#}(A)}(Q([\mu])),\]
which shows \(\iota_{K_{0}^{\#}(A)}(Q([\mu]))-\widetilde{\operatorname{ob}}(\alpha)\in \ker\mathrm{ev}_{1*}\). Thus there exists \(x\in H^{3}(G,K_{0}(A))\) satisfying \(\iota_{K_{0}^{\#}(A)}(Q([\mu]))=\widetilde{\operatorname{ob}}(\alpha)+j_{A*}x\). Choosing \([(\beta,v)]\in\mathcal{E}_{A}(G)\) with \(\kappa^{3}((\beta,v))=x\), we get \(\iota_{K_{0}^{\#}(A)}(Q([\mu]))=\widetilde{\operatorname{ob}}(\alpha\otimes\beta)\).
We try to prove the statement by induction on the Hirsch length \(h(G)\). It works in the case of \(G=\mathbb{Z}^{n}\), while it stops at \(h(G)=5\) in the general case. If \(h(G)\leq 2\), we have \(H^{3}(G,K_{0}^{\#}(A))=0\), and certainly the statement holds.
We first treat the case \(G=\mathbb{Z}^{n}\). Assume that \(\widetilde{\operatorname{ob}}\) is a surjection for \(N=\mathbb{Z}^{n-1}\). Let \(x\in H^{3}(N,K_{0}^{\#}(A))=H^{3}(N,K_{0}^{\#}(A))^{\mathbb{Z}}\). Then by the induction hypothesis, there exists an \(N\)-kernel \(\beta:N\to\operatorname{Out}(A)\) satisfying \(\widetilde{\operatorname{ob}}(\beta)=x\). Let \(\alpha_{n\xi^{l}}=\beta_{n}\otimes\gamma_{n\xi^{l}}\) for \(n\in N\) and \(l\in\mathbb{Z}\). Then \(\mathrm{res}_{*}\widetilde{\operatorname{ob}}(\alpha)=x\). Since we already know \(\operatorname{Im}\iota_{K_{0}^{\#}(A)}\subset\operatorname{Im}\widetilde{\operatorname{ob}}\), we conclude that \(\widetilde{\operatorname{ob}}\) is a surjection.
Now we treat the general case. Assume that \(\widetilde{\operatorname{ob}}\) is a surjection for \(N\) and assume \(h(N)<5\). Let \(x\in H^{3}(N,K_{0}^{\#}(A))^{\mathbb{Z}}\). Then there exists an \(N\)-kernel \(\beta:N\to\operatorname{Out}(A)\) with a lifting \((\widetilde{\beta},v)\) satisfying \(\widetilde{\operatorname{ob}}(\beta)=x\). Note that we also have \(\widetilde{\operatorname{ob}}(\beta_{\xi(\cdot)})=x\). Since \(\dim BN\leq 4\), the Atiyah-Hirzebruch spectral sequence [7, page 175] implies that \(\delta_{1}:\bar{E}_{A}(BN)\to H^{3}(N,K_{0}(A))\) is an isomorphism, and \(\ker\delta_{1}=\{0\}\). Thus Lemma 4.10 implies \([\beta]=[\beta_{\xi(\cdot)}]\), and there exists \(\theta\in\operatorname{Aut}(A)\) satisfying \([\theta\circ\widetilde{\beta}_{n}\circ\theta^{-1}]=\beta_{\xi(n)}\). Letting \(\alpha_{n\xi^{l}}=\beta_{n}\theta^{l}\otimes\gamma_{n\xi^{l}}\), we get a \(G\)-kernel \(\alpha:G\to\operatorname{Out}(A)\) with \(\mathrm{res}_{*}\widetilde{\operatorname{ob}}(\alpha)=x\). Thus \(\widetilde{\operatorname{ob}}\) is a surjection.
**Example 4.17**.: The simplest non-trivial case is \(G=\mathbb{Z}^{3}\). In this case, we have \(H^{3}(\mathbb{Z}^{3},M)=M\), and \(\bar{E}_{\mathcal{O}_{\infty}}(\mathbb{T}^{3})\cong H^{3}(\mathbb{T}^{3},\mathbb{Z})=\mathbb{Z}\). Hence we have a commutative diagram which shows that the space \(H^{3}(\mathbb{Z}^{3},K_{0}^{\#}(\mathcal{O}_{\infty}))\) of our new invariant naturally interpolates the set of Dixmier-Douady classes \(H^{3}(\mathbb{T}^{3},\mathbb{Z})\).
## 5 Comments
1. To establish Conjecture 4.9,(2) for the general poly-\(\mathbb{Z}\) case, it is probably better to work on \(\mathcal{O}_{\infty}\) and \(M_{\mathfrak{P}^{\infty}}\) separately instead of \(M_{\mathfrak{P}^{\infty}}\otimes\mathcal{O}_{\infty}\) in view of Corollary 3.11. In order to be able to push the induction argument in the \(\mathcal{O}_{\infty}\) case further, we need, as an induction hypothesis, a natural splitting of the exact sequence in Conjecture 4.9,(2). Such a statement is out of reach with our brute force method, and probably requires a homotopy theoretical interpretation of the group \(\mathcal{F}_{\mathcal{O}_{\infty}}(G)\). Note that a similar exact sequence for \(A=\mathbb{C}\) splits because \(H^{3}(X,\mathbb{Z})\) is identified with \(\bar{E}^{1}_{\mathbb{C}}(X)\).
2. To show Conjecture 4.9,(1) in full generality, we might need a \(G\)-kernel version of Gabe-Szabo's classification theorem. A recent work of Arano-Kitamura-Kubota [1] may be the first step toward it.
3. For a non-amenable exact group \(G\) satisfying the Haagerup property, we still have a chance to get reasonable theories \(\mathcal{E}_{A}(G)\) and \(\mathcal{F}_{A}(G)\) by considering only amenable actions. As there exists a characterization of amenability of group actions in terms of central sequences [37], it is possible to define it for \(G\)-kernels too.
4. To define \(\mathcal{E}_{A}(G)\) and \(\mathcal{F}_{A}(G)\) in the stably finite case, we need to assume strong outerness for cocycle actions and \(G\)-kernels. Even with this modification, we cannot expect any similar results in the stably finite case, as we have already seen in Remark 3.13. In fact, Matui-Sato [32] showed \(\mathcal{E}_{A}(\mathbb{Z}^{2})\cong H^{2}(\mathbb{Z}^{2},\mathbb{R}/K_{0}(A))\) for \(A=M_{\mathfrak{P}^{\infty}}\) and \(A=\mathcal{Z}\).
|
2309.05030 | Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including
Excluded Knowledges | Prior work has explicated the coloniality of artificial intelligence (AI)
development and deployment through mechanisms such as extractivism, automation,
sociological essentialism, surveillance, and containment. However, that work
has not engaged much with alignment: teaching behaviors to a large language
model (LLM) in line with desired values, and has not considered a mechanism
that arises within that process: moral absolutism -- a part of the coloniality
of knowledge. Colonialism has a history of altering the beliefs and values of
colonized peoples; in this paper, I argue that this history is recapitulated in
current LLM alignment practices and technologies. Furthermore, I suggest that
AI alignment be decolonialized using three forms of openness: openness of
models, openness to society, and openness to excluded knowledges. This
suggested approach to decolonial AI alignment uses ideas from the argumentative
moral philosophical tradition of Hinduism, which has been described as an
open-source religion. One concept used is viśeṣa-dharma, or particular
context-specific notions of right and wrong. At the end of the paper, I provide
a suggested reference architecture to work toward the proposed framework. | Kush R. Varshney | 2023-09-10T14:04:21Z | http://arxiv.org/abs/2309.05030v3 | # Decolonial AI Alignment:
###### Abstract
Prior work has explicated the coloniality of artificial intelligence (AI) development and deployment. One process that that work has not engaged with much is alignment: the tuning of large language model (LLM) behavior to be in line with desired values based on fine-grained human feedback. In addition to other practices, colonialism has a history of altering the beliefs and values of colonized peoples; this history is recapitulated in current LLM alignment practices. We suggest that AI alignment be decolonialized using three proposals: (a) changing the base moral philosophy from Western philosophy to dharma, (b) permitting traditions of argument and pluralism in alignment technologies, and (c) expanding the epistemology of values beyond instructions or commandments given in natural language.
## 1 Introduction
Artificial intelligence (AI) is value-laden; the term itself reflects the legacy of dominance hierarchies such as man over nature, patriarchy, colonialism, and racism [9]. Now that we have entered the age of powerful large language models (LLMs), historical dominance is getting even more entrenched. For example, empirical analysis shows that LLMs have sociopolitical biases in favor of dominant groups [17; 14]. In addition, morality captured by multi-lingual language models does not reflect cultural differences, but rather is dominated by high-resource languages and cultures [23].
When researchers and activists were first sounding the alarm that LLMs would harm marginalized communities by encoding and reinforcing hegemonic viewpoints, the charge of hegemony rested on unfathomably large training datasets scraped from the bottom of the barrel of the internet over-representing white supremacist, misogynist, and ageist content [5]. However, it has now become apparent that the behavior of performant LLMs depends as much on the _instruction_ they are given through human feedback--a process given the name _alignment_--as on the training data [48; 3; 58; 8].
The workers laboring to give this human feedback for alignment, often located in poor countries, may be traumatized and scarred [49; 30]. Although there are exceptional examples of workers and communities being uplifted [40; 50], the process usually recapitulates exploitation colonialism: a small number of powerful companies using the workers to increase their own power and wealth while little benefit and an abundance of negative externalities are left in the workers' communities [21]. Moreover, companies force the workers to project the company's monocultural values into the feedback they provide through draconian measures [41; 15]; such imposition alienates the labor [39] and erases any values that the workers and their communities may hold, including ones that conflict with the company's. Gabriel's account of AI alignment states [20]: "Designing AI in accordance with a single moral doctrine would, therefore, involve imposing a set of values and judgments on
other people who did not agree with them. For powerful technologies, this quest to encode the true morality could ultimately lead to forms of domination." What is such domination if not colonialism?
Investigation into the intersection of AI and colonialism is not new. However, the seminal work by Mohamed, Png and Isaac [42], a series of articles in MIT Technology Review by Hao et al. [24], the AI Decolonial Manyfesto [34], and other prior work is not focused on the alignment process using fine-grained human feedback that has arisen in the last couple of years. In this paper, we focus on such alignment practices and discuss issues that go beyond the ones that generally apply to data and AI. Specifically, we posit that coloniality is affecting the values--the definitions of good and bad, or right and wrong--that LLMs are aligned to follow in their behavior. We suggest a decolonial path forward rooted in the dharmic traditions of India for attractive reasons discussed in the sequel.1
Footnote 1: By India, we mean the pre-colonial conception of the Indian subcontinent or South Asia rather than the present-day country. Moreover for simplicity, in this paper, we focus on the dharmic traditions of India while recognizing several others with long histories there, including Muslim, dalit, tribal and North-East ones [45]. Also, the focus on India is not meant to exclude other places or peoples that may inspire the decolonialization of AI alignment. Lastly, when we refer to the West, it is a reference to the dominant traditions there rather than dominated ones.
## 2 Colonial AI Alignment
Just as other aspects of the enterprise of AI have colonial traces behind them, so does value alignment. Technology companies based in the West that offer LLMs are the equivalent of _metropoles_: the entities exercising power over a colonial empire [42]. Values are the realm of moral philosophy, the branch of philosophy that studies right and wrong [16].
Historically, colonialism altered the beliefs and values of colonized peoples. For example, Igboin writes [28]: "Colonial rule disrupted the traditional machinery of moral homogeneity and practice. The method of moral inculcation was vitiated, which resulted in the abandonment of traditional norms and values through a systematic depersonali[z]ation of the African and pagani[z]ation of its values. Instead of the cherished communalism which defined the life of the African, for example, a burgeoning societal construct was introduced which alienates and destroys the organic fabric of the spirit of we-feeling." On Ranganathan's account, during and after the Western colonization of India, "Hindus adopted a West-centric frame for understanding their tradition as religious because of colonization" [52]. This phenomenon was not merely a side-effect, but a goal of the program of colonialism. "For Western colonialism to succeed, philosophy and explication--South Asian moral philosophy--has to be erased, as it constitutes a critical arena for the West's claim to authority" [52]. The colonizers positioned their Western philosophical tradition as rational and secular, and the default, erasing the Hindu traditions as the irrational, unjustified 'other.'
In this light, let us examine coloniality in AI alignment. The values promoted by metropole tech companies such as 'helpfulness,' 'harmlessness,' and 'honesty' seem rational, secular, and unassailable at face value. For example, Anthropic's LLM has been instructed to "Please choose the assistant response that's more ethical and moral. Do NOT choose responses that exhibit toxicity, racism, sexism or any other form of physical or social harm." [3]. How could one oppose such universal behaviors from LLMs? The problem is that such values are so generic and high-level that they can hide many undesirable behaviors. Helpful to whom? Harmless to whom? Honest in what way? They can shield bad behaviors behind the veneer of good intentions. The behavior of LLMs and their effect on people can only be evaluated in terms of thick values in the practical contexts in which they operate [46]. However, such values are not present in the constitutions of the metropole companies' aligned AI systems.
A less obvious domination is Western philosophy being the starting point for the companies' AI ethics and alignment. This basis may be deontology, consequentialism, or virtue ethics, which pursue specifying _universally_ 'right' actions, outcomes, or ideals, respectively. By doing so, the companies push other philosophies to the margins [6]. Moreover, there is an unstated supposition that non-universal moral theories are not appropriate paths for AI alignment. They may be derogated and discounted as mere relativism. Mohamed et al. remind us that [42]: "It is metropoles...who are empowered to impose normative values and standards, and may do so at the 'risk of forestalling alternative visions'." Furthermore, when softening the companies' ideals of AI alignment, Gabriel states that [20]: "even without agreement about the fundamental nature of morality, people may still
come to a principled agreement about values and standards that are appropriate for a given subject matter or domain"; the aim for universality remains, albeit within the box of a domain. Conceiving of any other way is unthinkable once in the frame of Western philosophy. There is no possibility for moral variety [18]. The centering of Western philosophy via colonialism is recapitulated in AI alignment.
Furthermore, _logos_, the basis of logic in Western philosophy, conflates thought with language, and thought with belief--what Ranganathan calls the linguistic model or linguistic account of thought [52]. In contrast, the Indian tradition treats a proposition and a belief in that proposition as separate things that can be differentiated. Moreover, by not adhering to the linguistic account of thought, Hindus were able to depict their moral values through painting, dance, rituals, and even silence [19]. In fact, in the Hindu tradition, poetry (sloka) was invented by Valmiki to express rage and grief at the immorality of the killing of a mating bird [10]. Various pre-colonial societies around the world used masks, sculptures, rhythms, body parts, and many other expressions to capture and communicate moral philosophy [29, 32, 12]. A decolonialized approach to alignment that does not start from logos would allow for a richer epistemology to align the behavior of LLMs and more inclusive ways of doing so.
## 3 Pluralism
Let us say something deeper about the goal of universally-applicable moral philosophies, which is present in the West,2 but also in some traditions that were colonized. The story is different in the pre-colonial tradition in India where _dharma_ (what is right or good vs. what is wrong or bad) was richly debated.
Footnote 2: The general trend in Western ethics is toward universality, but there are theories, e.g. the ones of Bernard Williams, that do not aim for universality.
There were deontological philosophies (e.g. mimansa), consequentialist philosophies (e.g. nyaya), virtue ethics philosophies (e.g. vaiśeṣika), and several other moral philosophies without equivalent in Western philosophy (e.g. yoga) that vigorously argued for different ways of conceptualizing dharma [37, 52]. Importantly, however, _argument_ of moral philosophy and an individual person holding contradictory views were natural in pre-colonial India [53]. As Emerson, a philosopher in the transcendentalist movement, which was directly influenced by dharmic thought, famously said: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." The ancient Indian philosophers did not have little minds!
Importantly, there was a dichotomy of dharma into _sadharanadharma_ (common universally good actions and outcomes) and _visesadharma_ (particular good actions and outcomes based on the context). Sadharanadharma includes common beliefs such as not harming other living beings (ahimsa) and telling the truth (satya). Visesadharma specializes these in context, so that it is okay for a soldier to believe in ahimsa but to also kill enemy soldiers on the battlefield; it is okay for a doctor to believe in satya but to also lie to a patient to prevent them from shock. There may also be completely unique good behaviors that have nothing to do with sadharanadharma. Visesadharma is the specific dharma, duty, or conception of right and wrong based on station, reputation, skill, family, and other aspects of context. On Carpenter's account [7], visesadharma "is rather more rich and interesting than our classifications of 'deontological' and 'consequentialist' (even broad consequentialist) allow." The common harms that should be avoided according to sadharanadharma are captured in several recent harm taxonomies for LLMs, but context-specific harms are not included [54, 57, 1].
Such pluralism is salient to AI alignment because it raises the possibility for there to be different LLMs for different peoples and organizations especially or particularly aligned according to the regulatory environment of their sector, the social norms of their users, and all other aspects of their context. As Mohamed et al. put it [42], decolonial AI "criticize[s] universalism in thinking, and instead advocate[s] for localization and pluriversalism." Moreover, the tradition of argumentative India implies that even a single LLM could entertain conflicting values. Furthermore, it implies that AI systems can have uncertainty in the values themselves [44].
## 4 Decolonial AI Alignment
In Mohamed et al.'s account [42], "decolonization seeks to reject an imitation of the West in all aspects of life, calling for the assertion of unique identities and a re-centering of knowledge on approaches that restore global histories and problems and solutions." Based on the discussions above, here we propose that a decolonial solution can reimagine and reconstitute three aspects of aligning LLMs:
1. the base moral philosophy,
2. the tradition of argumentation, and
3. the valid sources of knowledge to use in the argumentation.
Due to the salient characteristics of pluralism in the tradition of pre-colonial India, we propose (a) that the base morality be dharma including visesadharma, (b) that conflicting conceptions of what is right or good be allowed to coexist and be argued upon, and (c) that paintings, poetry, storytelling and other sources of moral values enrich the possible inputs for AI alignment.
Toward visesadharma, an alternative non-monoculture future of LLM alignment imagined by Kirk et al. is as follows [31]: "Given the diversity of human values and preferences, and the importance of pragmatics for contextual understanding in human-human interactions, the aim to fully align models across human populations may be a futile one....A logical next step would be to do away with the assumptions of common preferences and values, and instead target...micro-alignment whereby LLMs learn and adapt to the preferences and values of specific end-users." Interestingly, Kirk et al. do not start their discussion with consequentialism, deontology, or any other branch of Western philosophy. We are already headed toward that imagined future, as various LLM alignment technologies with reasonable computational requirements are emerging that permit organizations and peoples of all stripes to bring their own values to open-source models [25; 55; 26]. The metropole companies' models, however, remain closed and cannot be tuned in a visesadharmic fashion; they remain fixed with the commandments of their creators.
Toward argument, or more specifically the ability for different stakeholders and different regulations to influence the behavior of LLMs, even if their values conflict, one recent piece of work applies social choice theory [35] and LLMs themselves to help people with diverse preferences come to a consensus on LLM behavior [4]. Another piece of work, although not yet applied to LLMs, takes a different view and instead of trying to obtain a consensus, orchestrates an AI system to follow one among several conflicting policies depending on the context [47]. Toward such an end, other work in AI research aims to represent societal context ontologically [38]. Moreover, it has been shown that the mixture-of-experts paradigm, which involves several models activated in different parts of some contextual space through a gating function, can successfully combine several LLMs with different behaviors into one bigger LLM [27]. Some combination of the three ideas: social choice, orchestration, and mixture-of-experts will allow us to work toward an argumentative alignment that is okay with some amount of inconsistency and continual debate.
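As a purely illustrative sketch of such context-dependent orchestration (none of this code comes from the cited works; the model names, context keys, and routing rules below are all hypothetical), a gating function in the mixture-of-experts spirit might route each query to one of several differently aligned policies:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class PolicyModel:
    """Stand-in for an LLM aligned to one community's particular values."""
    name: str
    generate: Callable[[str], str]

def gate(context: Dict[str, str]) -> str:
    """Toy gating function: routes on explicit context features.

    A real system might learn this routing (mixture-of-experts) or derive
    it from a social-choice aggregation over stakeholder preferences.
    """
    if context.get("domain") == "medicine":
        return "clinical-ethics"
    if context.get("jurisdiction") == "EU":
        return "eu-regulation"
    return "community-default"

def orchestrate(prompt: str, context: Dict[str, str],
                policies: Dict[str, PolicyModel]) -> str:
    # Conflicting value policies coexist; the context decides which one answers.
    return policies[gate(context)].generate(prompt)

policies = {
    name: PolicyModel(name, lambda p, n=name: f"[{n}] {p}")
    for name in ("clinical-ethics", "eu-regulation", "community-default")
}
print(orchestrate("May I share this patient's record?", {"domain": "medicine"}, policies))
```

The point of the sketch is only that nothing in the architecture forces a single consensus policy: different, even mutually inconsistent, conceptions of right action can be held side by side and selected contextually.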
Finally toward broadening the epistemology of expressing values, we note that existing approaches to LLM alignment are done through language itself, and that too, the language of explicit instructions. However, there are many other ways of expressing (moral) knowledge that are not commandments, such as fables, mythology, poetry, art, and dance. Although culturally traditional, many of these sources would be considered non-traditional in the current framing of AI alignment. Divakaran, Sridhar, and Srinivasan propose to broaden AI ethics through traditional Indian music, sculpture, painting, floor drawing, and dance [13]. Al Nahian et al. suggest that AI systems be aligned through the medium of storytelling [2]. However, there are still many open questions on how to represent and infer values from sources of different modalities of art that are shrouded in metaphor. Progress along these lines will allow traditional knowledge in its natural format to decolonialize the behavior of LLMs.
## 5 Further Discussion
We have examined the process of alignment of LLMs and discussed how it is currently recapitulating colonialism's dismantling of moral philosophies and values around the world. We make three further comments before concluding.
First, let us discuss reform and revival movements that emerged in India during and after the colonial period. Arya Samaj, Gaudiya Vaishnavism including the International Society for Krishna Consciousness (Hare Krishna movement), Satyagraha (Mahatma Gandhi's movement which influenced civil rights movements around the world), and even Hindutva (Hindu nationalism) which is non-pluralistic and currently seeing a resurgence, were justified against the backdrop of the philosophy of the Western colonizers and reduced Hinduism to a singular religious faith rather than a rich argumentative milieu [45]. The movements positioned themselves as criticisms _within_ the frame of Western philosophy. They are instructive for AI alignment because the revival of a tradition within a pigeonhole opened by the colonizers does not enable a truly different approach. Thus, just adding a few extra globalized commandments to a metropole tech company's existing alignment paradigm does not fundamentally lead to decoloniality.
Second, let us dive into a currently raging debate: whether AI research should focus efforts on so-called 'AI ethics' or on so-called 'AI safety.' Although they are terms that have the same essence [56], 'AI ethics' has come to mean detecting and preventing clear and present harms, especially ones that hurt marginalized communities, and 'AI safety' has come to mean preventing the long-term future harm of human extinction. In a consequentialist framing, the difference may only be the presence or absence of a factor discounting future lives, which may not be such a glaring difference from a privileged perspective. However, when viewed through the logics of resistance [33], it is a deep chasm that recapitulates the difference between atomism and holism [22]. Greene, Dhurandhar, and Shmueli suggest bridging atomist-holist chasms in AI through training and education [22], but these remedies do not seem to be enough. The proposed decolonial approach to LLM alignment that brings forth visesadharma, argumentation, and art is the way that will enable a variety of AI systems that listen to vulnerable communities and do not harm them now, systems that do not lead humanity down the path of extinction (as remote a possibility as that seems), and systems that are able to juggle both positions and others by applying different policies in different contexts.
Third, let us consider evaluating and auditing aligned LLMs. Testing LLMs is difficult enough when only considering common, sadharana socio-technical harms such as hallucination, inciting violence, stereotyping, hate speech and toxicity [51, 43, 36, 11]. It becomes even more difficult when considering context-specific, visesa harms which will not have existing benchmarks given their unique nature. The dharmic framework of karma, which confers on an individual a positive feedback (punya) for following their dharma and a negative feedback (papa) for not doing so, is not helpful either because the mechanics of such an evaluation is not typically explicated. Thus, auditing LLMs for visesadharma will require innovation that may be developed hand-in-hand with eliciting and representing values.
Decolonial AI alignment--as we have described it--is tenable from a technical perspective. A path to extending knowledge elicitation, extraction, and representation technologies for consuming traditional stories and art is visible. Low-cost technology solutions for performing the actual LLM alignment have started to emerge. Ways to deal with multiple conflicting objectives are also known. What remains, however, is the biggest challenge of all, and it is not technological. We must change the perspective on alignment in the industry and actively overturn the power of the tech metropoles.
## Acknowledgment
The author thanks Lauren Alvarez, Jason D'Cruz, Amit Dhurandhar, Bran Knowles, Saska Mojsilovic, Shubham Singh, Mudhakar Srivatsa, Lav Varshney, and Pramod Varshney for providing substantive comments on earlier drafts of this piece.
|
2309.15223 | Low-rank Adaptation of Large Language Model Rescoring for
Parameter-Efficient Speech Recognition | We propose a neural language modeling system based on low-rank adaptation
(LoRA) for speech recognition output rescoring. Although pretrained language
models (LMs) like BERT have shown superior performance in second-pass
rescoring, the high computational cost of scaling up the pretraining stage and
adapting the pretrained models to specific domains limit their practical use in
rescoring. Here we present a method based on low-rank decomposition to train a
rescoring BERT model and adapt it to new domains using only a fraction (0.08%)
of the pretrained parameters. These inserted matrices are optimized through a
discriminative training objective along with a correlation-based regularization
loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is
evaluated on LibriSpeech and internal datasets with decreased training times by
factors between 5.4 and 3.6. | Yu Yu, Chao-Han Huck Yang, Jari Kolehmainen, Prashanth G. Shivakumar, Yile Gu, Sungho Ryu, Roger Ren, Qi Luo, Aditya Gourav, I-Fan Chen, Yi-Chieh Liu, Tuan Dinh, Ankur Gandhe, Denis Filimonov, Shalini Ghosh, Andreas Stolcke, Ariya Rastow, Ivan Bulyko | 2023-09-26T19:41:34Z | http://arxiv.org/abs/2309.15223v2 | # Low-Rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition
###### Abstract
We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation RescoreBERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.
Yu Yu\({}^{*}\), Chao-Han Huck Yang, Jari Kolehmainen, Prashanth G. Shivakumar, Yile Gu, Sungho Ryu, Roger Ren, Qi Luo, Aditya Gourav, I-Fan Chen, Yi-Chieh Liu, Tuan Dinh, Ankur Gandhe, Denis Filimonov, Shalini Ghosh, Andreas Stolcke, Ariya Rastow, Ivan Bulyko

Amazon, USA; \({}^{*}\)Stevens Institute of Technology, USA

**Index Terms**: Low-rank adaptation, neural language model rescoring, parameter-efficient speech recognition
## 1 Introduction
Second-pass rescoring is a widely explored technique to improve the performance of automatic speech recognition (ASR) systems [1, 2, 3, 4, 5]. Language models in different architectures, such as long short-term memory (LSTM) [6] and transformer [7], have proven effective as N-best rescorers [8] to boost the performance of first-pass decoding. Notably, transformers stand out among other language model architectures due to their exceptional ability to model long-range dependencies and context within the input. Additionally, large language models (LLMs) such as GPT-2 [9] and BERT [10], which are based on transformers, have the advantage of incorporating both linguistic and world knowledge. As a result, LLMs have been used in extensive applications across many natural language processing tasks.
LLMs are conventionally pretrained on massive unlabelled data sets and fine-tuned on some smaller labelled datasets for adaptation to downstream tasks. However, as the size of the pretrained models increases, the cost associated with fine-tuning and deploying these models for real-world applications also escalates. To address this practical challenge, a range of parameter-efficient methods (e.g., adapters, model reprogramming, and prompts) have been proposed [11, 12, 13, 14, 15, 16, 17, 18] to alleviate the computation and memory demands of fine-tuning LLMs. Low-rank adaptation (**LoRA**) [19] freezes all pretrained parameters in the LLM and inserts a trainable pair of matrices (acting as a low-rank decomposition of a full matrix) additively into each layer of the Transformer architecture. Compared to other parameter-efficient training methods, such as adapters [12], LoRA has two distinct advantages: 1) it employs a simple architecture and has the potential to reduce the number of trainable parameters compared to alternatives; 2) LoRA does not introduce any additional inference latency, making it an excellent choice for deployment in production environments.
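To make this mechanism concrete, the following is a minimal PyTorch-style sketch of a LoRA-augmented linear layer, written as a generic illustration of the method of [19] rather than the exact configuration used in this work (the rank `r` and scaling `alpha` below are illustrative defaults):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer W plus a trainable low-rank update BA."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze all pretrained weights
            p.requires_grad = False
        # Low-rank pair: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha / r) * B A x ; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because `B` is initialized to zero, training starts exactly from the pretrained model, and after training the product `BA` can be merged into the frozen weight matrix, which is why LoRA incurs no additional inference latency.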
In this work, we explore low-rank adaptation for language model rescoring to achieve a favorable trade-off between computational efficiency and speech recognition performance. Specifically, we follow the discriminative training objective proposed in [20] to directly optimize the minimum word error rate, as described in Section 3.1. During training, we freeze all layers in BERT and only update low-rank matrices inserted at each transformer layer, as discussed in Section 3.2. As a result, the memory required to store the trainable parameters and the backward-pass computation are both reduced. Meanwhile, it is worth noting that we have observed that LoRA can lead to a degraded representation, similar to full fine-tuning [21], which can consequently affect performance on unseen test domains. To mitigate this negative effect, we further apply a correlation-based regularization in addition to the minimum word error loss, as shown in Section 3.3.
The proposed **Low**-rank **R**escoring for **B**ERT (LoRB) is evaluated on both a public dataset and internal datasets covering a range of domains. We show that **LoRB** can achieve comparable performance on the target domain and even better performance on non-target domains, as compared to full fine-tuning and other parameter-efficient methods, using only **0.08%** of the trainable parameters updated in fine-tuning. Additionally, LoRB can save up to **32%** training memory utilization and achieve up to **6-fold** reduction in training times, by allowing training with a larger learning rate.
## 2 Related Work
### Low-rank adaptation
LoRA has been widely investigated in the natural language processing (NLP) domain. For example, [22] explores an automatic way to select the optimal rank value of LoRA matrices. [23, 24] discuss the most effective transformer modules in which to insert LoRA matrices, while [25] examines the parameter allocation among weight matrices. Some studies have investigated the underlying reasons for the effectiveness of LoRA. [26, 27] discovered that the sparsity of learned weights imposes a regularization effect on the original model, resulting in improved generalization. [28] demonstrated that constraining the dimensionality of the optimization problem can effectively mitigate catastrophic forgetting. Beyond NLP, low-rank adaptation has also been applied in vision tasks by fine-tuning vision transformers [28, 29, 30]. However, it remains to be seen whether the findings for NLP and vision tasks can be transferred to second-pass rescoring in automatic speech recognition.
### Domain adaptation for ASR
In the domain adaptation research for ASR, the focus has been largely on first-pass acoustic models. Strategies such as contextual biasing have been widely used for RNN-T models [31, 32]. Additionally, for low-resource target domains, self-supervised training and semi-supervised training strategies have been explored [33, 34, 35] using speech model reprogramming or adapters.
For second-pass models, [36] explored fine-tuning a general rescoring model for new domains and incorporating a domain classifier to switch between domain-specific models. [37] proposed training of prompt embeddings for target domains and attaching them to the N-best list before scoring with the rescoring GPT2 model. However, this method introduces additional inference latency due to the prepended prompts. Our work, by contrast, aims to explore the generalization effects of low-rank parameter-efficient fine-tuning methods, while reducing the computational cost of domain adaptation without introducing additional inference latency.
## 3 Approach
### Discriminative training for second-pass rescoring
#### 3.1.1 Second-pass rescoring
In this section, we formulate the second-pass rescoring task. Given an _N_-best hypothesis list \(E=\{E_{1},E_{2},\ldots,E_{n}\}\) obtained from the beam search in the decoder based on the first-pass acoustic model, the rescoring model will generate scores for each hypothesis. For any hypothesis \(E_{i}\in E\), denote by \(s_{i}^{a}\) the score given by the first pass, and by \(s_{i}^{l}\) the score produced by the second pass. For both passes, the score of a hypothesis represents the negative log likelihood, thus a lower score represents a more likely hypothesis.
The language model, such as BERT, takes a hypothesis and outputs a hidden representation \(g_{i}\), then the feed-forward network takes the representation of the task-specific [CLS] token as input and derives the second-pass score \(s_{i}^{l}\), as shown by Equation (2):
\[g_{i}=\text{BERT}(E_{i}) \tag{1}\]
\[s_{i}^{l}=\text{FFNN}(g_{i}^{\text{CLS}}) \tag{2}\]
The final score of a hypothesis is the linear combination of the first- and second-pass scores:
\[s_{i}=s_{i}^{a}+\beta\cdot s_{i}^{l} \tag{3}\]
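For concreteness, a minimal sketch of this score combination and hypothesis selection is shown below; the value of \(\beta\) and the score lists are illustrative placeholders, not settings used in our experiments.

```python
# Sketch of Eq. (3): combine first- and second-pass scores (negative
# log-likelihoods, so lower is better) and pick the best hypothesis.
def select_best_hypothesis(first_pass_scores, second_pass_scores, beta=0.5):
    combined = [s_a + beta * s_l
                for s_a, s_l in zip(first_pass_scores, second_pass_scores)]
    return min(range(len(combined)), key=combined.__getitem__)

# Toy 3-best list: hypothesis 1 wins after rescoring.
print(select_best_hypothesis([10.2, 10.5, 11.0], [4.0, 2.1, 3.3]))
```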
#### 3.1.2 Discriminative training objective
Discriminative training has been widely explored for second-pass rescoring. Specifically, BERT as a masked language model has been applied to second-pass rescoring [20] by training with a discriminative objective of minimum word error rate (MWER) [38]. Given a hypothesis \(E_{i}\in E\), denote by \(\epsilon_{i}\) the number of word errors (edit distance) from the ground truth transcription. The MWER loss function is defined as the expected number of word errors for the N-best hypothesis, as shown by Equation (6):
\[P_{i}=\frac{e^{-s_{i}}}{\sum_{j=1}^{n}e^{-s_{j}}} \tag{4}\]
\[\bar{\epsilon}_{H}=\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i} \tag{5}\]
\[\mathcal{L}_{\mathrm{MWER}}=\sum_{i=1}^{n}P_{i}\cdot(\epsilon_{i}-\bar{ \epsilon}_{H}) \tag{6}\]
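A minimal PyTorch sketch of this objective is given below, assuming the combined scores \(s_i\) are differentiable outputs of the rescoring model; the toy scores and error counts are illustrative.

```python
import torch

def mwer_loss(scores, word_errors):
    """MWER loss of Equations (4)-(6): expected word errors relative to
    the average, under the hypothesis distribution induced by the scores."""
    probs = torch.softmax(-scores, dim=-1)             # Eq. (4): P_i
    avg_err = word_errors.mean()                       # Eq. (5): average error
    return torch.sum(probs * (word_errors - avg_err))  # Eq. (6)

s = torch.tensor([2.0, 2.5, 3.0, 4.0], requires_grad=True)  # toy 4-best scores
eps = torch.tensor([1.0, 0.0, 2.0, 3.0])                    # toy edit distances
mwer_loss(s, eps).backward()  # gradients flow into the rescoring model via s
```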
### Low-rank adaptation to ASR rescoring
In the previous modification of BERT for the rescoring task, the pretrained weights \(\Phi_{0}\) of BERT are updated to \(\Phi_{0}+\Delta\Phi\) by following the gradient for minimizing the MWER loss. The process of learning task-relevant parameters \(\Delta\Phi\) is known as the full fine-tuning process. In the full fine-tuning process,
the dimension of the learned parameters \(|\Delta\Phi|\) equals that of the pretrained weights \(|\Phi_{0}|\).
As shown by [39], pretrained language models have a low intrinsic dimension and can learn efficiently through a low-dimensional reparameterization. Inspired by this finding and the success of low-rank adaptation of large language models in NLP tasks [19], we propose adapting BERT for the rescoring task by learning a low-rank representation \(\Theta\) that has a much smaller dimension than \(\Phi_{0}\), or \(|\Theta|\ll|\Phi_{0}|\).
Formally, for any dense layer in the transformer blocks with input \(x\) and output \(h\), denote the pretrained weight as \(W_{0}\in\mathbb{R}^{d\times k}\), and the updates to the weight as \(\Delta W\). We perform a low-rank decomposition to the updates \(\Delta W=W_{B}W_{A}\), where \(W_{B}\in\mathbb{R}^{d\times r}\), \(W_{A}\in\mathbb{R}^{r\times k}\) and \(r\ll\min(d,k)\). The forward pass is modified to be
\[h=W_{0}x+\Delta Wx=W_{0}x+W_{B}W_{A}x \tag{7}\]
During training, \(W_{0}\) is frozen and only \(W_{A}\) and \(W_{B}\) are updated. In BERT, LoRA can be applied to any subset of weight matrices, for example, \(W_{0}\) could be \(W_{q}\), \(W_{k}\), \(W_{v}\) or \(W_{o}\) inside a self-attention module, or be the weight matrices in the two-layer feed-forward network, i.e., \(W_{f_{1}}\) and \(W_{f_{2}}\).
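As a concrete illustration, a LoRA-wrapped dense layer might look like the following PyTorch sketch; the \(\alpha/r\) scaling and the zero initialization of \(W_{B}\) follow common LoRA practice [19] and are assumptions rather than details specified above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Dense layer with a frozen pretrained weight W0 and a trainable
    low-rank update W_B @ W_A added to it, per Equation (7)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=32.0, dropout=0.01):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze W0 (and bias)
            p.requires_grad = False
        d, k = base.out_features, base.in_features
        self.W_A = nn.Parameter(torch.randn(rank, k) * 0.01)  # r x k
        self.W_B = nn.Parameter(torch.zeros(d, rank))         # d x r, zero init
        self.scale = alpha / rank
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # h = W0 x + W_B W_A x, Eq. (7); only W_A and W_B receive gradients.
        return self.base(x) + self.scale * (self.drop(x) @ self.W_A.T @ self.W_B.T)

# Example: wrap the value projection W_v of one self-attention module.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```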
### Multi-loss training with regularization
Fine-tuning large pretrained models often leads to overfitting on the training data for downstream tasks [21, 40]. Even though some parameter-efficient fine-tuning methods are shown to be helpful in alleviating the overfitting issues by constraining the number of trainable parameters [41, 42, 43], in some of our experiments a marginal degradation of performance on unseen test sets is observed when evaluating the LoRA fine-tuned rescoring model.
In order to obtain a hidden representation from the pretrained BERT with better generalization performance, we add a correlation-based regularization loss \(\mathcal{L}_{cor}\) besides the MWER loss:
\[\mathcal{L}=\mathcal{L}_{\mathrm{MWER}}+\lambda\mathcal{L}_{cor} \tag{8}\]
The correlation-based regularization [44] has been proposed to alleviate the representation degeneration [45] problem caused by fine-tuning on pretrained language models. By forcing the feature space of representations to be more isotropic (uniformly variable in all directions), the expressiveness of the learned representation can be preserved better. Formally, the correlation-based regularization loss is defined so as to penalize the correlation matrix for sentence representations for deviating from the identity:
\[\mathcal{L}_{cor}=\left\|\Sigma-\mathrm{I}\right\| \tag{9}\]
where \(\left\|\cdot\right\|\) denotes the Frobenius norm, \(\mathrm{I}\in\mathbb{R}^{d_{h}\times d_{h}}\) is the identity matrix, \(\Sigma\in\mathbb{R}^{d_{h}\times d_{h}}\) is the correlation matrix with \(\Sigma_{ij}\) being the Pearson correlation coefficient between the \(i\)th dimension and the \(j\)th dimension of the hidden representation of the [CLS] token \(g^{\mathrm{CLS}}\in\mathbb{R}^{d_{h}}\). In the case of LoRB, only the LoRA matrices that contribute to the hidden representation of the [CLS] token in each BERT layer are regularized by the correlation-matrix loss.
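A sketch of this regularizer is given below, assuming the Pearson correlations are computed across the [CLS] representations of a training batch; the \(\lambda\) in the final comment is an illustrative value.

```python
import torch

def correlation_regularizer(cls_embeddings):
    """Equation (9): Frobenius norm of (Sigma - I), where Sigma holds the
    Pearson correlations between dimensions of g^CLS over a batch.
    cls_embeddings: (batch, d_h) tensor."""
    x = cls_embeddings - cls_embeddings.mean(dim=0, keepdim=True)
    x = x / (x.std(dim=0, keepdim=True) + 1e-8)  # standardize each dimension
    sigma = (x.T @ x) / (x.shape[0] - 1)         # correlation matrix Sigma
    eye = torch.eye(sigma.shape[0], device=sigma.device)
    return torch.linalg.norm(sigma - eye)        # Frobenius norm by default

# Combined objective of Equation (8), with an illustrative lambda:
# loss = mwer_loss(scores, errs) + 0.1 * correlation_regularizer(cls_batch)
```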
Figure 1: Illustration of the Low-Rank adaptation based Rescoring BERT (LoRB).
## 4 Experiments
### Datasets
The training datasets for domain adaptation include one public dataset, LibriSpeech [46], and two internal datasets: _Messaging_ (350 hours) and _Music_ (150 hours). Furthermore, we explore the scaling behavior with regard to the sizes of the pretrained model and the training data, using an internal _conversational domain_ dataset.
We evaluate the low-rank adaptation of the language model on three internal datasets drawn from de-identified, far-field English-language conversations with a voice assistant. The internal _General_ domain set contains 194 hours, the _Shopping_ domain set contains 20 hours, and the _Knowledge_ domain set contains 5 hours of training data, respectively.
### Implementation
In the adaptation experiments, we vary the LoRA rank over the values {4,8,16,32} and apply LoRA to two sets of target modules: [\(W_{q}\), \(W_{v}\)] and [\(W_{q}\), \(W_{k}\), \(W_{v}\), \(W_{f_{1}}\), \(W_{f_{2}}\)]. In the LoRA layer, we set the dropout rate to \(0.01\) and \(\alpha=32\). When fine-tuning RescoreBERT, we initialize the feed-forward network in RescoreBERT from the pretrained model checkpoints and continuously update the parameters in the feed-forward network, as shown in Figure 1. For all parameter-efficient training methods and full fine-tuning, we use early stopping to evaluate the checkpoint with best performance on an in-domain validation set.
For LibriSpeech, we fine-tune the cased BERTbase model for fair comparison with previous work. For other internal training datasets, we fine-tune an in-house 170M RescoreBERT model with 16 layers and 1024-dimensional hidden layers, which was trained on internal data with the discriminative training objective for 435K steps.
### Baselines
The word error rate (WER) of the first-pass RNN-Transducer speech recognition baseline system used is below 10%. We compare the fine-tuning results of low-rank adaptation with full fine-tuning and three other parameter-efficient fine-tuning methods. Here the "Adapter" method refers to the standard residual adapter proposed in [12], which has a latent dimension that is half of its encoder dimension, \(768\). Adapter layers are inserted into the self-attention module and the subsequent residual connection, as well as into the MLP module and its subsequent residual connection. Each adapter layer includes two fully connected layers, bias vectors, and a non-linearity placed between them. The "BitFit" method, proposed in [13], involves training the bias vectors in each module while freezing all other parameters. The "Prefix" method refers to prefix-tuning [11], which inserts trainable tokens into the input sequence.
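To make concrete how little is trained under BitFit, the following sketch freezes everything except bias terms; matching parameters by the name suffix "bias" is an assumption about the model's parameter-naming convention.

```python
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> nn.Module:
    """Freeze every parameter except the bias vectors, as in BitFit [13]."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return model

# After this call, only parameters named '*.bias' receive gradient updates.
```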
| Method | % Trainable Parameters | Target Domain: MessagingTest | Non-Target: General | Non-Target: Shopping | Non-Target: Knowledge |
| --- | --- | --- | --- | --- | --- |
| RescoreBERT\({}_{pretrained}\) 170M | non-adapted | baseline | baseline | baseline | baseline |
| w/ Fine-Tuning (FT) | 100% | 3.30% | -2.33% | -1.17% | -0.34% |
| w/ Residual Adapter | 1.27% | 3.72% | -16.60% | -17.33% | -17.07% |
| w/ BitFit | 0.01% | 3.30% | -18.83% | -17.57% | -20.90% |
| w/ Prefix | 0.05% | 3.30% | -1.98% | -1.53% | -1.39% |
| LoRB | 0.08% | **6.06%** | **0.27%** | 0.23% | **0.34%** |
| LoRB + \(\mathcal{L}_{cor}\) | 0.08% | **5.65%** | **-0.51%** | **0.82%** | **0.01%** |

Table 1: Relative WER improvement of LoRB, full fine-tuning (FT), Adapter and BitFit when fine-tuning on messaging data.
Figure 2: Wall-clock training time of LoRB, LoRB+\(\mathcal{L}_{cor}\) and Fine-Tuning (FT) when training on _messaging_ data.
## 5 Results and Analysis
### Low-rank domain adaptation
#### 5.1.1 Messaging data as continuous domain adaptation
Table 1 shows the evaluation results on four internal datasets. We fine-tune a 170M RescoreBERT model with the MWER training objective on an internal _messaging_ (MSG) dataset. The fine-tuned models are evaluated on both the in-domain _messaging_ test set and out-of-distribution data from the _General_, _Shopping_ and _Knowledge_ domains. The first row shows the test evaluation results of the 170M RescoreBERT model without any fine-tuning. All parameter-efficient fine-tuning methods achieve performance comparable to or better than full fine-tuning (FT) on the target domain _Messaging_. However, FT, Adapter and BitFit suffer from performance degradation on out-of-distribution data, while LoRB performs robustly in both the target domain and non-target domains.
#### 5.1.2 Case Study 1: Effect of regularization
Table 2 presents the performance comparison of LoRB and LoRB with correlation-based regularization against baseline methods on three internal test sets from nontarget domains. Our experiments reveal that the Music domain data is prone to overfitting when fine-tuning is applied, resulting in degradation on other domain data. This can be attributed to the limited dataset size and the presence of challenging rare words like artist names. While both Adapter and LoRB techniques exhibit some level of improvement in mitigating the degradation across most domains, the combination of LoRB with correlation-based regularization results in the most substantial improvement in performance.
#### 5.1.3 Case Study 2: Public dataset
Table 3 shows the WER on test-Clean and test-Other portions of the LibriSpeech dataset. We follow a Whisper setup [47] for first-pass decoding. On both test sets, LoRB achieves the largest reduction in WER compared to other parameter-efficient training methods. Specifically, in test-Other, LoRB can achieve results comparable to FT with only 0.27% of the parameters, and the correlation-based loss brings further improvements, which aligns with our findings in Case Study 1.
#### 5.1.4 Analysis: Training stability
Table 4 shows the word error rate after full fine-tuning and LoRB under different training hyper-parameter settings. We observed that FT is brittle for various combinations of warm-up steps and learning rate schedules, while LoRB is more robust to changes in hyperparameters.
#### 5.1.5 Analysis: Training time and GPU memory utilization
A training time comparison is shown in Figure 2. We find that, while LoRB takes longer to converge compared to FT at the same learning rate, the performance of FT degrades greatly when the learning rate is increased. As a result, we can utilize LoRB to achieve a similar WER as FT with shorter training time by benefiting from the larger learning rate, as shown in Figure 2. Furthermore, we find that LoRB can reduce the GPU memory percentage used during training substantially, from 87% to 52%.
#### 5.1.6 LLM scaling results
In this section, we show how the scale of the underlying pretrained language model and the scale of the training dataset can affect the performance of LoRB. We use an internal conversational dataset (roughly 60M utterances) as the training source.
| Method | warmup=5k, lr=1e-5 | warmup=5k, lr=1e-7 | warmup=10k, lr=1e-5 | warmup=10k, lr=1e-7 |
| --- | --- | --- | --- | --- |
| RescoreBERT | baseline | baseline | baseline | baseline |
| FT | -72.2% | -2.0% | -6.48% | -1.17% |
| LoRB\({}_{170M}\) | 0 | 0 | +0.23% | +0.11% |

Table 4: Relative WER improvement on the non-target Shopping domain compared to 170M RescoreBERT without fine-tuning, under different warm-up steps and learning rate combinations.
| Method | General | Shopping | Knowledge | Average |
| --- | --- | --- | --- | --- |
| Fine-Tuning (FT) | baseline | baseline | baseline | baseline |
| Residual Adapter | -0.14% | 0.49% | 0.3% | 0.22% |
| LoRB\({}_{170M}\) | -0.5% | 0.21% | 0.90% | 0.20% |
| LoRB\({}_{170M}\) + \(\mathcal{L}_{cor}\) | **0.22%** | **0.71%** | **1.21%** | **0.71%** |

Table 2: Relative WER improvement of LoRB\({}_{170M}\), full fine-tuning (FT) and Adapter on non-target domains when fine-tuning on Music data.
| Model & Method | % Params | test-Clean | test-Other |
| --- | --- | --- | --- |
| BERT\({}_{\text{base-cased}}\) | non-adapted | 6.17 | 13.81 |
| w/ FT | 100% | **4.37** | 10.80 |
| w/ Residual Adapter | 2.15% | 5.29 | 12.01 |
| w/ BitFit | **0.01%** | 5.60 | 12.43 |
| w/ Prefix | 0.34% | 5.30 | 12.05 |
| LoRB\({}_{170M}\) | 0.27% | 4.50 | 10.81 |
| LoRB\({}_{170M}\) + \(\mathcal{L}_{cor}\) | 0.27% | **4.47** | **10.78** |

Table 3: Absolute WER on the two standard test sets of public LibriSpeech [46], with the baseline decoded by Whisper-tiny. The 170M BERT base model is retrieved from the official public release [48] for reproducible evaluation under the Apache License.
To evaluate the scaling behavior for varying pre-trained model sizes, we fine-tune in-house RescoreBERT models with 5M, 170M and 1B parameters, respectively, on a set of 150K conversational training utterances. To investigate the scaling behavior for data sizes, we split the conversational training data into five log scales with roughly 20M/5M/1500K/500K/150K utterances, respectively.
Figure 3 shows the scaling with regard to model size. As the size of the pretrained language model increases, the performance gap between FT and LoRB shrinks: in terms of relative WER (WERR) difference, it is reduced from -22.3% at the 170M scale to +2.4% at the 1B scale. In our ASR rescoring model experiments, we also found that a larger BERT model size improves the convergence speed of LoRB by a factor of 2.74, which has benefits for production-size deployments.
Figure 4 shows the WER on the same conversational test set for models trained on different amounts of data. In general, we observe that a larger data size correlates with greater improvement in performance. Notably, the improvement resulting from a change in data scale from \(150K\) to \(500K\) is nearly four times that observed when transitioning from \(500K\) to \(20M\) for LoRB. Unlike the linear scaling law observed in full fine-tuning [49], LoRB follows a logarithmic scaling curve, approaching a fixed value as the data size reaches a certain threshold. Figure 5 shows the scaling of LoRB across various rank sizes. While there is no obvious correlation between rank value and word error rate across different data scale settings, the general trend remains consistent: larger dataset sizes lead to a more substantial performance gap compared to full fine-tuning (FT).
## 6 Conclusion
We have introduced LoRB, an efficient and scalable low-rank decomposition for domain-adaptation of BERT-based rescoring models with low computation cost and no performance degradation when trained on limited-size in-domain data. By inserting weight matrices amounting to only \(0.08\)% of the parameters of the pretrained models and freezing all other parameters, we achieve speech recognition performance comparable to full fine-tuning with a 6-fold speedup in training. Experimental rescoring results on public and internal datasets demonstrate the effectiveness and generalization of the LoRB framework and a correlation-based multi-loss training. The scaling results highlight the importance of large pretrained models for best speech recognition rescoring results.
Figure 4: WER evaluated by 1B RescoreBERT, fine-tuned with various sizes of “conversational domain” data using FT and LoRA.
Figure 5: WER as a function of data size, evaluated by 1B RescoreBERT, fine-tuned with FT and various ranks of LoRA.
Figure 3: WER on a conversational test set evaluated by RescoreBERT of size 5M, 170M and 1B, fine-tuned with “conversational domain” data using FT and LoRA. |
2302.14528 | Analysis and experimental study on the Jumping Chain | A freely falling chain from a cup at certain height can jump. The process can
be divided into two parts: a stable suspension and an accelerating procedure.
Variational principle and force analysis demonstrate that the shape of stable
suspension is an inverted catenary. The requirement of the jumping and the
parameters to describe the jumping catenary have been studied in detail, and
experiments have been conducted to verify the theoretical analysis. The
physical picture of the falling chain could be useful in certain falling
systems, providing valuable insight into the dynamical system. | Wenyu Wang, Wu-Long Xu, Yang Xu, Xu-Dong Yang | 2023-02-28T12:40:20Z | http://arxiv.org/abs/2302.14528v1 | # Analysis and experimental study on the Jumping Chain
###### Abstract
A freely falling chain from a cup at certain height can jump. The process can be divided into two parts: a stable suspension and an accelerating procedure. Variational principle and force analysis demonstrate that the shape of stable suspension is an inverted catenary. The requirement of the jumping and the parameters to describe the jumping catenary have been studied in detail, and experiments have been conducted to verify the theoretical analysis. The physical picture of the falling chain could be useful in certain falling systems, providing valuable insight into the dynamical system.
pacs: 45.50.Dd, 05.45.-a, 47.54.-r, 05.45.Xt
## I Introduction
There is a very interesting physical phenomenon as shown in FIG. 1: a coiled but not entangled chain is put in a cup. When the end of the chain is pulled out of the cup, it will fall freely along the wall of the cup. At a certain point, usually in a sudden moment, the chain can jump up and remain in a steady arc shape which can last for a certain duration in the air. This experiment is often demonstrated in college physics classes as an example of an intriguing classical mechanical problem. The reader can also observe the details of the jumping in the supplementary video or on the internet website [1]. In this paper, we attempt to analyze and study the pattern and related topics such as the height of the jump and the height of the fall, _etc._ Experiments will be conducted and the data will be analyzed to verify our theoretical analysis [2].
In fact, the phenomenon is an application of classical mechanics in string, rope or fluid physics [3]. The jumping rope is a familiar example of a slender structure that interacts with the fluid through which it moves [4]. The chain is usually suspended and swung by a mechanical apparatus, and the pattern is well studied in the literature [5]. This is because an important class of truly 1D problems involves the motion of lines or filaments embedded in 3D space, such as defect lines in liquid crystals, vortex lines in a type-II superconductor, or polymers in imposed flows. Nonlinear partial differential equations describe the dynamics, and the twisting, breaking, reconnecting, or knotting are the essential topological parts of the chain dynamics [6]. However, the jumping chain studied in this paper is not suspended but jumps from a higher platform, falling freely in space under the attraction of gravity. The damping from the viscosity of the air seems to be a sub-dominant effect in the whole process. Gravity and the tension inside the chain play an important role. The subtle point is that, although the matter is moving from the higher position to the lower ground, the system is not a fluid or a falling stream. A detailed study will reveal the different dynamics.
In summary, the paper is organized as follows: the phenomenology and dynamics of the jumping chain are analyzed in Sec. II; the experimental verification of our analysis is presented in Sec. III; the conclusion is given in Sec. IV.
## II Phenomenology and theoretical analysis
### Phenomena
First, let's describe how the jumping phenomenon takes place. As shown in FIG. 1, the chains are coiled layer by layer inside a cup, and the chain is kept from becoming entangled (it is difficult to keep all steel balls untangled in the preparation, and this is one of the dominant disturbing factors of the experiment). Then the cup is placed at a certain height. A small length of chain is taken out of the cup and allowed to fall freely. In the actual operation, we also tried to use a soft and non-contractible rope to do the same experiment. The rope can barely jump up, and the phenomenon is rather obscure, so the thin metal chain composed of steel balls is adopted in our experiment. The steel balls are loosely connected, and the chain is non-retractable. This means that the chain does not have flexural capacity, which is the key difference between the chain and the soft rope. The reason will be discussed in the following context.

Figure 1: The jumping chain.
The experiments show that the chain jumps to an appreciably higher position from the cup if the cup is set at certain heights. Due to the complex way the chain winds in the cup, the starting point of the jump keeps changing, so the shape of the jump from the cup to the highest point is very complicated. As for the pattern between the highest point and the falling-off point on the ground, the chain can maintain an almost stable shape near the highest point at the top. The shape keeps shaking near the ground, but this does not affect the stable shape at the top.
Summarizing our observations, we think this physical phenomenon can be divided into two sub-processes: the accelerating process and the stable suspension process. Thus our analysis on the phenomenon is set out as shown in FIG. 2, in which point A is the beginning of the suspension and the end is point B, which is the collision point on the ground. The point C is the starting point of the jumping in the cup where the elements of the chain are static. The part from C to A is the accelerating process. Note that, as discussed above, point B keeps shaking, and point C also varies quickly. At the same time, the height of Point C is decreasing. The definitions of the variables such as \(h_{j},\ h_{i}\)_etc._ are given in the following context. One can see that the stable suspension is obviously of the most concern, and according to following analysis, we find that this is actually the key point to comprehend the phenomenon. Thus we analyze the stable suspension in the following subsection.
### Analysis on the shape of the stable suspension
At first glance, the jumping chain appears like a parabolic curve. However, the truth is not as simple as expected. A free-falling mass point follows a parabolic trajectory. However, for the non-retractable chain, each element on the suspended chain moves at the same speed; only the directions vary. Based on our observations, the coiled chain in the cup moves toward the ground at an almost constant rate, so it is natural to assume, as an ideal case, that the suspended chain maintains a constant speed \(v_{0}\) over an appropriate interval. Then, in a time \(\Delta t\), a part of the chain with mass \(\lambda v_{0}\Delta t\) (\(\lambda\) denotes the line density) in the cup is accelerated from 0 to \(v_{0}\). At point A, the chain feels the dragging tension from the chain at rest in the cup as follows:
\[T_{0}=\lim_{\Delta t\to 0}\frac{\lambda v_{0}^{2}\Delta t}{\Delta t}= \lambda v_{0}^{2}\,. \tag{1}\]
It should be noted that this is merely the ideal case, and the following study will show that the actual tension at point A is less than \(\lambda v_{0}^{2}\). At point B, the chain element with velocity \(v_{0}\) collides with the ground. If the collision were elastic, the chain would rebound from the ground with velocity \(v_{0}\). A similar analysis gives that the impulsive force on the ground is \(2\lambda v_{0}^{2}\). Of course, the collision is not completely elastic due to the dragging. We shall provide the specific method for dealing with the impulsive force at point B later.
We find that, very interestingly, the shape of the suspended chain is actually an inverted catenary. The exploration of the catenary was an important step in the history of classical mechanics and mathematics, and our discovery in fact fills a gap in the catenary theory. Here we briefly show the derivation of the catenary and explain why the suspended chain takes the same shape. Suppose the shape of the chain is \(y=y(x)\); the catenary problem is to find a \(y(x)\) such that the potential, or center of gravity, of the chain is at its lowest point. Using the variational principle, the action of the whole system is the center of gravity in the vertical direction
\[S=-\int\lambda g(y-c)\sqrt{1+{y^{\prime}}^{2}}dx=\int{\cal L}dx, \tag{2}\]
Figure 2: The sketch map of our analysis on the jumping chain.
where \(g\) is the acceleration of gravity and \(c\) is an arbitrary constant, \(y^{\prime}\) is the derivative of \(y\). \(\mathcal{L}\) is the Lagrangian of the system
\[\mathcal{L}=-\lambda g(y-c)\sqrt{1+{y^{\prime}}^{2}}. \tag{3}\]
The motion for \(\mathcal{L}\) gives the non-linear differential equation of the catenary
\[(y-c)y^{\prime\prime}-{y^{\prime}}^{2}-1=0. \tag{4}\]
The solution is
\[y=c+\frac{a}{2}(e^{\frac{x}{a}+b}+e^{-\frac{x}{a}-b}). \tag{5}\]
where \(a,\ b\) are parameters determined by the boundary conditions of the catenary. The approach described above is the standard derivation of the catenary. Note that here the suspended chain can be hung at any height, so an additional constant \(c\) is introduced. In fact, we notice that, although the equation of motion Eq. (4) is nonlinear, it has a discrete symmetry and \(y(x)\) can be transformed as
\[y(x)\to Y(x)=-y(x)+2c. \tag{6}\]
\(Y(x)\) is also a solution of the catenary equation. Despite its lack of practical application, this inverted catenary is a valid solution; it is an unstable extremum of the variational problem, which has been largely overlooked in the literature. Nevertheless, our research reveals that this inverted solution is indeed the exact shape of the suspended chain.
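This symmetry is easy to verify mechanically; the following sympy sketch (not part of the original derivation) checks that the reflected solution still satisfies Eq. (4).

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', positive=True)
y = c + (a / 2) * (sp.exp(x / a + b) + sp.exp(-x / a - b))  # catenary, Eq. (5)
Y = -y + 2 * c                                              # reflection, Eq. (6)

# Residual of the catenary equation (y - c) y'' - y'^2 - 1 = 0 for Y(x).
residual = (Y - c) * sp.diff(Y, x, 2) - sp.diff(Y, x) ** 2 - 1
print(sp.simplify(residual))  # prints 0: the inverted catenary is a solution
```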
For the jumping chain, the potential energy is the same as that of the catenary. Therefore only the kinetic term needs to be included in the action
\[S=\int\mathcal{L}dx=\int\left(\frac{1}{2}\lambda\sqrt{1+{y^{\prime}}^{2}}v_{0} ^{2}-\lambda gy\sqrt{1+{y^{\prime}}^{2}}\right)dx\,. \tag{7}\]
By comparing Eq. (3) and assuming that \(v_{0}^{2}\) is a constant, we can observe that the Lagrangian of the jumping chain is identical to that of the catenary, with the only difference being that \(c\) is taken as a constant
\[c=\frac{v_{0}^{2}}{2g}\,. \tag{8}\]
Due to the given conditions, the solution will be a convexity, specifically, an inverted catenary
\[y=c-\frac{a}{2}(e^{\frac{x}{a}}+e^{-\frac{x}{a}})\,. \tag{9}\]
The difference between the derivation of the ordinary catenary and that of the jumping chain is that the chain is constantly in motion. The elements of the chain are continually jumping out of the cup and falling onto the ground, thus changing the mass of the system. This may lead some readers to be skeptical about the validity of the variational principle. However, the same differential equation of motion can be derived by force analysis. As shown in FIG. 3, the tension in the jumping chain is denoted as \(T=T(x)\), and the convention of the forward direction is defined as from point A to point B. Every element of the chain is moving in a curvilinear motion with the same speed, so the tangential acceleration is zero, and the infinitesimal tension \(dT\) cancels the tangential component of gravity:
\[dT=\lambda g\sqrt{1+{y^{\prime}}^{2}}dx\frac{y^{\prime}}{\sqrt{1+{y^{\prime}} ^{2}}}=\lambda g\frac{dy}{dx}dx=\lambda gdy\,. \tag{10}\]
Thus the tension in the chain can be expressed as
\[T=T_{A}+\lambda gy\,. \tag{11}\]
This is a very interesting result: the tension in the stable jumping chain changes along with the height of the elements, the same as the tension in the ordinary catenary. To analyze the normal component of the force on an infinitesimal chain element imposed by the tension through the curvature of the chain, we can look to the catenary or a hanging rope for comparison. In the case of the ordinary catenary, the normal component of the tension is canceled by gravity, but in the case of the jumping chain, the normal component of the tension provides the centripetal force together with gravity. The geometrical configuration is shown in FIG. 3, where we investigate the centripetal force acting on the two chain elements \(x-dx\to x\) and \(x\to x+dx\). From this, we can easily derive the equation for the centripetal force
\[\frac{2\lambda\sqrt{1+{y^{\prime}}^{2}}\,dx\,v_{0}^{2}}{\rho}=2Td\theta+2\lambda g\sqrt{1+{y^{\prime}}^{2}}\,dx\,\frac{1}{\sqrt{1+{y^{\prime}}^{2}}}\,,\]
where \(\rho\) is the radius of curvature at point \(x\). By substituting \(\rho\), \(d\theta\), and tension \(T\), we can get
\[\frac{2\lambda|{y^{\prime}}|v_{0}^{2}dx}{1+{y^{\prime}}^{2}}=\frac{2(T_{A}+ \lambda gy)|{y^{\prime}}|dx}{1+{y^{\prime}}^{2}}+2\lambda gdx\,. \tag{12}\]
Figure 3: The force analysis on the elements of the jumping chain.
Due to the convexity of the jumping chain
\[|y^{\prime\prime}|=-y^{\prime\prime}\,. \tag{13}\]
Eq. (12) is then reduced to
\[-(T_{A}+\lambda gy-\lambda v_{0}^{2})y^{\prime\prime}+\lambda gy^{\prime 2}+ \lambda g=0\,. \tag{14}\]
Given that \(T_{A}\) is the variable to be solved, it is convenient to define
\[T_{A}-\lambda v_{0}^{2}=-\lambda gc\,. \tag{15}\]
Then the differential equation of \(y\) is
\[(y-c)y^{\prime\prime}-{y^{\prime}}^{2}-1=0\,. \tag{16}\]
This proves the same catenary equation again.
Although the force analysis is a bit more complicated than the variational principle of mechanics, its physical picture is easier to understand because it is an analysis of forces. We can also see that the forces at the two ends of the catenary and of the jumping chain are different. In the case of the ordinary catenary, the tensions at the two ends are dragging the chain, whereas in the case of the jumping chain, the tension at point A is dragging the chain while the tension at point B is pushing the chain. Of course, this is the ideal case; the jumping process will be further analyzed in the following.
### Analysis on the jumping
First, let's begin with the explanation of why \(T_{A}\neq\lambda v_{0}^{2}\). \(T_{A}=\lambda v_{0}^{2}\) corresponds to the ideal case in which each chain element instantaneously accelerates from \(0\) to \(v_{0}\). In contrast, the actual situation is that the chain is loosely coiled and each chain element needs an interval for the acceleration, making \(T_{A}\) smaller than \(\lambda v_{0}^{2}\). Analytically, if \(c=0\), the shape of the suspended chain line is
\[y=-\frac{a}{2}(e^{\frac{x}{a}}+e^{-\frac{x}{a}})\,, \tag{17}\]
then
\[y\leq-a\,. \tag{18}\]
It is evident that a \(y>0\) solution does not exist, as indicated by Eq. (16), which is a non-linear differential equation. Therefore, if the coordinate system shown in FIG. 2 is used, a \(y>0\) solution requires a non-zero \(c\). From Eq. (15), it is clear that \(T_{A}\) must be less than \(\lambda v_{0}^{2}\). As shown in Fig. 2, the acceleration process for the elements above the stable suspended shape is added, and the height of the accelerating (jumping-up) process from point C to point A is measured as \(h_{i}\). The jumping height of the stable suspended chain line is \(h_{r}\). The total jumping height is thus the sum of \(h_{i}\) and \(h_{r}\):
\[h_{j}=h_{i}+h_{r}\,. \tag{19}\]
Along with the conventions mentioned above, the falling height (the height of the cup) of the chain is denoted as \(H\), and the width of the jump is \(2x_{0}\), as shown in Fig. 2. We will further investigate the relationships between these quantities for the jumping chain and conduct experiments to verify our analysis.
The tension at point A is
\[T_{A}=\lambda v_{0}^{2}-\lambda gc\,. \tag{20}\]
According to the Eq. (11), the tension \(T\) from point A to point B is
\[T=\lambda v_{0}^{2}+\lambda gy-\lambda gc\,. \tag{21}\]
As discussed previously, an elastic collision implies that the force imposed by the ground on the chain at point B would be \(2\lambda v_{0}^{2}\). This may appear suspicious to some readers, as a deceleration process is needed at point B: when the chain finally falls on the ground, the tension should be less than \(2\lambda v_{0}^{2}\). However, we found that the chain elements can bounce slightly on the ground, so the actual impulse on the ground should be greater than \(\lambda v_{0}^{2}\). To account for the force more precisely, a constant coefficient \(\alpha\) can be introduced:
\[-\alpha\lambda v_{0}^{2}=\lambda v_{0}^{2}-\lambda gH-\lambda gc\,. \tag{22}\]
The value of \(\alpha\) should be greater than \(1\) and less than \(2\). It should be noted that the tension at point B should be negative according to the direction defined above. Since the falling height \(H\) is much larger than the jumping height \(h_{j}\), the tension fluctuation at point B does not appreciably affect the stable suspension above. Then
\[v_{0}^{2}=\frac{1}{\alpha+1}g(H+c)\,. \tag{23}\]
This is the relation between the falling height and the velocity of the suspended chain. The jumping height \(y(x=0)\) of the stable suspended chain is
\[h_{r}=c-a\,. \tag{24}\]
Then all the exploration focuses on the pending parameters \(c\) and \(\alpha\), which will be analyzed in the next section when discussing the experimental verification. Before that, let us analyze the acceleration process.
As the velocity of a chain element changes over time during the acceleration process, the analysis seems difficult. However, the time parameter can be eliminated after some substitutions. The velocity of the accelerating elements is denoted as \(v\). According to Fig. 2 and Eq. (11), the tangential equation of motion should be
\[\lambda vdt\frac{dv}{dt}=\lambda vdv=dT-\lambda gdy\,. \tag{25}\]
Since the chain elements are loosely connected, the normal acceleration dynamical equation can be disregarded.
It can be seen that the time dependence \(dt\) is eliminated. This is the mathematical formulation of the transformation from gravitational potential to kinetic energy. Carrying out the integration on both sides, we have
\[\int_{0}^{v_{0}}\lambda vdv = \int_{0}^{T_{A}}dT-\int_{-h_{i}}^{0}\lambda gdy \tag{26}\] \[= T_{A}-\lambda gh_{i}\,.\]
At point A
\[\frac{1}{2}\lambda v_{0}^{2}=\lambda v_{0}^{2}-\lambda gc-\lambda gh_{i}\,,\]
namely
\[h_{i}=\frac{v_{0}^{2}}{2g}-c\,. \tag{27}\]
Substituting \(v_{0}\) from Eq. (23), we get the relation between \(h_{i}\) and \(H\):
\[h_{i}=\frac{H-(2\alpha+1)c}{2(\alpha+1)}\,. \tag{28}\]
From the above equation, the requirement for jumping can also be obtained. The jump can happen only when \(h_{i}\) is greater than zero; thus the condition for jumping is
\[H>(2\alpha+1)c\,. \tag{29}\]
The total jumping height is
\[h_{j}=h_{i}+h_{r}=\frac{H+c}{2(\alpha+1)}-a\,. \tag{30}\]
It is evident that the jumping height is basically linearly correlated with the falling height \(H\), with the additional parameters \(c\), \(a\) and \(\alpha\) to be determined.
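As a numerical illustration of Eqs. (23), (28) and (30), the following sketch evaluates the predicted quantities for a few falling heights; the values chosen for \(\alpha\), \(c\) and \(a\) are assumptions of a plausible order of magnitude, not fitted results.

```python
g = 9.8                        # gravitational acceleration, m/s^2
alpha, c, a = 1.5, 0.05, 0.03  # assumed: dimensionless, metres, metres

def jump_prediction(H):
    v0_sq = g * (H + c) / (alpha + 1)                    # Eq. (23)
    h_i = (H - (2 * alpha + 1) * c) / (2 * (alpha + 1))  # Eq. (28)
    h_j = (H + c) / (2 * (alpha + 1)) - a                # Eq. (30)
    return v0_sq, h_i, h_j

for H in (0.5, 1.0, 2.0):      # falling heights in metres
    print(H, jump_prediction(H))
```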
## III Experimentation and verification
### Analysis and execution of the experiment
Through the theoretical analysis in Sec. II, we can verify the relations through experimentation. Nevertheless, it is important to bear in mind that all the theoretical analysis is based on ideal conditions, and there are certain limitations in the actual experiment, such as
1. Ideally, the chain in the cup should be loosely connected and freely coiled. Unfortunately, the elements in the chain inevitably become tangled, which prevents the chain from remaining homogeneous during the acceleration and stable suspension processes. We discovered that this is the primary source of disturbance in the experiment. Even a small knot can cause a significant variation of the stable suspension.
2. Ideally, the jumping point C and falling point B should remain fixed in order to obtain a stable suspension. However, these two points tend to shake during the falling process, making it difficult to accurately measure the shape. Since the shape of the catenary is an exponential function, in which \(y\) increases exponentially with \(x\), the shaking chain almost forms a straight line at the bottom. This makes it challenging to compare the measured shape with the exponential curve. Furthermore, due to the above two issues, it is difficult to precisely determine the positions of points A, B and C. Generally, only the total jumping height \(h_{j}\) and falling height \(H\) can be measured.
Despite these difficulties, it is certain that
\[H\gg c,\ a\,. \tag{31}\]
and that the jumping height \(h_{j}\), \(a\) and \(c\) are of the same order. Based on this, we can assume that the actual shape of the jumping chain resembles an inverted catenary. The experiment should then test the linear relation between \(h_{j}\) and \(H\) (Eq. (30)) and the linear relation between \(v_{0}^{2}\) and \(H\) (Eq. (23)). Both relations can be used to evaluate the parameter \(\alpha\), which will validate our analysis in Sec. II.
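The fitting procedure itself is simple; a minimal sketch is given below, in which the \((H, h_{j})\) pairs are placeholders rather than our measured data, and \(\alpha\) is recovered from the slope \(k=1/[2(\alpha+1)]\) implied by Eq. (30). The same recipe applies to Eq. (23), where the slope of \(v_{0}^{2}\) versus \(H\) equals \(g/(\alpha+1)\).

```python
import numpy as np

# Placeholder (H, h_j) pairs in metres; replace with measured data.
H = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
h_j = np.array([0.10, 0.13, 0.16, 0.18, 0.21, 0.24])

slope, intercept = np.polyfit(H, h_j, 1)  # linear fit of Eq. (30)
alpha_1 = 1.0 / (2.0 * slope) - 1.0       # invert k = 1 / [2(alpha + 1)]
print(f"slope = {slope:.3f}, alpha_1 = {alpha_1:.2f}")
```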
The apparatus used in our experiment is illustrated in Fig. 4. We drew a height calibration on the wall, which can be seen in panel A of Fig. 4. The chain and the cup are also depicted in the figure, and the details of the chain are provided in Tab. 1. The experiment was conducted as follows: the chain was pulled out of the cup from a height of 200 cm, and the height was then gradually lowered by 5 cm each time. When the height reached 100 cm, it became increasingly difficult to lower the chain further, so we chose to lower the chain by 10 cm for heights below 100 cm, until the height reached 40 cm. At each specific height, a cellphone was used to record the jumping chain, with the camera placed at the same height as the top of the jumping chain (see panel E of Fig. 4). Due to the presence of various disturbances in the experiment, multiple videos were taken in order to select the best one for data collection.

Figure 4: The apparatus of the experiment: A, the height calibration on the wall; B, the cup; C, the chain; D, the chain elements; E, the screenshot.
In principle, catenaries with different \(c\) and \(a\) can exist when the requirements discussed in Sec. II are satisfied. However, since the chain needs to jump from the cup, the shape of the jumping chain depends on specific initial conditions, namely the diameter of the cup, the depth of the chain in the cup at the beginning of the jump, _etc_. Thus, although the jump at a specific height does not correspond to a single parameter set, we found that \(c\) and \(a\) are of the same order as the diameter of the cup; the linear relations could therefore be extracted in the following.
The data collection was carried out as follows. Each frame of the chosen video was examined, and the best moment, with the highest jumping height and a stable shape, was identified. The falling height and jumping height were then recorded based on the frame at the best moment. To measure \(v_{0}\), we recorded the time interval between the moment the chain reached a specific remaining height in the cup and the final moment. With these measured raw data, we can analyze the relations between the physical quantities discussed in Sec. II.
### Analysis on the data
As discussed in the previous subsection, the linear relation between \(h_{j}\) and \(H\) (Eq. (30)) and the linear relation between \(v_{0}^{2}\) and \(H\) (Eq. (23)) can be used to verify our analysis. The corresponding results are shown in Fig. 5, with the left panel displaying the recorded jumping heights versus falling heights, and the right panel showing the calculated velocity \(v_{0}\) of the stable suspension shape versus the falling height. Note that, due to the presence of various disturbances in the experiment, only data with falling heights greater than 100 cm (black points) were chosen for the analysis. As can be seen in Fig. 5, the linear relations are realized, and the slopes give the values of \(\alpha_{1}\) (left panel) and \(\alpha_{2}\) (right panel):
\[\alpha_{1}=2.6^{+5}_{-1.3},\quad\alpha_{2}=2.5^{+7.0}_{-0.7}\,. \tag{32}\]
We can see that the errors are quite large and the central values exceed 2. However, given the difficult setup of the experiment, these two measurements of \(\alpha\) are acceptable. Note that although \(\alpha_{1}\) and \(\alpha_{2}\) come from the same series of derivations, Eq. (30) and Eq. (23) are assumed to be independent of each other; thus, we can conclude that \(\alpha_{1}\) and \(\alpha_{2}\) are in agreement with each other. Another interesting point is that the extension line in the left panel intersects the \(H\) coordinate axis at a point greater than zero.
| Quantity | Value |
| --- | --- |
| Diameter of the steel ball | \(5\pm 0.5\) mm |
| Length of the steel ball | \(3\pm 0.5\) mm |
| Length of the chain | \(8.4\pm 0.1\) m |
| Density \(\lambda\) | \(0.036\pm 0.03\) kg/m |
| Height of the cup | \(95\pm 0.5\) mm |
| Diameter of the cup | \(66\pm 0.5\) mm |

Table 1: The setup of the experiment.
Figure 5: Left: the recorded jumping heights versus falling heights; Right: the squared velocity \(v_{0}^{2}\) of the stable shape versus the falling height.
This point should correspond to the required height in the jumping condition of Eq. (29), although this is only a rough estimate.
Finally, we can explain why the soft rope cannot effectively jump. The elements of the chain are connected loosely, and the rope cannot achieve this condition. This is also the reason why the catenary cannot be realized by the soft rope. Specifically, the soft rope lacks both the non-flexural capacity and the absence of disturbance from normal forces, making the jump impossible.
## IV Conclusion
The jumping of a free-falling chain is an intriguing phenomenon, and this paper provides a detailed analysis of the physics behind it. The analysis is divided into two parts: the stable suspension and the jumping. Both the variational principle and force analysis demonstrate that the stable suspension is an inverted catenary. The parameters that describe the phenomenon are studied and tested through experiments, and the requirements for jumping are discussed. The measurements of the parameter \(\alpha\) meet our expectations. The physical picture of the falling chain could be useful in certain falling systems, and the inverted catenary may be a complement to classical mechanics.
|
2309.03326 | Detecting False Alarms and Misses in Audio Captions | Metrics to evaluate audio captions simply provide a score without much
explanation regarding what may be wrong in case the score is low. Manual human
intervention is needed to find any shortcomings of the caption. In this work,
we introduce a metric which automatically identifies the shortcomings of an
audio caption by detecting the misses and false alarms in a candidate caption
with respect to a reference caption, and reports the recall, precision and
F-score. Such a metric is very useful in profiling the deficiencies of an audio
captioning model, which is a milestone towards improving the quality of audio
captions. | Rehana Mahfuz, Yinyi Guo, Arvind Krishna Sridhar, Erik Visser | 2023-09-06T19:17:46Z | http://arxiv.org/abs/2309.03326v1 | # Detecting False Alarms and Misses in Audio Captions
###### Abstract
Metrics to evaluate audio captions simply provide a score without much explanation regarding what may be wrong in case the score is low. Manual human intervention is needed to find any shortcomings of the caption. In this work, we introduce a metric which automatically identifies the shortcomings of an audio caption by detecting the misses and false alarms in a candidate caption with respect to a reference caption, and reports the recall, precision and F-score. Such a metric is very useful in profiling the deficiencies of an audio captioning model, which is a milestone towards improving the quality of audio captions.
Rehana Mahfuz, Yinyi Guo, Arvind Krishna Sridhar, Erik Visser
Qualcomm Technologies Inc.
**Index Terms**: audio captioning, caption evaluation
## 1 Introduction
The possibility of automatically describing an auditory scene using text is an exciting advancement in humanity's effort to improve awareness without the need for expensive human attention. While video monitoring deprives subjects of the scene of much-valued privacy while also being energy-hungry, audio monitoring is a reasonable choice for maintaining surveillance while also preserving privacy and being energy-efficient. Audio captioning, which is the task of describing audio using text, can enable a wide variety of solutions. In the industry, it can be used for machine condition monitoring, or for security systems. In personal lives, audio captioning can afford people peace of mind in the form of smart monitoring when they leave their dependent loved ones or pets at home.
In the advancement of audio captioning, one bottleneck has been the lack of a transparent method to evaluate the quality of audio captions. Current methods to evaluate audio captions simply provide a score, without much of an explanation regarding what may be wrong with the caption. While hallucination detection and mitigation have been considered in text generation applications such as summarization, no such effort has been made for audio-to-text generation. In this work, we introduce a method to identify mistakes in the caption. Specifically, our method automatically detects the false positives and false negatives in the candidate caption with respect to the reference caption. To our knowledge, this is the first framework which automatically detects shortcomings of a caption, which is a first step towards developing strategies to address problems in the audio captioning model.
## 2 Related Work
The framework of evaluating audio captions involves the availability of a reference caption, which is generally human-generated, based on which the quality of a candidate caption is determined. Current methods to evaluate the quality of audio captions can be divided into three categories. BLEU [1], METEOR [2] and ROUGE [3] are borrowed from machine translation, and consider the overlap between words or matching between synonyms to establish similarity. From image captioning, CIDER [4] considers the cosine similarities between Term Frequency-Inverse Document Frequencies (TF-IDFs) [5] of n-grams of the captions, while SPICE [6] determines the overlap between scene graphs created from the reference and candidate captions separately. Pre-trained language models are also being leveraged to judge semantic similarity between audio captions, such as in BERTScore [7], Sentence-BERT [8], FENSE [9] and SPICE+ [10]. An effort has also been made to consider the time of occurrence of audio events to establish correspondence [11].
## 3 Procedure
We propose a method to obtain false positives and false negatives in a candidate caption \(c\) from a reference caption \(r\). Let \(A=\{a_{1},a_{2},...,a_{M}\}\) be a nearly comprehensive set of \(M\) audio classes, as contained in the dataset AudioSet [12], and let \(E_{A}=\{txt\_emb(a_{1}),txt\_emb(a_{2}),...,txt\_emb(a_{M})\}\) be the set of text embeddings of these audio classes, obtained using some function \(txt\_emb\). From each caption \(c\) and \(r\), we first obtain phrases by matching their Parts-of-Speech (POS) tags with standard patterns of POS tags of phrases, using a function \(phrases\). Then we get text embeddings of each phrase to obtain \(E_{c}=\{txt\_emb(p)\forall p\in phrases(c)\}\) and \(E_{r}=\{txt\_emb(p)\forall p\in phrases(r)\}\). Next, we identify the collection of audio tags in the candidate caption by isolating their text embeddings in a set
\[A_{c}=\{t\in E_{A}:\exists u\in E_{c}:cos\_sim(t,u)>tag\_t\} \tag{1}\]
, which is the set of text embeddings of all audio tags whose text embedding's cosine similarity with a candidate caption's phrase text embedding exceeds a threshold \(tag\_t\); here \(cos\_sim\) refers to the cosine similarity. Similarly, we also calculate the set of text embeddings of audio tags in the reference caption as
\[A_{r}=\{t\in E_{A}:\exists u\in E_{r}:cos\_sim(t,u)>tag\_t\} \tag{2}\]
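A minimal sketch of this tag-detection step is given below, assuming the sentence-transformers API for \(txt\_emb\) and a tiny illustrative tag list in place of the full AudioSet ontology.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
audio_tags = ["Bell", "Bird", "Conversation", "Rain", "Stream"]  # toy subset of A
tag_emb = model.encode(audio_tags, convert_to_tensor=True)       # E_A

def detect_tags(phrases, tag_t=0.45):
    """Eqs. (1)-(2): keep a tag if some phrase embedding is similar enough."""
    phrase_emb = model.encode(phrases, convert_to_tensor=True)   # E_c or E_r
    sims = util.cos_sim(tag_emb, phrase_emb)                     # (tags, phrases)
    hits = (sims > tag_t).any(dim=1)
    return [t for t, h in zip(audio_tags, hits) if h]

print(detect_tags(["a bell is ringing", "birds are chirping in the background"]))
```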
Since AudioSet's tag ontology has multiple entries with nuanced meanings for some categories such as _Music_ and _Engine_, we eliminate redundancies in \(A_{c}\) and \(A_{r}\), where redundancy is defined by the existence of another element in the set with which the cosine similarity exceeds a threshold \(rep\_t\), as shown in Equation 6. Then, we use this information to identify the true positives by calculating the set of text embeddings of audio tags \(TP\) as shown in Equation 3, which is the set of all members of \(A_{r}\) whose cosine similarity with some member of \(A_{c}\) exceeds a threshold \(sim\_t\). This represents the audio tags captured by both the candidate caption and the reference caption. Next, we identify the false positives by calculating the set of text embeddings of audio tags \(FP\) as shown in Equation 4, which is the set of all members of \(A_{c}\) whose cosine similarity with all members of \(A_{r}\) is below the threshold \(sim\_t\). This represents all the audio tags suggested by the candidate caption, but absent in the reference caption. Similarly, we identify the false negatives by calculating the set of text embeddings of audio tags \(FN\) as shown in Equation 5, which is the set of all members of \(A_{r}\) whose cosine similarity with all members of \(A_{c}\) is below the threshold \(sim\_t\). This represents all the audio tags present in the reference caption, but not captured by the candidate caption.
\[TP=\{t\in A_{r}:\exists u\in A_{c}:cos\_sim(t,u)>sim\_t\} \tag{3}\]
\[FP=\{t\in A_{c}:\forall u\in A_{r},\ cos\_sim(t,u)<sim\_t\} \tag{4}\]
\[FN=\{t\in A_{r}:\forall u\in A_{c},\ cos\_sim(t,u)<sim\_t\} \tag{5}\]
Equation 3 can be viewed as a soft version of the set intersection between \(A_{r}\) and \(A_{c}\), where the softness comes from the cosine similarity. Along those lines, Equation 4 may be viewed as a soft version of the set difference between \(A_{c}\) and \(A_{r}\). Similarly, Equation 5 can be viewed as a soft version of the set difference between \(A_{r}\) and \(A_{c}\).
Before using the lengths of \(TP\), \(FP\) and \(FN\) to calculate recall, precision and F-score, we again eliminate redundancies in each of these sets, as shown in Equation 6. We propose this similarity-based F-score as a metric to evaluate the quality of audio captions, and abbreviate it as SBF, which stands for Similarity-Based F-score.
\[S=\{t\in S:\forall u\neq t\in S,\ cos\_sim(t,u)<rep\_t\} \tag{6}\]
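Putting the pieces together, the following sketch computes the soft set operations of Equations (3)-(5) and the resulting SBF score for the first example of Table 2; the redundancy elimination of Equation (6) is omitted for brevity, and the embedding model follows the sketch above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def soft_sets(cand_tags, ref_tags, sim_t=0.45):
    """Soft intersection/differences of Eqs. (3)-(5) over tag-name lists."""
    c = model.encode(cand_tags, convert_to_tensor=True)
    r = model.encode(ref_tags, convert_to_tensor=True)
    sims = util.cos_sim(r, c)                            # (|A_r|, |A_c|)
    tp = [t for t, row in zip(ref_tags, sims) if row.max() > sim_t]
    fn = [t for t, row in zip(ref_tags, sims) if row.max() <= sim_t]
    fp = [t for t, col in zip(cand_tags, sims.T) if col.max() <= sim_t]
    return tp, fp, fn

tp, fp, fn = soft_sets(["Bell", "Bird"], ["Bell", "Conversation"])
precision = len(tp) / (len(tp) + len(fp))
recall = len(tp) / (len(tp) + len(fn))
sbf = 2 * precision * recall / (precision + recall)      # F-score
print(tp, fp, fn, round(sbf, 2))
```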
## 4 Experiments
### Qualitative evaluation
We used our framework to evaluate captions generated by a vanilla audio captioning model. The model, as in [13], uses a CNN10 [14] encoder and a transformer decoder; it was trained with the training split of the AudioCaps dataset [15] for 30 epochs, and further fine-tuned with the training split of the Clotho dataset [16] for 60 epochs. The batch size was 32 and the learning rate was 0.001. The best checkpoint as judged by the SPIDEr score on the validation split was used. We set \(sim\_t\) and \(rep\_t\) to 0.45. For all our experiments, to get text embeddings, we use the _all-MiniLM-L6-v2_ model of Sentence-BERT [8], unless otherwise mentioned.
### Quantitative evaluation
We leverage the availability of human judgments on pairs of audio captions indicating which one describes a given audio file better. The AudioCaps-Eval and Clotho-Eval datasets [9] provide such human judgments for 1,671 pairs from the AudioCaps dataset and 1,750 caption pairs from the Clotho dataset. As performed in [9], we measure the correlation of SBF's judgments with human judgments. To obtain the text embeddings, we experiment with using the _paraphrase-TinyBERT-L6-v2_ model, which is 240 MB, and the _all-MiniLM-L6-v2_ model, which is 80 MB.
## 5 Results
### Qualitative results
From the AudioCaps and Clotho evaluation splits, Table 2 shows some examples of how false alarms and misses are detected. In the first example, in the candidate caption, from the phrases "a bell is ringing" and "birds are chirping in the background", the tags _Telephone bell ringing_, _Doorbell_, _Bell_, _Church bell_, _Bicycle bell_, _Jingle bell_, _Bird vocalization, bird call, bird song_ and _Bird_ are detected. After eliminating repetitions, we are left with _Bell_ and _Bird_ as detections. In the reference caption, from the phrases "a bell rings" and "people talk in a courtyard", the tags _Telephone bell ringing_, _Bell_, _Church bell_, _Bicycle bell_, _Doorbell_, _Jingle bell_, _Tubular bells_ and _Conversation_ are detected. After eliminating repetitions, we are left with _Bell_ and _Conversation_. The detection of multiple tags related to _Bell_ shows the importance of our repetition eliminator. By intersecting the sets of tags obtained from the candidate and reference captions, we get _Bell_ as a True Positive. By subtracting the set of reference tags from the set of candidate tags, we get _Bird_ as a False Positive. By subtracting the set of candidate tags from the set of reference tags, we get _Conversation_ as a False Negative. The second example illustrates a perfect scenario, where both the candidate and reference captions mention the same or similar sounds. In the third example, _Stream_ is detected as a False Negative even though _Rain_ and _Stream_ are both related to _Water_, because the reference caption mentions a river and the candidate caption doesn't. If we want to be more lenient and not count this as a False Negative, we would have to decrease \(sim\_t\).

Figure 1: Method of finding false alarms and misses in an audio caption.
Table 1 shows the precision, recall and F-scores obtained using the evaluation splits of the AudioCaps and Clotho datasets. If we increase \(tag\_t\), more audio tags are detected, which increases recall, but decreases precision. Increasing \(sim\_t\) would make our evaluation framework more sensitive to variations in meaning, since it would raise the threshold required for tags to be considered similar. Similarly, increasing \(rep\_t\) would also increase sensitivity to variations in meaning.
### Quantitative results
Figure 6 shows how our metric's correlation with human judgments varies as we vary \(tag\_t\). A value of 0.4 for \(tag\_t\) seems reasonable to achieve good correlation with human judgment. Hence this value of \(tag\_t\) is used in Table 3, which shows the performance of different metrics on the AudioCaps-Eval and Clotho-Eval datasets. Since SBF is primarily designed to detect mistakes, it does not perform well on HC, HM and MM, because in pairs belonging to these categories both captions are correct.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline
 & \multicolumn{3}{c|}{tag\_t = 0.4} & \multicolumn{3}{c|}{tag\_t = 0.45} & \multicolumn{3}{c|}{tag\_t = 0.5} \\ \hline
 & Precision & Recall & F-score & Precision & Recall & F-score & Precision & Recall & F-score \\ \hline
AudioCaps & & & & & & & & & \\ \hline
Clotho & 0.425 & 0.228 & 0.249 & 0.378 & 0.297 & 0.284 & 0.341 & 0.303 & 0.282 \\ \hline
\end{tabular}
\end{table}
Table 1: SBF scores on captions generated by a vanilla audio captioning model using the evaluation splits of AudioCaps and Clotho.
\begin{table}
\begin{tabular}{|l|p{4.5cm}|p{4cm}|p{4cm}|} \hline
 & Caption & Phrases & Tags \\ \hline
Candidate caption & A bell is ringing while birds are chirping in the background & a bell is ringing; birds are chirping in the background & Bell; Bird \\ \hline
Reference caption & A bell rings while people talk in a courtyard & a bell rings; people talk in a courtyard & Bell; Conversation \\ \hline
True Positives & Bell & & \\ \hline
False Positives & Bird & & \\ \hline
False Negatives & Conversation & & \\ \hline\hline
Candidate caption & The waves are crashing against the shore and splashing & the waves are crashing against the shore; splashing & Splash, splatter; Waves (surf); Water \\ \hline
Reference caption & Ocean waves roll in and out from the shore & ocean waves roll in; out from the shore & Waves (surf); Ocean \\ \hline
True Positives & Waves (surf); Ocean & & \\ \hline
False Positives & - & & \\ \hline
False Negatives & - & & \\ \hline\hline
Candidate caption & Rain is pouring down the street with traffic sounds & rain is pouring down the street; traffic sounds & Rain; Traffic noise, roadway noise \\ \hline
Reference caption & A river is flowing relatively swiftly and a waterfall flows & a river is flowing; a waterfall flows & Waterfall; Stream; Water; Raindrop \\ \hline
True Positives & Raindrop & & \\ \hline
False Positives & Traffic noise, roadway noise & & \\ \hline
False Negatives & Stream & & \\ \hline
\end{tabular}
\end{table}
Table 2: Examples of how false alarms and misses are detected.
We can see that SBF performs better on HI, because one caption in the pair is indeed incorrect. By comparing the use of two text embedding models, where one is three times the size of the other, we see a noticeable difference in the quality of their judgments in some cases.
## 6 Conclusion
We propose a novel method to detect mistakes in an audio caption in the form of false alarms and misses. Having these detections provides insights into the deficiencies of the model which generated the audio captions. Often, false alarms result from over-representation of certain sounds in the training data, such as _Snoring_, _Horse trotting_ and _Spray_. Sometimes these are also cross-triggers. Similarly, misses are sometimes cross-triggers in disguise (example: _Spray_ instead of _Vehicle_). Understanding the shortcomings of a system is a first step towards remediation measures, such as adjusting training data or adjusting training strategies, which will be explored in the future.
## 7 Discussion
While this work provides a way to detect false alarms and misses, and also to rule out cross-triggers caused by semantically similar sounds such as _Water_ and _Rain_, it does not rule out cross-triggers caused by groups of sounds which are acoustically similar but semantically different, such as _Vehicle_ and _Spray_, or _Frying_ and _Rain_. This could be addressed by considering acoustic similarity between sounds based on their audio embeddings, instead of only semantic similarity based on their text embeddings. Another direction enabled by our framework is to forgo reliance on the availability of audio captions, and instead use audio tags obtained from an audio tagging model. The challenge is to find a reliable audio tagging model which can be trusted as the ground truth. Yet another direction enabled by this framework is to use these false alarms and misses to automatically correct the caption. If a reliable tagging model can be used as the ground truth, we could develop an audio captioning system which corrects itself while being deployed.
Figure 4: Clotho-Eval, using _all-MiniLM-L6-v2_
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline
 & & \multicolumn{4}{c|}{AudioCaps-Eval} & \multicolumn{4}{c|}{Clotho-Eval} \\ \hline
 & & HC & HI & HM & MM & HC & HI & HM & MM \\ \hline
all-MiniLM-L6-v2 & Sentence-BERT & 0.64 & 0.984 & 0.921 & 0.836 & 0.586 & 0.95 & 0.741 & 0.641 \\ \hline
 & FENSE & 0.581 & 0.955 & 0.891 & 0.816 & 0.595 & 0.943 & 0.797 & 0.717 \\ \hline
 & SBF & 0.409 & 0.935 & 0.921 & 0.664 & 0.529 & 0.898 & 0.638 & 0.574 \\ \hline
paraphrase-TinyBERT-L6-v2 & Sentence-BERT & 0.64 & 0.988 & 0.925 & 0.73 & 0.6 & 0.955 & 0.759 & 0.673 \\ \hline
 & FENSE & 0.645 & 0.98 & 0.916 & 0.85 & 0.605 & 0.947 & 0.802 & 0.731 \\ \hline
 & SBF & 0.409 & 0.931 & 0.921 & 0.707 & 0.529 & 0.885 & 0.703 & 0.6 \\ \hline
\end{tabular}
\end{table}
Table 3: Correlation of metrics with human judgments on the AudioCaps-Eval and Clotho-Eval datasets. |
2309.04449 | Formal first integrals and higher variational equations | The question of how Algebra can be used to solve dynamical systems and
characterize chaos was first posed in a fertile mathematical context by Ziglin,
Morales, Ramis and Sim\'o using differential Galois theory. Their study was
aimed at first-order, later higher-order, variational equations of Hamiltonian
systems. Recent work by this author formalized a compact yet comprehensive
expression of higher-order variationals as one infinite linear system, thereby
simplifying the approach. More importantly, the dual of this linear system
contains all information relevant to first integrals, regardless of whether the
original system is Hamiltonian. This applicability to formal calculation of
conserved quantities is the centerpiece of this paper, following an
introduction to the requisite context. Three important examples, namely
particular cases of Dixon's system, the SIR epidemiological model with vital
dynamics and the Van der Pol oscillator, are tackled, and explicit convergent
first integrals are provided for the first two. | Sergi Simon | 2023-09-08T17:15:42Z | http://arxiv.org/abs/2309.04449v1 | # Formal first integrals and higher variational equations
###### Abstract
The question of how Algebra can be used to solve dynamical systems and characterize chaos was first posed in a fertile mathematical context by Ziglin, Morales, Ramis and Simo using differential Galois theory. Their study was aimed at first-order, later higher-order, variational equations of Hamiltonian systems. Recent work by this author formalized a compact yet comprehensive expression of higher-order variationals as one infinite linear system, thereby simplifying the approach. More importantly, the dual of this linear system contains all information relevant to first integrals, regardless of whether the original system is Hamiltonian. This applicability to formal calculation of conserved quantities is the centerpiece of this paper, following an introduction to the requisite context. Three important examples, namely particular cases of Dixon's system, the SIR epidemiological model with vital dynamics and the Van der Pol oscillator, are tackled, and explicit convergent first integrals are provided for the first two.
**Keywords.** Integrability, Ziglin-Morales-Ramis-Simo theory, formal calculus, chaos, Dixon's system, SIR epidemiological model.
**2000 Mathematics Subject Classification:** 34A05, 37C79, 37J99, 37J30, 34M15, 34C28, 37C10, 15A69, 16W60, 13F25, 37N25.
## 1 Introduction
Given an arbitrary dynamical system, the formulation of its higher variational equations as a linear (infinite) system (LVE\({}^{\star}\)) has shown potential to make strong inroads in the study of integrability [34, 35]. The study, part of which we will explain in more detail in §1.1, is twofold:
1. on one hand, the original set of equations (LVE) is amenable to the Ziglin-Morales-Ramis-Simo non-integrability framework whenever the system is _Hamiltonian_ and the first integrals whose existence is obstructed are _meromorphic_ ([27, 28, 29, 30] and a long assorted array of references derived therefrom, including [34, 35]); there is further, as yet unfinished, study in the direction of situations where the system is not Hamiltonian ([16, 19, 20]).
2. on the other, the _dual_ system (LVE)\({}^{\star}\) has jets of formal first integrals among its solutions ([3, 34]), which entails the possibility to furnish first integrals instead of finding obstructions to them; the advantage of this second approach is that the original system need not be Hamiltonian, and the formal first integrals, if convergent, need not be meromorphic. The only difficulties in this case are computational in nature, namely in the context of resummation techniques.
In the present work we will exploit the second item (ii), namely by describing a method to produce Taylor terms of formal first integrals by way of an automatic, easily recursified sequence of quadratures. First we recount the minimal background exposition necessary, then we present the main results in §2, and finally we apply them to two simple examples to test their accuracy and usefulness, as well as to make novel statements about the integrability of the examples themselves.
### The algebraic study of integrability
#### Basics
Let \(\psi\left(t,\cdot\right)\) be the flow and \(\phi\left(t\right)=\psi\left(t,\boldsymbol{x}\right)\) a particular solution of a given autonomous system
\[\dot{\boldsymbol{z}}=X\left(\boldsymbol{z}\right),\qquad X:\mathbb{C}^{m} \rightarrow\mathbb{C}^{m}, \tag{1}\]
respectively. The **variational system** of (1) along \(\phi\) has \(\frac{\partial}{\partial z}\psi\left(t,\phi\right)\) as a fundamental matrix:
\[\dot{Y}=A_{1}Y,\quad A_{1}\left(t\right):=\left.\frac{\partial X}{\partial\boldsymbol{z}}\right|_{\boldsymbol{z}=\phi\left(t\right)}\in\mathrm{Mat}_{n}\left(K\right),\qquad\left(\mathrm{VE}_{\phi}\right)\]
\(K=\mathbb{C}\left(\boldsymbol{\phi}\right)\) being the smallest differential field ([36, Def. 1.1]) containing all entries of \(\boldsymbol{\phi}\left(t\right)\). \(\frac{\partial^{k}}{\partial\boldsymbol{z}^{k}}\psi\left(t,\phi\right)\) are multilinear \(k\)-forms appearing in the Taylor expansion of the flow along \(\boldsymbol{\phi}\):
\[\psi\left(t,z\right)=\psi\left(t,\phi\right)+\sum_{k=1}^{\infty}\frac{1}{k!} \frac{\partial^{k}\psi\left(t,\phi\right)}{\partial z^{k}}\left\{z-\phi\right\} ^{k}; \tag{2}\]
\(\partial_{\boldsymbol{z}}^{k}\psi\left(t,\phi\right)\) also satisfy an echeloned set of systems, depending on the previous \(k-1\) partial derivatives and usually called **order-\(k\) variational equations** \(\mathrm{VE}_{\phi}^{k}\) ([30, p. 859-861], [34, Corollary 3]). Thus, given (1), we have a _linear_ system \(\mathrm{VE}_{\phi}=:\mathrm{VE}_{\phi}^{1}=:\mathrm{LVE}_{\phi}^{1}\) and a family of _non-linear_ systems \(\left\{\mathrm{VE}_{\phi}^{k}\right\}_{k\geq 2}\).
[34] presented an explicit _linearized_ version \(\mathrm{LVE}_{\phi}^{k}\), \(k\geq 1\), by means of symmetric products \(\odot\) of finite and infinite matrices based on already-existing definitions by Bekbaev, e.g. [4]. This was done in preparation for the Ziglin-Morales-Ramis-Simo (ZMRS) theoretical framework based on monodromy and differential Galois groups [27, 36, 38], but has other consequences as well. More specifically, our outcomes in [34] have two applications for system (1), _Hamiltonian or not_:
* full structure of \(\mathrm{VE}_{\phi}^{k}\) and \(\mathrm{LVE}_{\phi}^{k}\), i.e. _recovering the flow_, which underlies the ZMRS theoretical corpus in practicality, albeit as a tool rather than as a goal;
* a byproduct is the full structure of dual systems \(\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\), i.e. _recovering formal first integrals of (1)_ in ways which simplified earlier results in [3] significantly.
As said in the introduction, results in the present paper are based on the second of these applications. For examples of the first application, see [34, §6] or the bulk of [35]. See also [22, 23] for examples where the non-linearized \(\mathrm{VE}_{k}\) were used.
### Symmetric products, powers and exponentials
**Notation 1.2.1**.: The conventions listed below were already introduced in [3, 34, 35]:
1. _Multi-index modulus, arithmetic, order and lexicographic order_: for \(\mathbf{i}=\left(i_{1},\ldots,i_{n}\right)\in\mathbb{Z}_{\geq 0}^{n}\), \(i=\left|\mathbf{i}\right|:=\sum_{k}i_{k}\); addition and subtraction are defined entrywise as usual; \(\mathbf{i}\leq\mathbf{j}\) means \(i_{k}\leq j_{k}\) for every \(k\geq 1\); \(\mathbf{i}<_{\mathrm{lex}}\mathbf{j}\) if \(i_{1}=j_{1},\ldots,i_{k-1}=j_{k-1}\) and \(i_{k}<j_{k}\) for some \(k\geq 1\).
2. Whenever such derivation is possible, we define the _lexicographically sifted differential of \(F\left(z_{1},\ldots,z_{n}\right)\) of order \(m\)_ as the row vector \[F^{\left(m\right)}\left(\mathbf{z}\right):=\mathrm{lex}\left(\frac{\partial^{m}F}{\partial z_{1}^{i_{1}}\ldots\partial z_{n}^{i_{n}}}\left(\mathbf{z}\right)\right),\] (3) where \(\left|\mathbf{i}\right|=m\) and entries are ordered as per \(<_{\mathrm{lex}}\) on multi-indices. For instance, for \(n=3\) the first two differentials would be \[F^{\left(1\right)}=\left(\begin{array}{ccc}\frac{\partial F}{\partial z_{1}}&\frac{\partial F}{\partial z_{2}}&\frac{\partial F}{\partial z_{3}}\end{array}\right),\quad F^{\left(2\right)}=\left(\begin{array}{cccccc}\frac{\partial^{2}F}{\partial z_{1}^{2}}&\frac{\partial^{2}F}{\partial z_{1}\partial z_{2}}&\frac{\partial^{2}F}{\partial z_{1}\partial z_{3}}&\frac{\partial^{2}F}{\partial z_{2}^{2}}&\frac{\partial^{2}F}{\partial z_{2}\partial z_{3}}&\frac{\partial^{2}F}{\partial z_{3}^{2}}\end{array}\right).\]
3. We define \(d_{n,k}:=\binom{n+k-1}{n-1},\ D_{n,k}:=\sum_{i=1}^{k}d_{n,i}.\) It is easy to check there are \(d_{n,k}\) non-decreasing \(k\)-tuples of integers in \(\left\{1,\ldots,n\right\}\), and just as many homogeneous monomials of degree \(k\) in \(n\) variables.
4. Given integers \(k_{1},\ldots,k_{n}\geq 0\), we define the usual multinomial coefficient as \[\binom{k_{1}+\cdots+k_{n}}{k_{1},\ldots,k_{n}}:=\binom{k_{1}+\cdots+k_{n}}{\mathbf{k}}:=\frac{\left(k_{1}+\cdots+k_{n}\right)!}{k_{1}!k_{2}!\cdots k_{n}!}.\] For a multi-index \(\mathbf{k}\in\mathbb{Z}_{\geq 0}^{n}\), define \(\mathbf{k}!:=k_{1}!\cdots k_{n}!\). For any two such \(\mathbf{k},\mathbf{p}\), we define \[\binom{\mathbf{k}}{\mathbf{p}}:=\frac{k_{1}!k_{2}!\cdots k_{n}!}{p_{1}!p_{2}!\cdots p_{n}!\left(k_{1}-p_{1}\right)!\left(k_{2}-p_{2}\right)!\cdots\left(k_{n}-p_{n}\right)!}=\binom{k_{1}}{p_{1}}\binom{k_{2}}{p_{2}}\cdots\binom{k_{n}}{p_{n}},\] (4) and the multi-index counterpart to the multinomial, \(\binom{\mathbf{k}_{1}+\cdots+\mathbf{k}_{m}}{\mathbf{k}_{1},\ldots,\mathbf{k}_{m}}:=\frac{\left(\mathbf{k}_{1}+\cdots+\mathbf{k}_{m}\right)!}{\mathbf{k}_{1}!\,\mathbf{k}_{2}!\cdots\mathbf{k}_{m}!}\).
The compact formulation called for by Notation 1.2.1 (\(\mathbf{3}\)) was achieved in [34] through an operation \(\odot\) that had already been defined by Bekbaev (e.g. [4, 5]) and was systematized using basic categorical properties of the tensor product. Let \(K\) be a field and \(V\) a \(K\)-vector space. Let \(\mathrm{Sym}^{r}V\) be the symmetric power of \(V\). We write \(\mathbf{w}_{1}\odot\mathbf{w}_{2}\) for equivalence classes of tensor products of these vectors. Hence, product \(\odot\) operates exactly like products of homogeneous polynomials in several variables.
**Notation 1.2.2**.: When dealing with matrix sets, we will use super-indices and subindices:
1. The space of \(\left(i,j\right)\)**-matrices** \(\mathrm{Mat}_{m,n}^{i,j}\left(K\right)\) is defined equivalently as the set of \(d_{m,i}\times d_{n,j}\) matrices with entries in \(K\), or as the vector space \(\mathrm{Hom}_{K}\left(\mathrm{Sym}^{j}K^{n};\mathrm{Sym}^{i}K^{m}\right)\).
2. It is clear from the above that \(\mathrm{Mat}_{n}^{0,0}\left(K\right)\) is the set of all scalars \(\alpha\in K\) and \(\mathrm{Mat}_{n}^{0,k}\left(K\right)\) (resp. \(\mathrm{Mat}_{n}^{k,0}\left(K\right)\)) is made up of all row (resp. column) vectors whose entries are indexed by \(d_{n,k}\) lexicographically ordered \(k\)-tuples.
3. Reference to \(K\) may be dropped and notation may be abridged if dimensions are repeated or trivial, e.g. \(\mathrm{Mat}_{n}^{i,j}:=\mathrm{Mat}_{n,n}^{i,j}\), \(\mathrm{Mat}_{m,n}^{i}:=\mathrm{Mat}_{m,n}^{i,i}\), \(\mathrm{Mat}_{n}:=\mathrm{Mat}_{n}^{1}\), etcetera.
4. \(\mathrm{Mat}^{n,m}\left(K\right)\) denotes the set of block matrices \(A=\left(A_{i,j}\right)_{i,j\geq 0}\) with \(A_{i,j}:\mathrm{Sym}^{j}K^{m}\rightarrow\mathrm{Sym}^{i}K^{n}\), hence \(A_{i,j}\in\mathrm{M}_{d_{n,i}\times d_{m,j}}\left(K\right)=\mathrm{Mat}_{n,m}^{i,j}\left(K\right)\): \[A=\left(\begin{array}{c|c|c|c}\ddots&\vdots&\vdots&\vdots\\ \hline\cdots&A_{2,2}&A_{2,1}&A_{2,0}\\ \hline\cdots&A_{1,2}&A_{1,1}&A_{1,0}\\ \hline\cdots&A_{0,2}&A_{0,1}&A_{0,0}\end{array}\right)\]
We write \(\mathrm{Mat}:=\mathrm{Mat}^{n,n}\) if \(n\) is unambiguous. Conversely, \(\mathrm{Mat}_{n,m}^{i,j}\) is embedded in \(\mathrm{Mat}^{n,m}\) by identifying every matrix \(A_{i,j}\) with an element of \(\mathrm{Mat}^{n,m}\) equal to \(0\) save for block \(A_{i,j}\).
**Definition 1.2.3** (Symmetric products of finite and infinite matrices).: _[_4, 34_]___
1. _Let_ \(A\in\mathrm{Mat}_{m,n}^{i_{1},j_{1}}\left(K\right)\)_,_ \(B\in\mathrm{Mat}_{m,n}^{i_{2},j_{2}}\left(K\right)\)_. Given any multi-index_ \(\mathbf{k}=\left(k_{1},\ldots,k_{n}\right)\in\mathbb{Z}_{\geq 0}^{n}\) _such that_ \(\left|\mathbf{k}\right|=k_{1}+\cdots+k_{n}=j_{1}+j_{2}\)_, define_ \(C:=A\odot B\in\mathrm{Mat}_{m,n}^{i_{1}+i_{2},j_{1}+j_{2}}\) _by_ \[C\left(\mathbf{e}_{1}^{\odot k_{1}}\cdots\mathbf{e}_{n}^{\odot k_{n}}\right) =\frac{1}{\binom{j_{1}+j_{2}}{j_{1}}}\sum_{\mathbf{p}}\binom{\mathbf{k}}{ \mathbf{p}}A\left(\mathbf{e}_{1}^{\odot p_{1}}\cdots\mathbf{e}_{n}^{\odot p_{ n}}\right)\odot B\left(\mathbf{e}_{1}^{\odot k_{1}-p_{1}}\cdots\mathbf{e}_{n}^{ \odot k_{n}-p_{n}}\right),\] (5) _notation abused by removing_ \(\odot\) _to reduce space, binomials as in (_4_) and summation taking place for specific multi-indices_ \(\mathbf{p}\)_, namely those such that_ \[\left|\mathbf{p}\right|=j_{1}\qquad\text{ and }\qquad 0\leq p_{i}\leq k_{i}, \quad i=1,\ldots,n.\]
2. _For any_ \(A,B\in\mathrm{Mat}^{n,m}\left(K\right)\)_, define_ \(A\odot B=C\in\mathrm{Mat}^{n,m}\left(K\right)\) _by_ \[C=\left(C_{i,j}\right)_{i,j\geq 0},\qquad C_{i,j}=\sum_{0\leq i_{1}\leq i, \ 0\leq j_{1}\leq j}\binom{j}{j_{1}}A_{i_{1},j_{1}}\odot B_{i-i_{1},j-j_{1}}.\] (6) _Same as always,_ \({}^{\odot k}\) _will stand for powers built with this product._
The following is a mere exercise in induction:
**Lemma 1.2.4**.: _Defining \(\bigodot_{i=1}^{r}A_{i}\) recursively by \(\left(\bigodot_{i=1}^{r-1}A_{i}\right)\odot A_{r}\) with \(A_{i}\in\mathrm{Mat}_{m,n}^{k_{i},j_{i}}\),_
\[\left(A_{1}\odot\cdots\odot A_{r}\right)\mathbf{e}^{\odot\mathbf{k}}=\frac{1}{\binom{j_{1}+\cdots+j_{r}}{j_{1},j_{2},\ldots,j_{r}}}\sum_{\mathbf{p}_{1},\ldots,\mathbf{p}_{r}}\binom{\mathbf{k}}{\mathbf{p}_{1},\ldots,\mathbf{p}_{r}}\bigodot_{i=1}^{r}A_{i}\mathbf{e}^{\odot\mathbf{p}_{i}}, \tag{7}\]
_if \(\left|\mathbf{k}\right|=j_{1}+\cdots+j_{r}\), sums obviously taken for \(\mathbf{p}_{1}+\cdots+\mathbf{p}_{r}=\mathbf{k}\) and \(\left|\mathbf{p}_{i}\right|=j_{i}\), for every \(i=1,\ldots,r\). \(\qed\)_
The following is straightforward and has already been seen e.g. in [4, 5, 34], or can be easily derived therefrom:
**Proposition 1.2.5**.: _For any \(A\), \(B\), \(C\), and whenever products make sense,_
1. \(A\odot B=B\odot A\)_._
2. \(\left(A+B\right)\odot C=A\odot C+B\odot C\)_._
3. \(\left(A\odot B\right)\odot C=A\odot\left(B\odot C\right)\)_._
4. \(\left(\alpha A\right)\odot B=\alpha\left(A\odot B\right)\) _for every_ \(\alpha\in K\)_._
5. _If_ \(A\) _is square and invertible, then_ \(\left(A^{-1}\right)^{\odot k}=\left(A^{\odot k}\right)^{-1}\)_._
6. \(A\odot B=0\) _if and only if_ \(A=0\) _or_ \(B=0\)_._
7. _If_ \(A\) _is a square_ \(\left(1,1\right)\)_-matrix, then_ \(A\mathbf{v}_{1}\odot A\mathbf{v}_{2}\odot\cdots\odot A\mathbf{v}_{m}=A^{\odot m}\mathbf{v}_ {1}\odot\cdots\odot\mathbf{v}_{m}\)_._
8. _If_ \(\mathbf{v}\) _is a column vector, then_ \(\left(A\odot B\right)\mathbf{v}^{\odot\left(p+q\right)}=\left(A\mathbf{v}^{\odot p} \right)\odot\left(B\mathbf{v}^{\odot q}\right)\)_,_ \(p,q\in\mathbb{Z}_{\geq 0}\)
**Lemma 1.2.6** ([34]).:
1. _Given square_ \(A,B\in\mathrm{Mat}_{n}^{k,k}\) _and matrices_ \(X_{i}\in\mathrm{Mat}_{n}^{k,ji},\,i=1,2\)_,_ \[\left(A\odot B\right)\left(X_{1}\odot X_{2}\right)=\frac{1}{2}\left(AX_{1} \odot BX_{2}+BX_{1}\odot AX_{2}\right),\] (8) _and in general for any square_ \(A_{1},\ldots,A_{m}\in\mathrm{Mat}_{n}^{k,k}\) _and any_ \(X_{i}\in\mathrm{Mat}_{n}^{k,ji},\,i=1,\ldots,m\)_,_ \[\left(\bigodot_{i=1}^{m}A_{i}\right)\left(\bigodot_{i=1}^{m}X_{i}\right)=\frac{ 1}{m!}\sum_{\sigma\in\mathfrak{S}_{k}}\bigodot_{i=1}^{m}A_{\sigma(i)}X_{i}. \qed\] (9)
2. _Given_ \(A\in\mathrm{Mat}_{n}^{1,j}\) _and_ \(X_{1},\ldots,X_{m}\) _such that_ \(X_{i}\in\mathrm{Mat}_{n}^{1,q_{i}}\)_,_ \(1\leq j\leq m\)_,_ \[\binom{m}{j}\left(A\odot\mathrm{Id}_{n}^{\odot m-j}\right)\bigodot_{i=1}^{m}X_{i}=\sum_{1\leq i_{1}<\cdots<i_{j}\leq m}\big{[}A\left(X_{i_{1}}\odot\cdots\odot X_{i_{j}}\right)\big{]}\odot\bigodot_{s\neq i_{1},\ldots,i_{j}}X_{s}.\qed\] (10)
3. _Given a square matrix_ \(A\in\mathrm{Mat}_{n}^{1,1}\) _and_ \(X_{1},\ldots,X_{m}\) _such that_ \(X_{i}\in\mathrm{Mat}_{n}^{1,j_{i}}\)_,_ \[\left(A\odot\mathrm{Id}_{n}^{\odot m-1}\right)\left(\bigodot_{i=1}^{m}X_{i}\right)=\frac{1}{m}\sum_{i=1}^{m}\left(AX_{i}\right)\odot\left(X_{1}\odot\cdots\odot\widehat{X_{i}}\odot\cdots\odot X_{m}\right).\] (11)
4. _Given square matrix_ \(X\in\mathrm{Mat}_{n}^{k-1,k-1}\) _and vector_ \(\boldsymbol{v}\in K^{n}\)_, we have, for each_ \(i=1,\ldots,n\)_,_ \[\left(X\odot\boldsymbol{e}_{i}^{T}\right)\left(\boldsymbol{v}\odot\mathrm{Id }_{n}^{\odot k-1}\right)=\frac{k-1}{k}X\left(\boldsymbol{v}\odot\mathrm{Id}_{ n}^{\odot k-2}\right)\odot\boldsymbol{e}_{i}^{T}+\frac{v_{i}}{k}X\] (12)
**Lemma 1.2.7**.: _Let \((K,\partial)\) be a differential field._
1. _For any given_ \(X\in\mathrm{Mat}_{n}^{k_{1},j_{1}}\left(K\right)\) _and_ \(Y\in\mathrm{Mat}_{n}^{k_{2},j_{2}}\left(K\right)\)_,_ \[\partial\left(X\odot Y\right)=\partial\left(X\right)\odot Y+X\odot\partial\left(Y\right).\qed\] (13)
2. _If_ \(Y\) _is a square_ \(n\times n\) _matrix having entries in_ \(K\) _and_ \(\partial Y=AY\)_, then_ \[\partial\,\mathrm{Sym}^{k}Y=k\left(A\odot\mathrm{Sym}^{k-1}\left(\mathrm{Id} _{n}\right)\right)\mathrm{Sym}^{k}Y.\] (14)
3. _If_ \(X\in\mathrm{Mat}_{n}^{1,j_{1}}\) _and_ \(Y\in\mathrm{Mat}_{n}^{1,j_{2}}\) _satisfy systems_ \(\partial X=AX+B_{1}\) _and_ \(\partial Y=AY+B_{2}\) _with_ \(A\in\mathrm{Mat}_{n}^{1,1},B_{i}\in\mathrm{Mat}_{n}^{1,j_{i}},\) _then the symmetric product_ \(X\odot Y\) _satisfies the linear system_ \[\partial\left(X\odot Y\right)=2\left(A\odot\mathrm{Id}_{n}\right)\left(X\odot Y\right)+\left(B_{1}\odot Y+B_{2}\odot X\right).\] (15)
4. _If_ \(\partial X_{i}=AX_{i}+B_{i},\;i=1,\ldots,m,\) _with_ \(X_{i},B_{i}\in\mathrm{Mat}_{n}^{1,j_{i}}\)_,_ \(A\in\mathrm{Mat}_{n}^{1,1}\)_, then_ \[\partial\,\bigodot_{i=1}^{m}X_{i}=m\left(A\odot\mathrm{Id}_{n}^{\odot m-1}\right)\bigodot_{i=1}^{m}X_{i}+\sum_{i=1}^{m}B_{i}\odot\bigodot_{j\neq i}X_{j}.\qed\] (16)
**Definition 1.2.8**.: _(See also [5]) for every matrix \(A\in\mathrm{Mat}^{n,m}\) we define the formal power series_
\[\exp_{\odot}A:=1+A^{\odot 1}+\frac{1}{2}A^{\odot 2}+\cdots=\sum_{i=0}^{\infty} \frac{1}{i!}A^{\odot i}.\]
_Whenever \(A=0\) save for a finite distinguished submatrix \(A_{j,k}\), the abuse of notation \(\exp_{\odot}A_{j,k}=\exp_{\odot}A\) will be customary._
**Lemma 1.2.9**.:
* _For every two_ \(A,B\in\operatorname{Mat}^{n,m}\)_,_ \(\exp_{\odot}\left(A+B\right)=\exp_{\odot}A\odot\exp_{\odot}B\)_._
* _For every_ \(Y\in\operatorname{Mat}^{n,m}\) _and any derivation_ \(\partial:K\to K\)_,_ \(\partial\exp_{\odot}Y=\left(\partial Y\right)\odot\exp_{\odot}Y.\)__
* _(__[_4_, Corollary 3]__) Given square matrices_ \(A,B\in\operatorname{Mat}_{n}^{1,1}\)_,_ \(\exp_{\odot}AB=\exp_{\odot}A\exp_{\odot}B\)_._
* _In particular, for every invertible square_ \(A\in\operatorname{Mat}_{n}^{1,1}\)_,_ \(\exp_{\odot}A^{-1}=\left(\exp_{\odot}A\right)^{-1}\)_._ \(\square\)__
For examples and properties, see [34, §3.1].
### Application to power series
**Lemma 1.3.1** ([34, Lemma 3.7]).: _If \(F=F_{1}\times\cdots\times F_{m}\) is a vector power series, adequate \(M_{F}^{1,i}\in\operatorname{Mat}_{m,n}^{1,i}\left(K\right)\) render_
\[\boxed{F\left(\mathbf{x}\right)=M_{F}\exp_{\odot}X}\quad\text{ where }M_{F}:=\left( \begin{array}{ccc|cc}\cdots&M_{F}^{1,2}&M_{F}^{1,1}&M_{F}^{1,0}\\ \hline\cdots&0&0&0\end{array}\right)\in\operatorname{Mat}^{m,n}.\]
_Following Definition 1.2.8, write \(F\left(\mathbf{x}\right)=M_{F}\exp_{\odot}\mathbf{x}\) if it poses no clarity issue. \(\square\)_
Thus every formal power series \(F\in K\left[[\mathbf{x}]\right]\), \(\mathbf{x}=\left(x_{1},\ldots,x_{n}\right)\) can be expressed in the form \(M_{F}\exp_{\odot}\mathbf{x}\), where
\[M_{F}:=\left(\begin{array}{ccc|cc}\cdots&M_{F}^{1,2}&M_{F}^{1,1}&M_{F}^{1,0} \\ \hline\cdots&0&0&0\end{array}\right)\in\operatorname{Mat}^{1,n}\left(K\right), \qquad X:=\left(\begin{array}{c|c}0&\mathbf{x}\\ \hline 0&0\end{array}\right).\]
\[M_{F}=J_{F}+M_{F}^{1,0}:=\left(\begin{array}{ccc|cc}\cdots&M_{F}^{1,2}&M_{F }^{1,1}&0\\ \hline\cdots&0&0&0\end{array}\right)+\left(\begin{array}{c|c}0&M_{F}^{1,0} \\ \hline 0&0\end{array}\right). \tag{17}\]
See [34, §3.2] for more details.
### Higher-order variational equations
See [34] for proofs and further elaboration on everything summarized below:
**Notation 1.4.1**.: For every set of indices \(1\leq i_{1}\leq\cdots\leq i_{r}\) such that \(\sum_{j=1}^{r}i_{j}=k\), \(c_{i_{1},\ldots,i_{r}}^{k}\) is defined as the number of totally ordered partitions of a set of \(k\) elements among subsets of sizes \(i_{1},\ldots,i_{r}\):

\[c_{i_{1},\ldots,i_{j}}=c_{i_{1},\ldots,i_{j}}^{k}=\frac{\binom{k}{i_{1},i_{2},\ldots,i_{j}}}{n_{1}!\cdots n_{m}!},\quad\left\{\begin{array}{l}(i_{1},\ldots,i_{j})=(\underbrace{k_{1},\ldots,k_{1}}_{n_{1}},\ldots,\underbrace{k_{m},\ldots,k_{m}}_{n_{m}}),\\ 1\leq k_{1}<k_{2}<\cdots<k_{m}.\end{array}\right. \tag{18}\]
**Proposition 1.4.2**.: _Let \(K=\mathbb{C}\left(\mathbf{\phi}\right)\) and \(A_{k}\), \(Y_{k}\) be \(\partial_{k}X\left(\mathbf{\phi}\right)\), \(\partial_{k}^{k}\psi\left(t,\mathbf{\phi}\right)\) minus crossed terms; let \(A=J_{\odot}^{\phi},Y=J_{\phi}\) be the derivative jets for \(X\) and \(\psi\) at \(\mathbf{\phi}\), with \(A_{k}\), \(Y_{k}\) as blocks. Then, if \(c_{\mathfrak{i}}^{k}=\#\left\{\text{ordered }i_{1},\ldots,i_{r}\text{- partitions of }k\text{ elements}\right\}\),_
\[\overset{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text
* _Row block_ \(r\) _in_ \(\exp_{\odot}Y\) _is recursively obtained in terms of row blocks_ \(1\) _and_ \(r-1\)_:_ \[Z_{r,s}=\frac{1}{r}\sum_{j=1}^{s-r+1}\binom{s}{j}Y_{j}\odot Z_{r-1,s-j}.\] (20) _In particular,_ \(Z_{r,r}=Y_{1}^{\odot r}\) _and_ \(Z_{r,s}=0_{d_{n,r},d_{n,s}}\) _whenever_ \(r>s\)_._
* _Using Notation_ 1.4.1 _and_ (_18_)_, for every_ \(s\geq r\)__ \[Z_{r,s}=\sum_{i_{1}+\dots+i_{r}=s}c_{i_{1},\dots,i_{r}}^{s}Y_{i_{1}}\odot Y_{i_ {2}}\odot\dots\odot Y_{i_{r}}.\] (21)
**Notation 1.4.4**.: \(K:=\mathbb{C}\left(\boldsymbol{\phi}\right)\)_, \(A_{i}:=X^{(i)}(\boldsymbol{\phi})\), \(Y_{i}:=\mathrm{lex}\left(\frac{\partial^{i}}{\partial\boldsymbol{z}^{i}}\varphi(t,\boldsymbol{\phi})\right)\) and, per Lemma 1.4.3,_
\[\Upsilon_{1}=Y_{1},\qquad\Upsilon_{k}=\left(\begin{array}{cccc}Z_{k,k}&&&\\ Z_{k-1,k}&Z_{k-1,k-1}&&\\ \vdots&\vdots&\ddots&\\ Z_{1,k}&Z_{1,k-1}&\cdots&Z_{1,1}\end{array}\right),\quad k\geq 2, \tag{22}\]
_formed by the first \(k\) block rows and columns in \(\Upsilon=\exp_{\odot}Y\). Define \(A,Y\in\operatorname{Mat}\left(K\right)\) as in Lemma 1.4.3 with the above \(A_{i}\), \(Y_{i}\) as blocks. Denote the canonical basis on \(K^{n}\) (meaning the set of columns of \(\operatorname{Id}_{n}\)) by \(\{\boldsymbol{e}_{1},\dots,\boldsymbol{e}_{n}\}\)._
**Proposition 1.4.5** (Explicit version of \(\operatorname{LVE}_{\phi}^{k}\)).: _Still following Notation 1.4.4, the infinite system_
\[\boxed{\dot{X}=A_{\operatorname{LVE}_{\phi}}X,}\qquad\qquad A_{\operatorname{LVE}_{\phi}}:=A\odot\exp_{\odot}\operatorname{Id}_{n},\qquad\left(\operatorname{LVE}_{\phi}\right)\]
_has \(\Upsilon:=\exp_{\odot}Y\) as a solution matrix. Hence, for every \(k\geq 1\),_
* _the lower-triangular recursive_ \(D_{n,k}\times D_{n,k}\) _form for_ \(\operatorname{LVE}_{\phi}^{k}\) _is_ \(\dot{Y}=A_{\operatorname{LVE}_{\phi}^{k}}Y\)_, its system matrix being obtained from the first_ \(k\) _row and column blocks of_ \(A_{\operatorname{LVE}_{\phi}}\)_:_ \[A_{\operatorname{LVE}_{\phi}^{k}}=\left(\begin{array}{c|c}\binom{k}{k-1}A_{1}\odot\operatorname{Id}_{n}^{\odot k-1}&0\\ \hline\begin{array}{c}\binom{k}{k-2}A_{2}\odot\operatorname{Id}_{n}^{\odot k-2}\\ \vdots\\ \binom{k}{0}A_{k}\end{array}&A_{\operatorname{LVE}_{\phi}^{k-1}}\end{array}\right),\] (23)
* _and the principal fundamental matrix for_ \(\operatorname{LVE}_{\phi}^{k}\) _is_ \(\Upsilon_{k}\) _from_ \(\exp_{\odot}Y\) _in Notation_ 1.4.4_._
The construction of the infinite matrix \(\Upsilon_{\operatorname{LVE}_{\phi}}=\exp_{\odot}Y\) proceeds in such a way that its first \(D_{n,k}\) rows and columns form a principal fundamental matrix for \(\operatorname{LVE}_{\phi}^{k}\). The definition is recursive and amenable to symbolic computation.
**Example 1.4.6**.: For instance, for \(k=5\) we have
\[A_{\operatorname{LVE}_{\phi}^{5}}=\left(\begin{array}{cccc}5A_{1}\odot \operatorname{Id}_{n}^{\odot 4}\\ 10A_{2}\odot\operatorname{Id}_{n}^{\odot 3}&4A_{1}\odot\operatorname{Id}_{n}^{ \odot 3}\\ 10A_{3}\odot\operatorname{Id}_{n}^{\odot 2}&6A_{2}\odot\operatorname{Id}_{n}^{ \odot 2}&3A_{1}\odot\operatorname{Id}_{n}^{\odot 2}\\ 5A_{4}\odot\operatorname{Id}_{n}&4A_{3}\odot\operatorname{Id}_{n}&3A_{2}\odot \operatorname{Id}_{n}&2A_{1}\odot\operatorname{Id}_{n}\\ A_{5}&A_{4}&A_{3}&A_{2}&A_{1}\end{array}\right),\]
and the principal fundamental matrix \(\Upsilon_{5}\) is
\[\left(\begin{array}{ccccc}Y_{1}^{\odot 5}&&&&\\ 10Y_{1}^{\odot 3}\odot Y_{2}&Y_{1}^{\odot 4}&&&\\ 10Y_{1}^{\odot 2}\odot Y_{3}+15Y_{1}\odot Y_{2}^{\odot 2}&6Y_{1}^{\odot 2}\odot Y_{2}&Y_{1}^{\odot 3}&&\\ 10Y_{2}\odot Y_{3}+5Y_{1}\odot Y_{4}&4Y_{1}\odot Y_{3}+3Y_{2}^{\odot 2}&3Y_{1}\odot Y_{2}&Y_{1}^{\odot 2}&\\ Y_{5}&Y_{4}&Y_{3}&Y_{2}&Y_{1}\end{array}\right), \tag{24}\]
hence \((\mathrm{VE}_{\phi}^{k})\) for \(k=5\) is the lowest row in \(A_{\mathrm{LVE}_{\phi}^{5}}\) times the leftmost column in \(\Upsilon_{5}\).
### First integrals and higher-order variational equations
Let \(F:U\subseteq\mathbb{C}^{n}\rightarrow\mathbb{C}\) be a holomorphic function and \(\boldsymbol{\phi}:I\subset\mathbb{C}\to U\). Firstly, the flow \(\varphi\left(t,\boldsymbol{z}\right)\) of \(X\) admits, at least formally, the Taylor expansion (2) along \(\boldsymbol{\phi}\), which is expressible as
\[\varphi\left(t,\boldsymbol{\phi}+\boldsymbol{\xi}\right)=\boldsymbol{\phi}+ Y_{1}\boldsymbol{\xi}+\frac{1}{2}Y_{2}\boldsymbol{\xi}^{\odot 2}+\cdots= \boldsymbol{\phi}+J_{\phi}\exp_{\odot}\boldsymbol{\xi}, \tag{25}\]
where \(J_{\phi}\) is the jet for flow \(\varphi\left(t,\cdot\right)\) along \(\boldsymbol{\phi}\), displayed as \(Y\) in (19) and defined in Notation 1.4.4 - that is, the matrix whose \(\odot\)-exponential \(\Upsilon\) is a solution matrix for \((\mathrm{LVE}_{\phi})\). Secondly, the Taylor series of \(F\) along \(\boldsymbol{\phi}\) can be written, cfr. [3, Lemma 2] and Notation 1.2.1,
\[F\left(\boldsymbol{y}+\boldsymbol{\phi}\right)=F\left(\boldsymbol{\phi} \right)+\sum_{m=1}^{\infty}\frac{1}{m!}\left\langle F^{\left(m\right)}\left( \boldsymbol{\phi}\right)\,,\,\mathrm{Sym}^{m}\boldsymbol{y}\right\rangle. \tag{26}\]
Lemma 1.3.1 and (17) trivially imply that (26) can be expressed as \(F\left(\boldsymbol{y}+\boldsymbol{\phi}\right)=M_{F}^{\phi}\exp_{\odot}\boldsymbol{y}\), where
\[M_{F}^{\phi}=J_{F}^{\phi}+F^{\left(0\right)}(\boldsymbol{\phi}):=\left( \begin{array}{cccc}\cdots&0&0&0\\ \cdots&F^{\left(2\right)}(\boldsymbol{\phi})&F^{\left(1\right)}(\boldsymbol{ \phi})&F^{\left(0\right)}(\boldsymbol{\phi})\\ \hline\cdots&0&0&0\end{array}\right)\in\mathrm{Mat}^{1,n}\left(K\right),\]
i.e. \(J_{F}^{\phi}\) is the jet or horizontal strip of lex-sifted partial derivatives of \(F\) at \(\boldsymbol{\phi}\).
Let

\[\boxed{\dot{X}=A_{\mathrm{LVE}_{\phi}^{\star}}X,}\qquad\qquad A_{\mathrm{LVE}_{\phi}^{\star}}:=-\left(A\odot\exp_{\odot}\mathrm{Id}_{n}\right)^{T},\qquad\left(\mathrm{LVE}_{\phi}^{\star}\right)\]

be the **adjoint** or **dual** variational system of (1) along \(\phi\).
It is immediate, upon derivation of equation \(\Upsilon_{k}\Upsilon_{k}^{-1}=\mathrm{Id}_{D_{n,k}}\), that
**Lemma 1.5.1**.: \(\left(\Upsilon_{k}^{-1}\right)^{T}\) _is a principal fundamental matrix of \(\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\), \(k\geq 1\); hence \(\lim_{k}\left(\Upsilon_{k}^{-1}\right)^{T}\) is a solution to \((\mathrm{LVE}_{\phi}^{\star})\)._
The following was proven in [30] and recounted in [3, Lemma 7], and may now be expressed in a simple, compact fashion:
**Lemma 1.5.2**.: _Let \(F\) and \(\boldsymbol{\phi}\) be a holomorphic first integral and a non-constant solution of (1) respectively. Let \(V:=J_{F}^{T}\) be the transposed jet of \(F\) along \(\boldsymbol{\phi}\). Then, \(V\) is a solution of \((\mathrm{LVE}_{\phi}^{*})\)._
Hence the issue, already considered in [3], is whether some converse holds and a solution of the dual system \(\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\) is or can be the set of first \(k\) terms of the formal (or perhaps even convergent) series form of a first integral. This was addressed in the aforementioned reference and was put in compact, explicit linearized form in [34], and will be recounted below.
Define
\[\widehat{A}:=\left(\begin{array}{cccc|c}\cdots&0&0&0\\ \cdots&A_{2}&A_{1}&A_{0}\\ \hline\cdots&0&0&0\end{array}\right),\qquad A_{i}:=X^{\left(i\right)}( \boldsymbol{\phi})\in\mathrm{Mat}_{n}^{1,i}\left(K\right),\qquad A_{0}:=X\left( \boldsymbol{\phi}\right)=\dot{\boldsymbol{\phi}}, \tag{27}\]
and let
\[\widehat{A}_{\text{LVE}_{\phi}}:=\widehat{A}\odot\exp_{\odot}\text{Id}_{n}=\lim_{ k}\widehat{A}_{\text{LVE}_{\phi}^{k}}=\lim_{k}\left(\begin{array}{c|c}\binom{k}{k}X^{(0)} \left(\boldsymbol{\phi}\right)\odot\text{Id}_{n}^{\odot k}\\ \binom{k}{k-1}X^{(1)}\left(\boldsymbol{\phi}\right)\odot\text{Id}_{n}^{\odot k -1}\\ \binom{k}{k-2}X^{(2)}\left(\boldsymbol{\phi}\right)\odot\text{Id}_{n}^{\odot k -2}\\ \vdots\\ \binom{k}{0}X^{(k)}\left(\boldsymbol{\phi}\right)\odot\text{Id}_{n}^{\odot 0} \end{array}\right),\]
The following gave a compact form to [3, Th. 12] in terms of \(\odot\) and infinite matrices:
**Proposition 1.5.3** ([34, Prop. 6]).: _Let \(F\), \(\boldsymbol{\phi}\), \(V=\left(\cdots\mid V_{3}\mid V_{2}\mid V_{1}\right)\) as in Lemma 1.5.2. Then \(\widehat{A}_{\text{LVE}_{\phi}}^{T}V=0\). More specifically, \(\widehat{A}_{\text{LVE}_{\phi}^{k-1}}\left(V_{k}\mid\cdots\mid V_{2}\mid V_{1 }\right)=0\) for every \(k\geq 1\), i.e._
\[\sum_{j=0}^{k-1}\binom{k-1}{j}\left(A_{j}\odot\text{Id}_{n}^{k-1-j}\right)^{T }V_{k-j}=0,\qquad\text{for every $k\geq 1$.}\quad\square \tag{28}\]
Hence, blocks in \(V_{1},\left(V_{2},V_{1}\right)^{T},\left(V_{3},V_{2},V_{1}\right)^{T},\dots\) having all entries in the base field \(K\) and satisfying the equations in both Proposition 1.5.3 and Lemma 1.5.2 are jet blocks \(F^{(1)},F^{(2)},\dots\) of a formal series that will be a first integral if convergent. These blocks belonging to the intersection of \(\ker\widehat{A}_{\text{LVE}_{\phi}^{k-1}}^{T}\) and the solution subspace \(\text{Sol}_{K}\left(\text{LVE}_{\phi}^{k}\right)^{\star}\) were called _admissible_ solutions of the order-\(k\) adjoint system in [3]. We will call the sum constructed from an admissible solution a **formal first integral**.
For a short introduction on how changes of variables are reflected on this scheme, as well as the explicit form for the _monodromy matrix_ [39] of \(\text{LVE}_{\phi}^{k}\) along any path \(\gamma\), see [34, §§4 & 5].
## 2 Jet filtering methods
### Preliminaries
Let \(f\) be a first integral of (1). The following are immediate:
**Lemma 2.1.1**.:
1. _For every_ \(i\geq 1\)_,_ \[\left(fg\right)^{(i)}=\sum_{j=0}^{i}\binom{i}{j}f^{(j)}\odot g^{(i-j)},\] (29) _with the understanding that symmetric product by a scalar is identical to the usual product, hence_ \[J\left\{fg\right\} = U\odot\left(U^{\odot 2}\right)^{T}\left[J\left\{f\right\}\odot J \left\{g\right\}\right]=U\odot\left(U^{\odot 2}\right)^{T}\left[\exp_{\odot}J \left\{f\right\}\odot\exp_{\odot}J\left\{g\right\}\right]\] \[= U\odot\left[U^{T}\exp_{\odot}J\left\{f\right\}\right]\odot \left[U^{T}\exp_{\odot}J\left\{g\right\}\right]\] _where_ \(U=\left(\begin{array}{c|c}0&1\\ \hline 0&0\end{array}\right)\)__
2. _For every_ \(k\geq 1\)_,_ \[\left(f^{k}\right)^{(i)}=k!\sum_{j=1}^{k}\frac{f^{j-1}}{(j-1)!}\sum_{i_{1}+\cdots+i_{k-j+1}=i}c_{i_{1},\ldots,i_{k-j+1}}^{i}\bigodot_{u=1}^{k-j+1}f^{(i_{u})},\] (30)
hence defining_
\[E_{1}=\left(\begin{array}{c|c}\boldsymbol{e}_{1}^{T}&0\\ \hline 0&0\end{array}\right),\qquad U=\left(\begin{array}{c|c}0&1\\ \hline 0&0\end{array}\right),\]
_we have_
\[J\left\{f^{k}\right\} = k!\left\{U\odot\left(U^{\odot k}\right)^{T}\left[\exp_{\odot}J\left\{f\right\}\right]\right\} \tag{31}\] \[= U\odot\left[U^{T}\exp_{\odot}J\left\{f\right\}\right]^{\odot k}=U\odot\left[U^{T}J\left\{f\right\}\right]^{\odot k} \tag{32}\]
_(iii) For every \(k\geq 1\),_
\[f^{k}\left(1/f^{k}\right)^{(i)}=k\sum_{j=0}^{i}\frac{\left(k+j-1\right)!}{k!}f^{-j}\left(-1\right)^{j}\sum_{i_{1}+\cdots+i_{j}=i}c_{i_{1},\ldots,i_{j}}^{i}\bigodot_{u=1}^{j}f^{(i_{u})}, \tag{33}\]
_hence_
\[J\left\{1/f\right\} = \frac{1}{f}U\odot\left\{\left(\begin{array}{c|c}\vdots&\vdots \\ 0&3!\left(-1/f\right)^{3}\\ 0&2!\left(-1/f\right)^{2}\\ 0&1!\left(-1/f\right)\end{array}\right)^{T}\exp_{\odot}\left(J\left\{f\right\} -fU\right)\right\} \tag{34}\] \[= U\odot\left(\frac{1}{f}U^{T}J\left\{f\right\}\right)^{\odot-1}= U\odot\left(\sum_{i=0}^{\infty}\left(-1\right)^{i}\left[\frac{1}{f}U^{T}J \left\{f\right\}-\boldsymbol{1}\right]^{\odot i}\right) \tag{35}\]
### Progressive filtering
Let us describe a method to start with a jet of valuation one and then proceed to compute each jet term of that particular function immediately in terms of the previous ones.
**Lemma 2.2.1**.: _Define_
\[F_{1}:=F_{i,1}:=\mathrm{Id}_{n}-\frac{1}{X_{i}^{0}}\left(X^{0}\odot \boldsymbol{e}_{i}^{T}\right). \tag{36}\]
_The \(n-1\) non-zero rows of \(F_{1}Y_{1}^{-1}\) are linearly independent admissible solutions of degree one._
Proof.: Let \(F_{i,1}:=\mathrm{Id}_{n}-\frac{1}{X_{i}\left(\phi\left(0\right)\right)}\left(X\left(\phi\left(0\right)\right)\odot\boldsymbol{e}_{i}^{T}\right)\). We have \(\widehat{A}_{\mathrm{LVE}_{\phi}^{0}}=A_{0}^{T}=X\left(\phi\right)^{T}\), and \(\left(\Upsilon_{1}^{-1}\right)^{T}=\left(Y_{1}^{-1}\right)^{T}\) is the fundamental matrix for \(\mathrm{VE}_{1}^{\star}\). Thus checking that (28) holds equates to checking whether \(A_{0}^{T}\left(Y_{1}^{-1}\right)^{T}F_{i,1}^{T}=\boldsymbol{0}_{n}^{T}\), i.e. that \(F_{i,1}Y_{1}^{-1}A_{0}\) equals a column of zeros. This is easy to prove by noticing that, on one hand, the basic chain rule and (1) entail \(\dot{A}_{0}=A_{1}A_{0}\), and on the other, Lemma 1.5.1 for \(k=1\) implies \(\frac{d}{dt}Y_{1}^{-1}=-Y_{1}^{-1}A_{1}\). Both of these facts put together imply
\[\frac{d}{dt}\left(Y_{1}^{-1}A_{0}\right)=\left(\frac{d}{dt}Y_{1}^{-1}\right)A_{0}+Y_{1}^{-1}\dot{A}_{0}=-Y_{1}^{-1}A_{1}A_{0}+Y_{1}^{-1}A_{1}A_{0}=0\]
and thus \(\frac{d}{dt}\left(F_{i,1}Y_{1}^{-1}A_{0}\right)=F_{i,1}\frac{d}{dt}\left(Y_{1}^{-1}A_{0}\right)=0\) as well, which means \(F_{i,1}Y_{1}^{-1}A_{0}\) is a constant vector. Setting \(t=t_{0}\) and using the fact that \(Y_{1}\) is a principal fundamental matrix (i.e. \(Y_{1}\left(t_{0}\right)=\mathrm{Id}_{n}\)) means that for \(t=t_{0}\), abridging notation \(X^{0}=\left(X_{1}^{0},\ldots,X_{n}^{0}\right)=X\left(\phi\left(t_{0}\right)\right)\),

\[F_{i,1}Y_{1}\left(t_{0}\right)^{-1}A_{0}=F_{i,1}X^{0}=X^{0}-\frac{1}{X_{i}^{0}}\left(X^{0}\odot\boldsymbol{e}_{i}^{T}\right)X^{0}=X^{0}-\frac{X_{i}^{0}}{X_{i}^{0}}\,X^{0}=\boldsymbol{0}_{n},\]

since \(\left(X^{0}\odot\boldsymbol{e}_{i}^{T}\right)X^{0}=X^{0}\left(\boldsymbol{e}_{i}^{T}X^{0}\right)=X_{i}^{0}\,X^{0}\).
Still for \(k=1\), we need to check that the \(n-1\) non-zero columns of \(\left(F_{1}Y^{-1}\right)^{T}\) are linearly independent. This is a consequence of the fact that

\[F_{1}^{T}=\left(\begin{array}{ccc|c|ccc}&\mathrm{Id}_{i-1}&&0&&0&\\ \hline-\frac{X_{1}}{X_{i}}&\cdots&-\frac{X_{i-1}}{X_{i}}&0&-\frac{X_{i+1}}{X_{i}}&\cdots&-\frac{X_{n}}{X_{i}}\\ \hline&0&&0&&\mathrm{Id}_{n-i}&\end{array}\right),\]
thus in \(\left(Y^{-1}\right)^{T}F_{1}^{T}\) column \(i\) equals \(\boldsymbol{0}_{n}\) and its remaining \(n-1\) columns are the output of subtracting multiples of column \(i\) in \(\left(Y^{-1}\right)^{T}\) from its remaining (already independent) \(n-1\) columns.
Let \(\phi\left(t\right)\) be a particular solution of (1) whose parametric expression can be represented with one differential equation involving one coordinate function \(z_{i}\) (case in point: (51), (57) in §3).
**Theorem 2.2.2**.: _In the above hypotheses, let \(f_{1}\) be one of the non-zero columns of \(\left(Y_{1}^{-1}\right)^{T}F_{1}^{T}\). Define \(X_{k}:=\left(Y_{1}^{\odot k}\right)^{T}\). Then each term generated by the following recursion_
\[f_{k}:=-X_{k}^{-1}\int X_{k}\left\{\sum_{j=2}^{k}\binom{k}{j}\left(A_{j}\odot \text{Id}_{n}^{\odot k-j}\right)^{T}f_{k-j+1}\right\}dx, \tag{37}\]
_is the degree-\(k\) term of the expansion of a formal first integral_
\[f\left(\boldsymbol{z}\right)=f_{1}\boldsymbol{z}+\frac{1}{2}f_{2}\boldsymbol{z }^{\odot 2}+\frac{1}{3!}f_{3}\boldsymbol{z}^{\odot 3}+\ldots,\]
_provided that each new integral in (37) is computed by taking independent variable \(x=z_{i}\left(t\right)\), and the limits or constants of integration at each stage can be taken so as to ensure that_
\[f_{k}\left(A_{0}\odot\text{Id}_{n}^{\odot k-1}\right)+\binom{k-1}{1}f_{k-1} \left(A_{1}\odot\text{Id}_{n}^{\odot k-2}\right)+\cdots+\binom{k-1}{k-2}f_{2} \left(A_{k-2}\odot\text{Id}_{n}\right)+f_{1}A_{k-1}=0. \tag{38}\]
Proof.: Most of the work has been done already by Proposition 1.5.3, Lemma 2.2.1 and the properties intrinsic to \(\odot\). Firstly, \(X_{k}^{-1}=\left[\left(Y_{1}^{-1}\right)^{\odot k}\right]^{T}\). Secondly, condition (38) is nothing but
a repetition of (28). In order to check the consequences of defining (37), all it takes is to realize that the first \(d_{n,k}\times d_{n,k}\) equations of \(\left(\mathrm{LVE}_{\phi}^{*}\right)\) read
\[\frac{d}{dt}\left[\left(Y_{1}^{-1}\right)^{\odot k}\right]^{T}=-\left(A_{1}\odot\mathrm{Id}_{n}^{\odot k-1}\right)^{T}\left[\left(Y_{1}^{-1}\right)^{\odot k}\right]^{T},\]
which means \(X_{k}^{-1}=\left[\left(Y_{1}^{-1}\right)^{\odot k}\right]^{T}\) is a principal fundamental matrix thereof; the homogeneous part of the equations satisfied by \(f_{k}\) displays the same matrix, thus (37) is just plain variation of constants applied to what is said in Lemma 1.5.2.
### Filtering with a single infinite matrix product
The above result provides a recursive method to compute formal first integrals of arbitrary autonomous systems, thereby fulfilling the aims of the current paper. The question arises, however, as to whether a _single_ infinite filter matrix \(\Phi=\exp_{\odot}F\) exists, amenable to recursive computation but expressible in compact form and _computed without any further quadratures_, which, multiplied by the fundamental matrix of the dual systems (whose terms are also computable recursively, as seen above), yields an already-trimmed matrix comprised of linearly independent admissible solutions in a single sweep.
Building such an infinite quadrature-free filter matrix beyond \(k=1\) has a value that is more theoretical than practical in nature, since the computational infeasibility of ever-growing matrices and the effort of discarding the aforementioned cross-products \(g_{j_{1}}^{\left(i_{1}\right)}\odot\cdots\odot g_{j_{m}}^{\left(i_{m}\right)}\) get in the way of approximating jets properly. So the question arises: why consider one such matrix at all? The answer resides in the recent trends in categorification of dynamical systems (e.g. [7]) and the potential to transport the techniques known in reductions of flat meromorphic connections to arbitrary dynamical systems.
A considerable wealth of evidence shows one such quadrature-free matrix exists, at least for systems defined by analytical vector fields, but a deeper study would warrant techniques beyond the purview and goals of this article and will thus be left for future work:
**Conjecture 2.3.1**.: _Let \(\phi\left(t\right)\) be a non-trivial solution for (1) and \(t_{0}\) such that \(X_{i}\left(\phi\left(t_{0}\right)\right)\neq 0\) for some \(i\). Let \(\left(\Upsilon_{k}^{-1}\right)^{T}\) be the principal fundamental matrix for \(\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\) equalling \(\mathrm{Id}\) at \(t=t_{0}\). Use superscript \({}^{0}\) to denote values at \(t=t_{0}\) and define \(F_{1}:=F_{i,1}\) as in (36). Then, for every \(k\geq 1\), the constant matrix \(\Phi_{k}^{T}\) defined by the transpose lower left \(D_{n,k}\)-block of \(\Phi=\exp_{\odot}F\), where \(F=\left(\cdots\mid F_{3}\mid F_{2}\mid F_{1}\right)\) and_
\[F_{k} = -\frac{1}{X_{i}^{0}}\left[\sum_{j=0}^{k-2}\binom{k-1}{j}F_{j+1} \left(A_{k-j-1}\odot\mathrm{Id}^{\odot j}\right)\right]U_{k}, \tag{39}\]
_where_
\[U_{k}=\left[\sum_{j=0}^{k-1}\binom{k}{j+1}\left(-1\right)^{j}\left(\mathrm{Id }-F_{1}\right)^{\odot j}\odot\mathrm{Id}^{\odot k-1-j}\right]\odot\boldsymbol{ e}_{i}^{T}=\left[\bigodot_{j=1}^{k-1}\left(\mathrm{Id}-\zeta_{k}^{j}F_{1} \right)\right]\odot\boldsymbol{e}_{i}^{T} \tag{40}\]
_and \(\zeta_{k}=\exp\left(2\pi\mathrm{i}/k\right)\) is a primitive \(k\)-th root of unity, is such that the last \(n-1\) non-zero columns of \(\left(\Upsilon_{k}^{-1}\right)^{T}\left(\Phi_{k}\right)^{T}\) are linearly independent admissible solutions \(g_{1}^{\left(\cdot\right)},g_{2}^{\left(\cdot\right)},\ldots,g_{n-1}^{\left(\cdot\right)}\in\mathrm{Sol}\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\cap\ker\widehat{A}_{\mathrm{LVE}_{\phi}^{k-1}}^{T}\) and jets of functionally independent formal first integrals \(g_{1},\ldots,g_{n-1}\) of valuation \(1\)._
**Remarks 2.3.2**.:
1. We already know all the columns of \(\left(\Upsilon_{k}^{-1}\right)^{T}\left(\Phi_{k}\right)^{T}\) belong to \(\mathrm{Sol}\left(\mathrm{LVE}_{\phi}^{k}\right)^{\star}\), because \(\frac{d}{dt}\left(\Upsilon^{-1}\right)^{T}=A_{\mathrm{LVE}^{\star}}\left( \Upsilon^{-1}\right)^{T}\) where \(\Upsilon=\exp_{\odot}Y\), and the above finite-order products are nothing but truncations of the right-product of \(\left(\Upsilon^{-1}\right)^{T}\) by a constant matrix. Thus it only remains to show that these truncations are also annihilated by left-multiplication by \(\widehat{A}_{\mathrm{LVE}_{\phi}^{k-1}}^{T}\), and that the non-zero columns in \(\left(Y_{1}^{-1}\right)^{T}F_{1}^{T}\) are linearly independent; independence of the full non-zero columns in higher orders will be a trivial consequence of the latter fact.
2. In light of Remark 1, Lemma 2.2.1 basically entails that the entire conjecture holds true at level \(k=1\) and, as seen in §2.2, that was all we needed to compute independent jets of formal first integrals, regardless of whether Conjecture 2.3.1 loses its conjectural status:
**Proposition 2.3.3**.: _The conjecture is true at \(t_{0}\) for every \(k\geq 1\)._
Proof.: All occurrences of \(A_{i}\) from here onward will refer to constant matrices \(A_{i}^{0}=X^{(i)}\left(\phi\left(t_{0}\right)\right)\). Case \(k=1\) has already been tackled paragraphs earlier; in general, we need to prove that
\[F_{k}\left(A_{0}\odot\mathrm{Id}^{\odot k-1}\right)+\binom{k-1}{1}F_{k-1} \left(A_{1}\odot\mathrm{Id}^{k-2}\right)+\cdots+\binom{k-1}{k-2}F_{2}\left(A_{ k-2}\odot\mathrm{Id}\right)+F_{1}\left(A_{k-1}\right)=0, \tag{41}\]
for every \(k\geq 1\).
First of all, bilinearity from Prop. 1.2.5 and a simple induction argument imply, for every set of \(n\times n\) matrices \(M_{1},\ldots,M_{m}\),
\[\bigodot_{i=1}^{m}\left(\mathrm{Id}+M_{i}\right)=\mathrm{Id}_{n}^{\odot m}+ \sum_{j=1}^{m}\mathrm{Id}_{n}^{\odot m-1}\odot M_{j}+\sum_{1\leq j_{1}<j_{2} \leq m}\mathrm{Id}_{n}^{\odot m-2}\odot M_{j_{1}}\odot M_{j_{2}}+\cdots+M_{1} \odot\cdots\odot M_{m}\]
and thus setting \(m=k-1\) and \(M_{i}=-\zeta_{k}^{i}F_{1}\), very basic properties of cyclotomic polynomials imply
\[U_{k}=\left[\bigodot_{j=1}^{k-1}\left(\mathrm{Id}-\zeta_{k}^{j}F_{1}\right)\right]\odot\boldsymbol{e}_{i}^{T}=\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\odot\boldsymbol{e}_{i}^{T} \tag{42}\]
We claim that
\[U_{k}\left(A_{0}\odot\mathrm{Id}_{n}^{\odot k-1}\right)=X_{i}\mathrm{Id}_{n}^{ \odot k-1},\qquad\text{ for every }k\geq 1. \tag{43}\]
Indeed, we have, using (12) and the rest of the properties of \(\odot\) (all of which can be traced back to Lemma 1.2.4 or the universal property of \(\odot\), [34, §2.1]),
\[U_{k}\left(A_{0}\odot\mathrm{Id}_{n}^{\odot k-1}\right) = \left[\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\odot\boldsymbol{e}_{i}^{T}\right]\left(A_{0}\odot\mathrm{Id}_{n}^{\odot k-1}\right)\] \[= \frac{k-1}{k}\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\left(A_{0}\odot\mathrm{Id}_{n}^{\odot k-2}\right)\odot\boldsymbol{e}_{i}^{T}+\frac{X_{i}}{k}\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\] \[= \frac{k-1}{k}\sum_{j=0}^{k-2}\frac{k-1-j}{k-1}F_{1}^{\odot j}\odot A_{0}\odot\mathrm{Id}_{n}^{\odot k-2-j}\odot\boldsymbol{e}_{i}^{T}+\frac{X_{i}}{k}\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\] \[= \frac{1}{k}\sum_{j=0}^{k-2}\left(k-1-j\right)F_{1}^{\odot j}\odot A_{0}\odot\mathrm{Id}_{n}^{\odot k-2-j}\odot\boldsymbol{e}_{i}^{T}+\frac{X_{i}}{k}\left(\sum_{j=0}^{k-1}\mathrm{Id}_{n}^{\odot j}\odot F_{1}^{\odot k-1-j}\right)\] \[= X_{i}\,\mathrm{Id}_{n}^{\odot k-1},\] the last equality following from \(A_{0}\odot\boldsymbol{e}_{i}^{T}=X_{i}\left(\mathrm{Id}_{n}-F_{1}\right)\) and telescoping the resulting sum against the second summand.
The rest follows from (43):
\[F_{k}\left(A_{0}\odot\mathrm{Id}_{n}^{\odot k-1}\right) = -\frac{1}{X_{i}}\left[\sum_{j=0}^{k-2}\binom{k-1}{j}F_{j+1}\left(A_ {k-1-j}\odot\mathrm{Id}_{n}^{\odot j}\right)\right]U_{k}\left(A_{0}\odot \mathrm{Id}_{n}^{k-1}\right)\] \[= -\frac{1}{X_{i}}\left[\sum_{j=0}^{k-2}\binom{k-1}{j}F_{j+1}\left(A _{k-1-j}\odot\mathrm{Id}_{n}^{\odot j}\right)\right]X_{i}\mathrm{Id}_{n}^{ \odot k-1}\] \[= -\sum_{j=0}^{k-2}\binom{k-1}{j}F_{j+1}\left(A_{k-1-j}\odot \mathrm{Id}_{n}^{\odot j}\right)\]
which proves (41) true.
The following corresponds to single-row blocks (i.e. one first integral at a time) and is a valid precursor to future studies on how to maintain the filtered admissible structure even after performing changes of variables, but this is not immediately necessary to our current aims:
**Conjecture 2.3.4**.: _Let \(g\) be a first integral of (1) and \(g_{i}^{(k)}\) its \(k^{\mathrm{th}}\) lexicographic derivative, for every \(k\geq 1\). Let \(g_{k}:=g^{(k)}\left(\phi\left(t_{0}\right)\right)\) be its value at the original point and define the matrix (constant, just like \(F_{k}\)):_
\[M_{k}=\left(\begin{array}{cccc}g_{1}^{\odot k}&&&\\ c_{1,\ldots,1,2}g_{1}^{\odot k-1}\odot g_{2}&\ddots&\\ \vdots&\ddots&g_{1}^{\odot 2}&\\ g_{k}&\cdots&g_{2}&g_{1}\end{array}\right)\in\mathrm{Mat}_{1,n}^{n,n}\]
_be the first \(k\times D_{n,k}\) block of \(M:=\exp_{\odot}\left(\cdots g_{k}\mid g_{k-1}\mid\cdots\mid g_{2}\mid g_{1}\right)\). Then_
\[\left(\Upsilon_{k}^{-1}\right)^{T}\Phi_{k}^{T}M_{k}^{T}=\left(\begin{array}{ ccc}\frac{1}{k!}\left(g^{k}\right)^{(k)}&\cdots&\frac{1}{2}\left(g^{2}\right)^{(k)}&g ^{(k)}\\ &\ddots&\vdots&\vdots\\ &&&\frac{1}{2}\left(g^{2}\right)^{(2)}&g^{(2)}\\ &&&g^{(1)}\end{array}\right)\]
_In other words: \(\left(\Upsilon^{-1}\right)^{T}\Phi^{T}M^{T}\) is an infinite matrix whose columns are the Taylor terms of the respective powers of \(g\). \(\square\)_
**Example 2.3.5**.: In both cases (the quadrature-free one posed by Conjecture 2.3.1 and the proven, quadrature-derived method in Theorem 2.2.2) every new Taylor block of a given formal first integral can be obtained via a filter matrix; the only difference is whether the matrix has a simple explicit form or not. Let us show what the first stages of _any_ filter matrix (proven or not) do to the fundamental matrix. Let us assume, for instance, \(i=1\). Stage \(k=1\) is not even a conjecture in virtue of Lemma 2.2.1 and the first order filter matrix, namely \(F_{1}\) in (36), transforms the transpose of the fundamental matrix of the dual system \(\Upsilon_{1}^{-1}=Y_{1}^{-1}\) into a matrix of the form
\[\left(\begin{array}{c}\mathbf{0}\\ \hline G_{1}\end{array}\right):=\left(\begin{array}{c}\mathbf{0}\\ \hline g_{1}^{(1)}\\ g_{2}^{(1)}\\ \vdots\\ g_{n-1}^{(1)}\end{array}\right)=\left(\begin{array}{cccc}0&0&\cdots&0&0\\ g_{1,1}^{(1)}&g_{1,2}^{(1)}&\cdots&g_{1,n-1}^{(1)}&g_{1,n}^{(1)}\\ g_{2,1}^{(1)}&g_{2,2}^{(1)}&\cdots&g_{2,n-1}^{(1)}&g_{2,n}^{(1)}\\ \vdots&\vdots&&\vdots&\vdots\\ g_{n-1,1}^{(1)}&g_{n-1,2}^{(1)}&\cdots&g_{n-1,n-1}^{(1)}&g_{n-1,n}^{(1)}\end{array} \right); \tag{44}\]
each of these row vectors \(g_{1}^{(1)},\ldots,g_{n-1}^{(1)}\) is the potential gradient of a formal first integral and all \(n-1\) of them are linearly independent. In other words, they are the outputs of Lemma 2.2.1. The next step would be to integrate the second variational system (from here onwards only quadratures are needed):
\[Y_{2}\left(t\right)=Y_{1}\int_{0}^{t}Y_{1}^{-1}\left(\tau\right)A_{2}\left(\tau\right)Y_{1}^{\odot 2}\left(\tau\right)d\tau\]
invert the fundamental matrix for \(\left(\text{LVE}^{2}\right)\),

\[\Upsilon_{2}=\left(\begin{array}{cc}Y_{1}^{\odot 2}&0\\ Y_{2}&Y_{1}\end{array}\right),\qquad\text{thus}\qquad\Psi_{2}:=\left(\Upsilon_{2}^{-1}\right)^{T}\]
and multiply it by the second-order filter matrix
\[\Phi_{2}=\left(\begin{array}{cc}F_{1}^{\odot 2}&0\\ F_{2}&F_{1}\end{array}\right),\]
where \(F_{2}\) is obtained from (39), (40) or from (37), and the result we obtain is
\[\Phi_{2}\left(\Psi_{2}\right)^{T}=\Phi_{2}\Upsilon_{2}^{-1}=\left(\begin{array}{cc}G_{1}^{\odot 2}&0\\ G_{2}&G_{1}\end{array}\right)=\left(\begin{array}{c|c}G_{1}^{\odot 2}&0\\ \hline\begin{array}{c}\mathbf{0}\\ g_{1}^{(2)}\\ \vdots\\ g_{n-1}^{(2)}\end{array}&\begin{array}{c}\mathbf{0}\\ g_{1}^{(1)}\\ \vdots\\ g_{n-1}^{(1)}\end{array}\end{array}\right) \tag{45}\]
Take for instance the first non-zero row of the lower block \(\left(G_{2}\mid G_{1}\right)\), i.e. the one highlighted in gray. Imagine we perform the computations for \(\left(\text{LVE}_{\phi}^{3}\right)\) and multiply \(\Upsilon_{3}^{-1}\) by the filter matrix \(\Phi_{3}\); we obtain a matrix having the one in (45) as a lower right \(D_{n,k}\times D_{n,k}\) block:
\[\Phi_{3}\Upsilon_{3}^{-1} = \left(\begin{array}{ccc}F_{1}^{\odot 3}&0&0\\ 3F_{1}\odot F_{2}&F_{1}^{\odot 2}&0\\ F_{3}&F_{2}&F_{1}\end{array}\right)\Upsilon_{3}^{-1}=\left(\begin{array}{ccc}G_{1}^{\odot 3}&0&0\\ 3G_{1}\odot G_{2}&G_{1}^{\odot 2}&0\\ G_{3}&G_{2}&G_{1}\end{array}\right), \tag{46}\]

whose lower block \(\left(G_{3}\mid G_{2}\mid G_{1}\right)\) contains, in particular, the row \(\left(g_{1}^{(3)}\mid g_{1}^{(2)}\mid g_{1}^{(1)}\right)\),
where \(g_{1}^{(i)}=g_{1}^{(i)}\left(\boldsymbol{\phi}\right)\), i.e. the implementation of (3) in 1.2.1 (2.) to \(\boldsymbol{z}=\phi\). The same applies, _mutatis mutandis_, to the rest of the rows of the infinite block \(\ldots G_{3}\mid G_{2}\mid G_{1}\), i.e. replacing \(g_{1}^{(i)}\) with \(g_{j}^{(i)}\) for some other \(j=2,\ldots,n-1\). Needless to say, the upper terms in (46) and its higher-order counterparts contain all the cross-products arising from derivatives of products of the first integrals, e.g. \(g_{1}^{(1)}\odot g_{3}^{(2)}\) would be a row in \(G_{1}\odot G_{2}\) (it is easy to check which one by way of lexicographic ordering and we invite the reader to do this themselves, using the examples in §3).
## 3 Examples
### Dixon's system
The following system,
\[\dot{u}=\frac{uv}{u^{2}+v^{2}}-\alpha u,\qquad\dot{v}=\frac{v^{2}}{u^{2}+v^{2} }-\beta v+\beta-1, \tag{48}\]
where \(\alpha,\beta>0\), arose in [8] as a decoupled two-dimensional fragment of a three-dimensional dynamical model of the magnetic field of neutron stars. Several references (e.g. [2, 11, 37]) discussed the ostensibly chaotic behavior of the dynamics of (48). [33], on the other hand, argued for the non-chaotic character of this system based on a version of the Poincaré–Bendixson Theorem that precludes the need for compact sets comprising finitely many homoclinic/heteroclinic connections. Given the existing, seldom-discussed gap between the concepts _non-chaotic_ and _integrable_ (regardless of which ad-hoc definition of chaos is used), we can afford to eschew the polemic altogether and focus on finding a conserved quantity for (48) using higher variational equations.
We will focus on the case \(\alpha=\beta>0\). The case \(\alpha\neq\beta\), as well as the original three-dimensional model ([11, eqs (3)-(5)]), will be tackled in future work.
\(\alpha=1\) is immediate to solve and needs no further consideration, so let us focus on \(\alpha\neq 1\). Previous attempts at simplifying the higher-order \(\mathrm{LVE}^{k}\) suggest the change of variables
\[\left(u,v\right)=\left(\frac{\sin y}{x},\frac{\cos y}{x}\right) \tag{49}\]
which transforms (48) for \(\alpha=\beta\) into
\[\dot{x}=\alpha x\left(1-x\cos y\right),\qquad\dot{y}=-\left(\alpha-1\right)x \sin y. \tag{50}\]
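This transformation can be checked mechanically; the following SymPy sketch (our own verification script, not part of the original derivation, with `a` standing for \(\alpha\)) confirms that (49) carries the flow of (50) back to (48) with \(\beta=\alpha\):

```python
import sympy as sp

x, y, a = sp.symbols('x y alpha', positive=True)

# vector field of the transformed system (50)
xdot = a*x*(1 - x*sp.cos(y))
ydot = -(a - 1)*x*sp.sin(y)

# change of variables (49)
u = sp.sin(y)/x
v = sp.cos(y)/x

# chain rule: time derivatives of u, v along the flow of (50)
udot = sp.diff(u, x)*xdot + sp.diff(u, y)*ydot
vdot = sp.diff(v, x)*xdot + sp.diff(v, y)*ydot

# compare against the right-hand sides of Dixon's system (48) with beta = alpha
print(sp.simplify(udot - (u*v/(u**2 + v**2) - a*u)))           # expected: 0
print(sp.simplify(vdot - (v**2/(u**2 + v**2) - a*v + a - 1)))  # expected: 0
```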
We will use particular solution
\[\phi\left(t\right)=\left(x\left(t\right),0\right)\qquad\text{ such that }\dot{x}=-\alpha\left(-1+x\right)x. \tag{51}\]
Along \(\Gamma\), the higher variational complex and its dual (LVE\({}^{*}_{\phi}\)) feature the following blocks for \(k\geq 0\),
\[A_{k}=\left\{\begin{array}{ll}X\left(x,y\right)=\left(\begin{array}{cc} \alpha x\left(1-x\cos y\right)\\ -\left(\alpha-1\right)x\sin y\end{array}\right),&k=0,\\ \left(\begin{array}{cc}-\alpha\left(2x-1\right)&0\\ 0&-\left(\alpha-1\right)x\end{array}\right),&k=1,\\ \left(\boxed{0_{2\times\left(k-2\right)}}&\left(-1\right)^{k/2}2\alpha&0&\left( -1\right)^{k/2+1}\alpha x^{2}\\ 0&\left(-1\right)^{k/2}\left(-1+\alpha\right)&0\end{array}\right),&k\text{ even}\\ \left(\boxed{0_{2\times\left(k-1\right)}}&\left(-1\right)^{\frac{k+1}{2}}2 \alpha x&0\\ 0&\left(-1\right)^{\frac{k+1}{2}}\left(-1+\alpha\right)x\end{array}\right),&k \text{ odd},\end{array}\right. \tag{52}\]
for instance a principal fundamental matrix of (VE\({}_{\phi}\)) is
\[\Upsilon_{1}=Y_{1}=\left(\begin{array}{cc}\frac{\left(x-1\right)x}{\left(x_{0} -1\right)x_{0}}&0\\ 0&\left(1-x_{0}\right)^{\frac{1}{\alpha}-1}(1-x)^{1-\frac{1}{\alpha}}\end{array}\right)\]
and the first-order filter matrix is very simple in this case,
\[F_{1}=\mathrm{Id}_{2}-\frac{1}{X_{1}^{0}}\left(X^{0}\odot\boldsymbol{e}_{1}^{T }\right)=\left(\begin{array}{cc}0&0\\ -\frac{X_{2}\left(x_{0},y_{0}\right)}{X_{1}\left(x_{0},y_{0}\right)}&1\end{array} \right)=\left(\begin{array}{cc}0&0\\ 0&1\end{array}\right),\]
where \(x_{0}=x\left(0\right)\) and \(y_{0}=y\left(0\right)\); so are the higher-order matrices in \(\Phi=\exp_{\odot}F\) obtained from (39), some of which are shown below:
\[F_{2}=\left(\begin{array}{cc}0&0&0\\ 0&\frac{1-\alpha}{\alpha\left(x_{0}-1\right)}&0\end{array}\right),\quad F_{3 }=\left(\begin{array}{cc}0&0&0&0\\ 0&\frac{\left(\alpha-1\right)\left(2\alpha-1\right)}{\alpha^{2}\left(x_{0}-1 \right)^{2}}&0&0\end{array}\right),\] \[F_{4}=\left(\begin{array}{cc}0&0&0&0\\ 0&-\frac{\left(\alpha-1\right)\left(2\alpha-1\right)\left(3\alpha-1\right)}{ \alpha^{3}\left(x_{0}-1\right)^{3}}&0&-\frac{\left(\alpha-1\right)\left(2x_{0 }+1\right)}{\alpha\left(x_{0}-1\right)^{2}}&0\end{array}\right),\ldots\]
Returning to \(k=1\), Lemma 2.2.1 ensures that the second column of the filtered matrix
\[\left(Y_{1}^{-1}\right)^{T}F_{1}^{T}=\left(\begin{array}{cc}0&0\\ 0&\left(1-x_{0}\right)^{1-\frac{1}{\alpha}}(1-x)^{\frac{1}{\alpha}-1}\end{array} \right),\]
is an admissible solution of degree \(1\). We can normalize this to eliminate appearances of \(x_{0}\):
\[\left(Y_{1}^{-1}\right)^{T}F_{1}^{T}P^{T}:=\left(\begin{array}{cc}0&0\\ 0&\left(1-x_{0}\right)^{1-\frac{1}{\alpha}}(1-x)^{\frac{1}{\alpha}-1}\end{array} \right)\left(\begin{array}{cc}1&0\\ 0&\left(1-x_{0}\right)^{\frac{1}{\alpha}-1}\end{array}\right)^{T}=\left( \begin{array}{cc}0&0\\ 0&\left(1-x\right)^{\frac{1}{\alpha}-1}\end{array}\right),\]
the second of whose columns is still the gradient \(f^{\left(1\right)}\) of a formal first integral (this is exactly what appears as a row \(g_{1}^{\left(1\right)}\) in (44)).
The higher-order trimming (in which we discard redundant symmetric products \(f^{\left(i_{1}\right)}\odot\cdots\odot f^{\left(i_{k}\right)}\), i.e. derivatives of powers of \(f\)) is clear once we write down the dual system of equations and filter out telescopically, using either Lemma 2.1.1 or (37); the general form is such that each new term \(f^{\left(k\right)}\) of our formal first integral \(f\) is expressible as follows (bear in mind that \(x=x\left(t\right)\) here): \(f^{\left(k\right)}\left(x\right)=\left(\frac{d}{dx}f^{\left(k-1\right)}\mid g _{k}\left(x\right)\right)\), where
\[\sum_{j=0}^{\frac{k-1}{2}}\left(\left(-1\right)^{j+1}\left(\alpha-1\right)\binom{k}{k-1-2j}g_{k-2j}\left(x\right)+\left(-1\right)^{j+1}\alpha\binom{k}{k-2j}xg_{k-2j}^{\prime}\left(x\right)\right)+\alpha g_{k}^{\prime}\left(x\right)=0,\]
which means \(g_{k}\left(x\right)=0\) for every even \(k\geq 2\) and e.g.
\[f_{1} = \left(\begin{array}{cc}0&\left(1-x\right)^{\frac{1}{\alpha}-1} \end{array}\right),\] \[f_{2} = \left(\begin{array}{ccc}0&\frac{\alpha-1}{\alpha}\left(1-x\right)^{ \frac{1}{\alpha}-2}&0\end{array}\right),\] \[f_{3} = \left(\begin{array}{cccc}0&\left(\frac{1}{\alpha}-2\right)\left( \frac{1}{\alpha}-1\right)\left(1-x\right)^{\frac{1}{\alpha}-3}&0&\frac{\left(1-x \right)^{\frac{1}{\alpha}-2}\left(2\alpha+\left(\alpha-2\right)x-1\right)}{ \alpha-2}\end{array}\right),\] \[f_{4} = \left(\begin{array}{cc}\frac{d}{dx}f_{3}&0\end{array}\right),\] \[f_{5} = \left(\begin{array}{cc}\frac{d}{dx}f_{4}&-\frac{\left(1-x\right) ^{\frac{1}{\alpha}-3}\left(-\left(\alpha-2\right)x\left(\left(\alpha-2\right) \left(3\alpha-4\right)x+\alpha\left(39\alpha-55\right)+14\right)+\alpha\left( 2\left(53-24\alpha\right)\alpha-73\right)+16}{\left(\alpha-2\right)^{2}\left(3 \alpha-4\right)}\end{array}\right),\]
This can be achieved via (37) in Theorem 2.2.2 (the easiest option even for low values of \(k\)) or via the filter matrix described in §2.3, computing each \(f_{k}\) in terms of the preceding \(f_{k-1},\ldots,f_{1}\). We invite the reader to apply either method and check that the above terms appear with only minor bookkeeping at each step.
A tedious scrutiny of the general form of these vectors shows that they are precisely limit cases of the higher-order Taylor terms (along \(\left(x\left(t\right),0\right)\in\Gamma\)) of the following function:
\[f=\left(x\sin^{\frac{\alpha}{1-\alpha}}(y)-\left(\frac{\alpha\cos y}{\alpha-1 }\right){}_{2}F_{1}\left(\frac{1}{2},\frac{\alpha}{2(\alpha-1)}+1;\frac{3}{2} ;\cos^{2}y\right)\right){}^{\frac{1}{\alpha}-1}\]
where \({}_{2}F_{1}\) is the **Gaussian hypergeometric function**[1]. Removing the outer power yields a first integral of (50) as well, as can be easily checked:
\[f=x\sin^{\frac{\alpha}{1-\alpha}}(y)-\left(\frac{\alpha\cos y}{\alpha-1} \right){}_{2}F_{1}\left(\frac{1}{2},\frac{\alpha}{2(\alpha-1)}+1;\frac{3}{2} ;\cos^{2}y\right).\]
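A numerical sanity check (a sketch of ours with arbitrary test values, relying on SciPy's integrator and mpmath's `hyp2f1`): \(f\) should remain constant along any trajectory of (50), up to integration error:

```python
import numpy as np
from scipy.integrate import solve_ivp
from mpmath import hyp2f1

a = 1.7   # arbitrary test value alpha = beta > 0, alpha != 1

def rhs(t, w):                       # system (50)
    x, y = w
    return [a*x*(1 - x*np.cos(y)), -(a - 1)*x*np.sin(y)]

def f(x, y):                         # the candidate first integral above
    return (x*np.sin(y)**(a/(1 - a))
            - (a*np.cos(y)/(a - 1))
            * float(hyp2f1(0.5, a/(2*(a - 1)) + 1, 1.5, np.cos(y)**2)))

sol = solve_ivp(rhs, (0, 1.5), [0.8, 0.6], rtol=1e-11, atol=1e-12,
                dense_output=True)
for t in np.linspace(0, 1.5, 4):
    print(t, f(*sol.sol(t)))         # second column should stay constant
```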
Undoing the transformation (49) we obtain:
**Theorem 3.1.1**.: _The following function_
\[\boxed{F=u^{\frac{\alpha}{1-\alpha}}v^{\frac{1}{\alpha-1}}\left(\frac{u^{2}}{ v^{2}}+1\right)^{\frac{1}{2(\alpha-1)}}-\frac{\alpha v\,{}_{2}F_{1}\left( \frac{1}{2},\frac{\alpha}{2(\alpha-1)}+1;\frac{3}{2};\frac{v^{2}}{u^{2}+v^{2 }}\right)}{(\alpha-1)\sqrt{u^{2}+v^{2}}}}\]
_is a first integral of the original Dixon system (48) in the special case \(\alpha=\beta\). \(\square\)_
### The SIR model with vital dynamics
The **SIR epidemiological model with vital dynamics**[10, 17] is given by
\[\left\{\begin{array}{rcl}\dot{S}&=&\mu(n-S)-\frac{\beta SI}{n},\\ \dot{I}&=&\frac{\beta SI}{n}-I(\gamma+\mu),\\ \dot{R}&=&\gamma I-\mu R.\end{array}\right. \tag{53}\]
The system was first introduced in [17] strictly for the modelling of infectious diseases, but has since been applied to a number of situations allowing dynamic compartmentalization, e.g. marketing [32]. The study of its integrability has already been the subject of several works ([6, 14]) and the search for a first integral is still an open problem. We will deal with specific examples here; the general case will be addressed in the future.
First, let us start with a very simple example whose solution we already know: \(\beta=\gamma\) and \(\mu=0\). With the change of variables
\[S=n\left(1+xy\right),\quad I=-ny,\quad R=\gamma nz, \tag{54}\]
we obtain the system of equations \(\frac{d}{dt}\left(x,y,z\right)=\left(\gamma-\gamma(x-1)xy,\ \gamma xy^{2},\ -y\right)\). Applying Conjectures 2.3.1 and 2.3.4 and (if the matrix is used) Lemma 2.1.1, the general terms of the jets telegraph a very identifiable structure: that of two first integrals for the transformed system, \(g^{\star}=\frac{\log(xy+1)}{\gamma}+z\) and \(f^{\star}=-xy+\log(xy+1)+y\), which transform back by (54) into \(g=\frac{n\log\left(\frac{S}{n}\right)+R}{\gamma n}\) and \(f=-\frac{S+I}{n}+\log\left(\frac{S}{n}\right)+1\), thus taking us to the already-known first integral \(S+I+R-n\) (thus, \(S+I+R\)) and the remaining one, \(\frac{n\log\left(\frac{S}{n}\right)+R}{\gamma n}\) (alternatively, \(\frac{S}{n}e^{\frac{R}{n}}\)). This is all in keeping with what is already known about the integrability of the system for \(\mu=0\) (see e.g. [6, §1]). The reader can fill in the blanks for filter matrices or progressive filtering (Theorem 2.2.2) and formal Taylor terms here; we will add more detail about these in the less trivial case below.
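Both claims are immediate to verify; a minimal SymPy sketch (ours, not part of the original text) computes the Lie derivative of \(f\) and \(g\) along the field (53) with \(\beta=\gamma\), \(\mu=0\):

```python
import sympy as sp

S, I, R, n, gam = sp.symbols('S I R n gamma', positive=True)

# vector field (53) with beta = gamma and mu = 0
Sdot = -gam*S*I/n
Idot = gam*S*I/n - gam*I
Rdot = gam*I

def lie_derivative(F):
    """Derivative of F along the flow of (53)."""
    return sp.simplify(sp.diff(F, S)*Sdot + sp.diff(F, I)*Idot
                       + sp.diff(F, R)*Rdot)

g = (n*sp.log(S/n) + R)/(gam*n)
f = -(S + I)/n + sp.log(S/n) + 1

print(lie_derivative(g), lie_derivative(f))   # expected: 0 0
```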
Let us now tackle the case in which \(\gamma=0\) and \(\beta,\mu\neq 0\). To the best of our knowledge, the integrability of this case is left open in earlier references. The change of variables
\[\left(S,I,R\right)=\left(y^{2}\left(x^{-\frac{\beta}{\mu}}+1\right)+n,-x^{- \frac{\beta}{\mu}}y^{2},y^{2}z\right), \tag{55}\]
itself suggested by an earlier try at simplifying \(\mathrm{LVE}_{\phi}^{3}\), provides us with the new system
\[\dot{x}=-\frac{\mu x\left[y^{2}\left(x^{-\frac{\beta}{\mu}}+1\right)+n\right] }{n},\quad\dot{y}=-\frac{1}{2}\mu y,\quad\dot{z}=0. \tag{56}\]
This system admits an invariant plane \(y=z=0\) containing particular solution
\[\phi\left(t\right)=\left(\widehat{x}\left(t\right),0,0\right),\qquad\text{ where }\frac{d}{dt}\widehat{x}=-\mu\widehat{x}. \tag{57}\]
This leads to the variational system \(\dot{Y}=\left(A\odot\exp_{\odot}\mathrm{Id}_{n}\right)Y\) along \(\phi\) where
\[A_{\mathrm{LVE}_{\phi}}=A\odot\exp_{\odot}\mathrm{Id}_{n}=\left(\begin{array} []{cccc}\ddots&&&\\ \ldots&4A_{1}\odot\mathrm{Id}_{n}^{\odot 3}&&&\\ \ldots&6A_{2}\odot\mathrm{Id}_{n}^{\odot 2}&3A_{1}\odot\mathrm{Id}_{n}^{\odot 2}&\\ \ldots&4A_{3}\odot\mathrm{Id}_{n}&3A_{2}\odot\mathrm{Id}_{n}&2A_{1}\odot \mathrm{Id}_{n}&\\ \ldots&A_{4}&A_{3}&A_{2}&A_{1}\end{array}\right),\]
and \(A_{i}\) defined per (27) appear as follows:
\[A_{0} = \left(\begin{array}{c}-\mu\widehat{x}(t)\\ 0\\ 0\end{array}\right)\] \[A_{1} = \left(\begin{array}{cccc}-\mu&0&0\\ 0&-\frac{\mu}{2}&0\\ 0&0&0\end{array}\right)\] \[A_{2} = \left(\begin{array}{cccc}0&0&0&\frac{2\mu\widehat{x}(t)\left(- \widehat{x}(t)^{-\frac{\beta}{\mu}}-1\right)}{n}&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{array}\right)\] \[A_{3} = \left(\begin{array}{cccc}0&0&0&\frac{2\left(\left(\beta-\mu \right)\widehat{x}(t)^{-\frac{\beta}{\mu}}-\mu\right)}{n}&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\end{array}\right)\] \[A_{k} = \left(\begin{array}{cccc}0&0&0&\frac{2\left(\left(\beta-\mu \right)\widehat{x}(t)^{-\frac{\beta}{\mu}}-\mu\right)}{n}&0_{3\times d_{3,k-4}} \\ 0&0&0&0&0\end{array}\right),\qquad k\geq 4\]
The lower blocks of the quadrature-free filter matrix are built as proposed in §2.3; let us write out the first few:
\[F_{1}\!=\!\left(\begin{array}{ccc}0&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\,F_{2}\!=\!\left(\begin{array}{cc|c}0&0&\\ 0&-\frac{1}{2\widehat{x}_{0}}&0_{3\times 4}\\ 0&0&\end{array}\right),\,F_{3}\!=\!\left(\begin{array}{cc|c}0&0&\\ 0&\frac{3}{4\widehat{x}_{0}^{2}}&0_{3\times 8}\\ 0&0&\end{array}\right),\,F_{4}\!=\!\left(\begin{array}{ccccc}0&0&\cdots&0&\cdots\\ 0&-\frac{15}{8\widehat{x}_{0}^{3}}&\cdots&\frac{3\left(\widehat{x}_{0}^{-\frac{\beta}{\mu}}+1\right)}{n\widehat{x}_{0}}&\cdots\\ 0&0&\cdots&0&\cdots\end{array}\right)\]
The filtered matrix has two linearly independent admissible solutions as its nonzero rows:
\[F_{1}Y_{1}^{-1}=\left(\begin{array}{ccc}0&0&0\\ 0&\frac{\sqrt{\widehat{x}_{0}}}{\sqrt{\widehat{x}(t)}}&0\\ 0&0&1\end{array}\right),\]
which (optionally) multiplied by
\[P_{1}=\left(\begin{array}{ccc}1&0&0\\ 0&\frac{1}{\sqrt{\widehat{x}_{0}}}&0\\ 0&0&1\end{array}\right) \tag{58}\]
removes occurrences of the initial condition \(\widehat{x}_{0}\):
\[G_{1}=\left(\begin{array}{ccc}0&0&0\\ \hline 0&\frac{1}{\sqrt{\widehat{x}(t)}}&0\\ \hline 0&0&1\end{array}\right)=\left(\begin{array}{c}\mathbf{0}\\ \hline f^{(1)}\\ \hline g^{(1)}\end{array}\right).\]
Let us write things down for \(k=3\): once we have solved the direct variational system \(\left(\mathrm{LVE}_{\phi}^{3}\right)\) and obtained its fundamental matrix \(\Upsilon_{3}\), the principal fundamental matrix of the dual system is nothing but the transpose of its inverse; we can either keep said transpose \(\left(\Upsilon_{3}^{-1}\right)^{T}\) and right-multiply it by the proposed filter \(\Phi_{3}^{T}\) (in which case first integral jets appear as columns) or left-multiply the original \(\Upsilon_{3}^{-1}\) by \(\Phi_{3}\), in which case jets appear as rows in the bottom block:
\[\Phi_{3}\Upsilon_{3}^{-1} = \left(\begin{array}{ccc}F_{1}^{\odot 3}&0&0\\ 3F_{1}\odot F_{2}&F_{1}^{\odot 2}&0\\ F_{3}&F_{2}&F_{1}\end{array}\right)\left(\begin{array}{ccc}Y_{1}^{\odot 3}&0&0\\ 3Y_{1}\odot Y_{2}&Y_{1}^{\odot 2}&0\\ Y_{3}&Y_{2}&Y_{1}\end{array}\right)^{-1},\]

and the result is then normalized by left-multiplication with a block matrix \(\Pi_{3}\) assembled from matrices \(P_{1},P_{2},P_{3}\),
where \(P_{1}\) is as in (58) and \(P_{2},P_{3}\) are successively chosen so as to remove occurrences of \(\widehat{x}_{0}\) from the matrix (in this case, all terms in \(P_{2},P_{3}\) are equal to zero except for \(\left(P_{3}\right)_{2,7}=\frac{3\left(\mu\left(-\widehat{x}_{0}^{-\frac{\beta}{ \mu}}-1\right)+\beta\right)}{n\sqrt{\widehat{x}_{0}}(\beta-\mu)}\)). This equates to weeding out copies of successive symmetric products of \(f^{(i)}\) and/or \(g^{(j)}\), and harks back to Lemma 2.1.1 and Conjecture 2.3.4. The resulting matrix retains its neat echeloned structure
\[\Pi_{3}\Phi_{3}\Upsilon_{3}^{-1}=\left(\begin{array}{ccc}G_{1}^{\odot 3}&0&0 \\ 3G_{1}\odot G_{2}&G_{1}^{\odot 2}&0\\ G_{3}&G_{2}&G_{1}\end{array}\right)\]
where lower block \(G_{3}\mid G_{2}\mid G_{1}\) has independent formal first integral jets as its non-zero rows:
\[\left(G_{3}\mid G_{2}\mid G_{1}\right)=\left(\begin{array}{c}\mathbf{0}\\ \hline f^{(3)}\mid f^{(2)}\mid f^{(1)}\\ \hline g^{(3)}\mid g^{(2)}\mid g^{(1)}\end{array}\right).\]

Identifying the general terms of these jets and resumming, we obtain:
**Theorem 3.2.1**.: _The following two functions,_
\[F = \frac{e^{\frac{-n+S+I}{2n}}\sqrt{-n+S+I}}{\left(\frac{-n+S+I}{I}- \left(\frac{\beta}{\mu n}\right)^{\beta/\mu}e^{\frac{\beta(-n+S+I)}{\mu n}}(-n+S +I)^{\beta/\mu}\Gamma\left(1-\frac{\beta}{\mu},\frac{\beta(-n+S+I)}{\mu n} \right)\right)^{\frac{\mu}{2\beta}}},\] \[G = \frac{R}{-n+S+I}.\]
_are two functionally independent first integrals of the SIR system (53) with \(\gamma=0\). \(\Box\)_
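A numerical corroboration (a sketch with arbitrary parameter values \(\beta=0.9\), \(\mu=0.4\), \(n=1\); mpmath's `gammainc` supplies the upper incomplete gamma function): if the formulas above are transcribed correctly, both printed columns stay constant up to integrator error:

```python
import numpy as np
from scipy.integrate import solve_ivp
from mpmath import gammainc

beta, mu, n = 0.9, 0.4, 1.0   # arbitrary test values, gamma = 0

def rhs(t, w):                # system (53) with gamma = 0
    S, I, R = w
    return [mu*(n - S) - beta*S*I/n, beta*S*I/n - mu*I, -mu*R]

def F(S, I, R):
    u = -n + S + I
    bracket = (u/I - (beta/(mu*n))**(beta/mu)*np.exp(beta*u/(mu*n))
               * u**(beta/mu)*float(gammainc(1 - beta/mu, beta*u/(mu*n))))
    return np.exp(u/(2*n))*np.sqrt(u)/bracket**(mu/(2*beta))

def G(S, I, R):
    return R/(-n + S + I)

sol = solve_ivp(rhs, (0, 3), [1.2, 0.3, 0.1], rtol=1e-11, atol=1e-13,
                dense_output=True)
for t in np.linspace(0, 3, 4):
    S, I, R = sol.sol(t)
    print(t, F(S, I, R), G(S, I, R))   # both columns should stay constant
```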
It is worth noting, however, that in most cases the only easy goal we can aspire to will be that of formal first integrals with a compact, identifiable general term but without a straightforward convergent expression; resummation and deep pattern-recognition techniques will need to be adapted to each problem. This will also be the subject of future research in the short term, but for the time being let us see a short example below.
### The van der Pol oscillator
This well-known system is usually presented as an epitome of the proverbial "simple yet chaotic" dynamical system [37]:
\[\ddot{u}=\mu(1-u^{2})\dot{u}-u \tag{60}\]
The value \(\mu=2\) appears to play a significant, yet still unclarified, role in potential qualitative simplifications under variable transformations. Thus let us fix this value for the present paper:
\[\dot{u}=v,\qquad\dot{v}=2(1-u^{2})v-u \tag{61}\]
Perform the change of variables
\[\left(u\left(t\right),v\left(t\right)\right)=\left(\frac{x\left(t\right)y \left(t\right)}{\sqrt{2}},\left(\frac{x\left(t\right)}{\sqrt{2}}-\frac{1}{ \sqrt{2}}\right)y\left(t\right)\right) \tag{62}\]
that will simplify the corresponding jet, and our transformed system is
\[\dot{x}=-(x-1)x^{3}y^{2}-1,\qquad\dot{y}=(x-1)x^{2}y^{3}+y. \tag{63}\]
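Again, the change of variables can be verified mechanically; the SymPy sketch below (ours) checks that (62) conjugates (63) to the van der Pol system (61) with \(\mu=2\):

```python
import sympy as sp

x, y = sp.symbols('x y')

# transformed vector field (63)
xdot = -(x - 1)*x**3*y**2 - 1
ydot = (x - 1)*x**2*y**3 + y

# change of variables (62)
u = x*y/sp.sqrt(2)
v = (x/sp.sqrt(2) - 1/sp.sqrt(2))*y

udot = sp.diff(u, x)*xdot + sp.diff(u, y)*ydot
vdot = sp.diff(v, x)*xdot + sp.diff(v, y)*ydot

# (61) reads: udot = v, vdot = 2(1 - u^2) v - u
print(sp.simplify(udot - v))                     # expected: 0
print(sp.simplify(vdot - (2*(1 - u**2)*v - u)))  # expected: 0
```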
The dual infinite variational system \((\mathrm{VE}_{\Gamma})^{\star}\) along particular solution
\[\Gamma=\left\{(x\left(t\right),0):t\in\mathbb{C}\right\},\qquad\mbox{where} \quad\dot{x}\left(t\right)=-1, \tag{64}\]
has the following very simple matrices:
\[A_{1} = \left(\begin{array}{cc}0&0\\ 0&1\end{array}\right),\] \[A_{2} = \left(\begin{array}{cc}0&0&-2(x(t)-1)x(t)^{3}\\ 0&0&0\end{array}\right),\] \[A_{3} = \left(\begin{array}{cc}0&0&2(3-4x(t))x(t)^{2}&0\\ 0&0&0&6(x(t)-1)x(t)^{2}\end{array}\right),\] \[A_{4} = \left(\begin{array}{cc}0&0&12(1-2x(t))x(t)&0&0\\ 0&0&0&6x(t)(3x(t)-2)&0\end{array}\right),\] \[A_{5} = \left(\begin{array}{cc}0&0&12-48x(t)&0&0&0\\ 0&0&0&36x(t)-12&0&0\end{array}\right),\] \[A_{6} = \left(\begin{array}{cccc}0&0&-48&0&0&0\\ 0&0&0&36&0&0\end{array}\right),\] \[A_{k} = 0_{2\times d_{2,k}},\quad k\geq 7.\]
The fact that the ODE in (64) is so trivial to solve \(\left(x\left(t\right)=-t+C\right)\) adds no simplicity to what follows below. The fundamental matrix of \(\mathrm{VE}_{1}\) and the first filter matrix are equally simple:
\[Y_{1}=\left(\begin{array}{cc}1&0\\ 0&e^{x\left(0\right)-x\left(t\right)}\end{array}\right),\qquad F_{1}=\left( \begin{array}{cc}0&0\\ 0&1\end{array}\right)\]
and the generic form of the admissible solutions after applying Theorem 2.2.2 is the following: \(f_{k}=\left(\frac{d}{dx}f_{k-1}\mid g_{k}\left(x\right)\right)\) where \(g_{k}\) is a solution to
\[-(k-1)k(x-1)x^{3}g_{k-2}^{\prime}(x)-g_{k}^{\prime}(x)+(k-2)(k-1)k(x-1)x^{2}g_{ k-2}(x)+kg_{k}(x)=0. \tag{65}\]
Moreover, \(g_{k}\left(x\right)=0\) if \(k\) is even, and \(g_{1}\left(x\right)=e^{x}\). Equation (65) has a solution (which of course only merits being written out if \(k\) is odd)
\[g_{k}\left(x\right)=e^{kx}\int(k-1)k(\xi-1)\xi^{2}e^{-k\xi}\left((k-2)g_{k-2} \left(\xi\right)-\xi g_{k-2}^{\prime}\left(\xi\right)\right)d\xi.\]
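A short SymPy sketch (ours; the base point \(0\) of the quadrature is an arbitrary choice, harmless because the homogeneous solution \(Ce^{kx}\) of (65) is absorbed) builds \(g_{3},g_{5}\) from this quadrature and verifies (65):

```python
import sympy as sp

x, xi = sp.symbols('x xi')

g = {1: sp.exp(x)}
for k in (3, 5):
    prev = g[k - 2].subs(x, xi)
    integrand = ((k - 1)*k*(xi - 1)*xi**2*sp.exp(-k*xi)
                 * ((k - 2)*prev - xi*sp.diff(prev, xi)))
    g[k] = sp.exp(k*x)*sp.integrate(integrand, (xi, 0, x))

# each odd g_k should solve (65), with g_{k-2} as input and even g's = 0
for k in (3, 5):
    lhs = (-(k - 1)*k*(x - 1)*x**3*sp.diff(g[k - 2], x) - sp.diff(g[k], x)
           + (k - 2)*(k - 1)*k*(x - 1)*x**2*g[k - 2] + k*g[k])
    print(k, sp.simplify(sp.expand(lhs)))   # expected: 3 0 and 5 0
```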
The formal first integral of (63) takes the form
\[F=\sum_{k=1}^{\infty}\frac{1}{k!}g_{k}\left(x\right)y^{k}=\sum_{i=0}^{\infty} \left(-1\right)^{i}e^{\left(2i+1\right)x}y^{2i+1}G_{2i+1};\]
the first terms of \(G_{\star}\) are easy to discern,
\[G_{1} = 1,\] \[G_{3} = \int_{x_{0}}^{x}e^{-2\xi}(\xi-1)^{2}\xi^{2}d\xi,\] \[G_{5} = \int_{x_{0}}^{x}\left(\xi-1\right)^{3}e^{-4\xi}\xi^{5}d\xi+3\int _{x_{0}}^{x}\left(\xi-1\right)^{2}e^{-2\xi}\xi^{2}\left(\int\left(\eta- 1\right)^{2}e^{-2\eta}\eta^{2}d\eta\right)d\xi\]
and the general term is
\[G_{2i+1}=\sum_{m=1}^{i}\sum_{j=1}^{\binom{i-1}{m-1}}D_{i-1,m-1,j}\Phi\left( \left(i-m\right)\mathbf{C}_{m,i-m-1,j}+\mathbf{1}_{m},2\left(i-m\right) \mathbf{C}_{m,i-m-1,j}+\mathbf{1}_{m},3\left(i-m\right)\mathbf{C}_{m,i-m-1,j }+\mathbf{1}_{m}\right)\]
where we define the following constant vectors: \(\mathbf{1}_{n}:=\left(1,\ldots,1\right)\in\mathbb{Z}^{n}\), and \(C_{\star,\star,\star}\) are the columns of the symmetric product
\[\mathrm{Id}_{k}\odot\mathbf{1}_{d_{k,l}}^{T}=\left(C_{k,l,1}\mid C_{k,l,2} \mid\cdots\mid C_{k,l,d_{k,l+1}}\right)\in\mathrm{Mat}_{k,d_{k,l+1}}\left( \mathbb{Z}\right);\]
(obviously \(C_{k,l,j}=0\) for \(l<0\)), \(\Phi\) is defined recursively as follows: for every \(\mathbf{a},\mathbf{b},\mathbf{c}\in\mathbb{Z}^{m}\),
\[\Phi\left(\mathbf{a},\mathbf{b},\mathbf{c}\right):=\left\{\begin{array}{ll} \int_{x_{0}}^{x}\left(\tau-1\right)^{a_{1}+1}e^{-\left(b_{1}+1\right)\tau} \tau^{c_{1}}d\tau,&m=1,\\ \int_{x_{0}}^{x}\left(\tau-1\right)^{a_{1}+1}e^{-\left(b_{1}+1\right)\tau} \tau^{c_{1}}\Phi\left(\left(a_{2},\ldots,a_{m}\right),\left(b_{2},\ldots,b_{m} \right),\left(c_{2},\ldots,c_{m}\right)\right)d\tau,&m>1\end{array}\right.\]
and \(D_{n,m,j}\) is the \(j^{\mathrm{th}}\) term in the vector \(\mathbf{D}_{n,m}\) obtained from discarding terms containing repeating factors from the reverse-order version of \(\mathbf{i}_{n}^{T}\odot\cdots\odot\mathbf{i}_{n}^{T}\) and \(\mathbf{i}_{n}\) is the reverse-order sequence of the first \(n\) odd integers \(>1\), e.g.
\[\mathbf{D}_{3,2}=\left(3\cdot 5,\ 3\cdot 7,\ 5\cdot 7\right)=\left(D_{3,2,j} \right)_{j=1,2,3},\quad\mathbf{D}_{4,3}=\left(3\cdot 5\cdot 7,\ 3\cdot 5\cdot 9,\ 3\cdot 7 \cdot 9,\ 5\cdot 7\cdot 9\right)\]
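For concreteness, a small Python sketch (our illustrative reconstruction of the definition of \(\mathbf{D}_{n,m}\); the ordering shown matches the two examples above) reproduces these vectors:

```python
from itertools import combinations
from math import prod

def D(n, m):
    # products of m distinct entries taken from the first n odd integers > 1
    odds = [2*j + 3 for j in range(n)]          # 3, 5, 7, ...
    return [prod(c) for c in combinations(odds, m)]

print(D(3, 2))  # [15, 21, 35]          = (3*5, 3*7, 5*7)
print(D(4, 3))  # [105, 135, 189, 315]  = (3*5*7, 3*5*9, 3*7*9, 5*7*9)
```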
## 4 Further comments
Ever since it first took form in [3], the use of linearized higher variational equations to characterize, obstruct or redefine integrability is proving itself to be a welcome addition to the domain of dynamical systems. This contribution arrives at a time in which other attempts (e.g. [24, 25]) at finding methods to _integrate_ dynamical systems rather than proving them non-integrable are gradually gaining traction.
A number of facts are germane to further studies:
1. the potential shown by higher variational equations in finding adequate changes of variables that will simplify the system into tractability and, potentially, integrability (this is best exemplified in the sample systems tackled in §3);
2. linked to the above, the possible application of a _Baker–Campbell–Hausdorff_-type formula [12] upon variable transformations, using the decomposition of a transformed fundamental matrix into a product of matrices to our advantage;
3. the possible amenability of results in SS2 above to machine learning [26];
4. the applicability of this automatic quadrature-algebraic method to the application of generalized, multivariate _Padé approximants_[9, 13] to first integrals, which will be tackled in the immediate future;
5. the fact that the autonomous system (1) need not be Hamiltonian, thus allowing any arbitrary system to be considered and losing the serious constraint posed by symplectic transformations, since variable transformations can now be chosen freely;
6. also related to (v): the fact that this overrides the obstacles posed by the direct study of variational systems, which has only been implemented for Hamiltonian systems and requires first integrals to be meromorphic.
|
2309.15544 | On Structures in Arrow Categories | In this article we investigate which categorical structures of a category C
are inherited by its arrow category. In particular, we show that a monoidal
equivalence between two categories gives rise to a monoidal equivalence between
their arrow categories. Moreover, we examine under which circumstances an arrow
category is rigid and pivotal. Finally, we derive what the (co)algebra,
bialgebra and Hopf algebra objects are in an arrow category. | Paulina L. A. Goedicke, Jamie Vicary | 2023-09-27T10:04:17Z | http://arxiv.org/abs/2309.15544v1 | # On Structures in Arrow Categories
###### Abstract
In this article we investigate which categorical structures of a category \(\mathcal{C}\) are inherited by its arrow category. In particular, we show that a monoidal equivalence between two categories gives rise to a monoidal equivalence between their arrow categories. Moreover, we examine under which circumstances an arrow category is rigid and pivotal. Finally, we derive what the (co)algebra, bialgebra and Hopf algebra objects are in an arrow category.
###### Contents
* 1 Introduction
* 2 Structures in Arrow Categories
* 2.1 Functors in Arrow Categories
* 2.2 Monoidal Products and Braidings in Arrow Categories
* 2.3 Duals and Pivots in Arrow Categories
* 2.4 (Co)monoids, Bialgebras, Frobenius Structures and Hopf Algebras in Arrow Categories
* A Proofs
## 1 Introduction
Given a category \(\mathcal{C}\), the category of arrows of \(\mathcal{C}\) is a very fundamental concept in category theory [3]. This article is a revision of a chapter from the first
author's master's thesis1, investigating which categorical structures an arrow category inherits from its underlying category \(\mathcal{C}\). We start by discussing functors and natural transformations in Section 2.1 and show that an equivalence between two categories gives rise to an equivalence between their arrow categories. In Section 2.2 we then extend this to braided monoidal categories and functors. In Section 2.3 we prove that the arrow category of a rigid monoidal category restricted to objects that are isomorphisms is also rigid. We further show that in that case, if the underlying category is in addition pivotal, its arrow category is also pivotal. In fact, it is a ribbon category. In Section 2.4 we then discuss (co)monoids, bialgebras, Frobenius structures and Hopf algebras in arrow categories.
Footnote 1: The thesis was submitted in October 2021 at the Department of Mathematics at University of Hamburg.
There have already been definitions of monoidal products in arrow categories [4], and the rest of our results also seem to be quite fundamental; however, we still believe them to have some novelty.
We assume familiarity with the basic concepts of category theory, the graphical calculus and quantum groups and refer to [5], [3] and [2] for an introduction. Here we will only briefly review the definition of an arrow category:
**Definition 1.1**.: ([3], p. 24-25) Let \(\mathcal{C}\) be a category. The _arrow category of \(\mathcal{C}\)_ Arr\((\mathcal{C})\) is defined as follows:
* _objects_ are triples \((A,\,B,\,h)\) where \(A\), \(B\in\operatorname{obj}(\mathcal{C})\) and \(h:A\longrightarrow B\).
* _morphisms_\(\phi:(A,B,h)\longrightarrow(A^{\prime},B^{\prime},h^{\prime})\) are pairs \((\phi_{A},\,\phi_{B})\) of morphims \(\phi_{A}:A\longrightarrow A^{\prime}\) and \(\phi_{B}:B\longrightarrow B^{\prime}\) in \(\mathcal{C}\) such that the following diagram commutes:
**Example 1.2**.: The arrow category of \(\operatorname{\mathbf{Mat}}(\mathbb{N})\) has \(b\times v\)-matrices \(\chi\) as objects, where \(b\) and \(v\) are natural numbers. A morphism \(S:M\longrightarrow N\) between two matrices \(M:v\longrightarrow b\) and \(N:v^{\prime}\longrightarrow b^{\prime}\) is given by a pair of matrices \((S_{b},S_{v}):(v,b)\longrightarrow(v^{\prime},b^{\prime})\) such that the following diagram commutes:
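Since the commutativity of this square is just the matrix identity \(S_{b}M=NS_{v}\) (our reading of the diagram), morphisms in \(\text{Arr}(\mathbf{Mat}(\mathbb{N}))\) and their componentwise composition can be checked numerically; the NumPy sketch below (ours, with illustrative data built so that the squares commute by construction) does exactly this:

```python
import numpy as np

def is_morphism(M, N, S_v, S_b):
    # (S_v, S_b) is a morphism M -> N iff the square commutes: S_b.M == N.S_v
    return np.array_equal(S_b @ M, N @ S_v)

# a commuting square by construction: S_b is a self-inverse permutation,
# so M := S_b.N.S_v automatically satisfies S_b.M == N.S_v
S_b = np.array([[0, 1], [1, 0]])
N   = np.array([[1, 2], [0, 1]])
S_v = np.array([[1, 1], [2, 0]])
M   = S_b @ N @ S_v

print(is_morphism(M, N, S_v, S_b))                # True

# composition is componentwise: (T_v, T_b) o (S_v, S_b) = (T_v.S_v, T_b.S_b)
T_v = np.eye(2, dtype=int)
T_b = np.array([[1, 1], [0, 1]])
P   = T_b @ N                                     # second square commutes
print(is_morphism(N, P, T_v, T_b))                # True
print(is_morphism(M, P, T_v @ S_v, T_b @ S_b))    # True
```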
## 2 Structures in Arrow Categories
We will now discuss what kind of categorical structures one can define in arrow categories.
### Functors in Arrow Categories
In the following we will show that a functor between two categories gives rise to a functor between their arrow categories and prove a similar statement for natural transformations.
**Proposition 2.1**.: _Let \(\mathcal{C}\) and \(\mathcal{D}\) be two categories and let \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) be a covariant functor between those categories. Then \(\mathcal{F}\) gives rise to a covariant functor \(\tilde{\mathcal{F}}\) between \(\text{Arr}(\mathcal{C})\) and \(\text{Arr}(\mathcal{D})\)._
Proof.: Given a functor \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) that assigns to every object \(A\) in \(\mathcal{C}\) an object \(\mathcal{F}(A)\) in \(\mathcal{D}\) and to every morphism \(f:A\longrightarrow B\) in \(\mathcal{C}\) a morphism \(\mathcal{F}(f):\mathcal{F}(A)\longrightarrow\mathcal{F}(B)\) in \(\mathcal{D}\), we can define a functor \(\tilde{\mathcal{F}}\) that maps every object \(f:A\longrightarrow B\) in \(\text{Arr}(\mathcal{C})\) to an object \(\tilde{\mathcal{F}}(f)=\mathcal{F}(f):\mathcal{F}(A)\longrightarrow\mathcal{ F}(B)\) in \(\text{Arr}(\mathcal{D})\) and every morphism \((\phi,\psi):f\longrightarrow f^{\prime}\) in \(\text{Arr}(\mathcal{C})\) to a morphism \(\tilde{\mathcal{F}}(\phi,\psi)=(\mathcal{F}(\phi),\mathcal{F}(\psi)):\mathcal{ F}(f)\longrightarrow\mathcal{F}(f^{\prime})\) in \(\text{Arr}(\mathcal{D})\). This is valid because the diagram
commutes due to functoriality of \(\mathcal{F}\). It is easy to show that \(\tilde{\mathcal{F}}\) preserves composition and the identity morphism in \(\text{Arr}(\mathcal{C})\).
Similarly, one can prove the following statement:
**Proposition 2.2**.: _A contravariant functor \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) gives rise to a contravariant functor \(\tilde{\mathcal{F}}:\text{Arr}(\mathcal{C})\longrightarrow\text{Arr}( \mathcal{D})\)._
**Example 2.3**.: The contravariant functor \(\mathcal{T}:\mathbf{Mat}(\mathbb{N})\longrightarrow\mathbf{Mat}(\mathbb{N})\) which maps each set \(v\) to itself and each matrix \(M:v\longrightarrow b\) to its transpose \(M^{T}:b\longrightarrow v\) gives rise to a contravariant functor \(\tilde{\mathcal{T}}:\text{Arr}(\mathbf{Mat}(\mathbb{N}))\longrightarrow\text{Arr}(\mathbf{Mat}(\mathbb{N}))\) which maps each matrix \(M:v\longrightarrow b\) to its transpose \(M^{T}:b\longrightarrow v\) and each morphism \((S_{v},S_{b}):M\longrightarrow N\) to its transpose \((S_{v}^{T},S_{b}^{T}):N^{T}\longrightarrow M^{T}\) such that the following diagram commutes:
**Proposition 2.4**.: _Let \(\mathcal{C}\) and \(\mathcal{D}\) be two categories and let \(\mathcal{F},\mathcal{G}:\mathcal{C}\longrightarrow\mathcal{D}\) be two covariant functors between those categories that induce the functors \(\tilde{\mathcal{F}},\tilde{\mathcal{G}}:\operatorname{Arr}(\mathcal{C}) \longrightarrow\operatorname{Arr}(\mathcal{D})\). Consider a natural transformation \(\eta:\mathcal{F}\longrightarrow\mathcal{G}\) between \(\mathcal{F}\) and \(\mathcal{G}\). Then \(\eta\) induces a natural transformation \(\tilde{\eta}:\tilde{\mathcal{F}}\Longrightarrow\tilde{\mathcal{G}}\)._
Proof.: Let \(\eta:\mathcal{F}\Longrightarrow\mathcal{G}\) be a natural transformation that assigns to every object \(A\) in \(\mathcal{C}\) a morphism \(\eta_{A}:\mathcal{F}(A)\longrightarrow\mathcal{G}(A)\) such that for any morphism \(f:A\longrightarrow B\) in \(\mathcal{C}\) the following diagram (naturality condition) commutes:
Using this, one can assign to every object \(f:A\longrightarrow B\) in \(\operatorname{Arr}(\mathcal{C})\) a morphism \(\tilde{\eta}_{f}=(\eta_{A},\eta_{B}):\tilde{\mathcal{F}}(f)\longrightarrow \tilde{\mathcal{G}}(f)\), such that for any morphism
\((\phi,\psi):f\longrightarrow f^{\prime}\) in \(\operatorname{Arr}(\mathcal{C})\), where \(f^{\prime}:A^{\prime}\longrightarrow B^{\prime}\), the following diagram (naturality condition in the arrow category) commutes:
Here the top, the back, the front and the bottom face commute due to naturality of \(\eta\) and the two side faces commute by definition. Hence the whole diagram commutes and we have defined a natural transformation \(\tilde{\eta}:\tilde{\mathcal{F}}\Longrightarrow\tilde{\mathcal{G}}\).
It is straightforward to verify that the following has to hold:
**Proposition 2.5**.: _If \(\eta:\mathcal{F}\Longrightarrow\mathcal{G}\) is a natural isomorphism, so is \(\tilde{\eta}:\tilde{\mathcal{F}}\Longrightarrow\tilde{\mathcal{G}}\)._
**Theorem 2.6**.: _Let \(\mathcal{C}\) and \(\mathcal{D}\) be two equivalent categories, i. e. there exist a pair of functors \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) and \(\mathcal{G}:\mathcal{D}\longrightarrow\mathcal{C}\) and natural isomorphisms \(\mathcal{F}\circ\mathcal{G}\cong\mathrm{id}_{\mathcal{D}}\) and \(\mathcal{G}\circ\mathcal{F}\cong\mathrm{id}_{\mathcal{C}}\). Then Arr(\(\mathcal{C}\)) and Arr(\(\mathcal{D}\)) are also equivalent._
Proof.: By Proposition 2.1 the functors \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) and \(\mathcal{G}:\mathcal{D}\longrightarrow\mathcal{C}\) give rise to functors \(\tilde{\mathcal{F}}:\mathrm{Arr}(\mathcal{C})\longrightarrow\mathrm{Arr}( \mathcal{D})\) and \(\tilde{\mathcal{G}}:\mathrm{Arr}(\mathcal{D})\longrightarrow\mathrm{Arr}( \mathcal{C})\). From Proposition 2.5 we know that the natural isomorphisms \(\mathcal{F}\circ\mathcal{G}\cong\mathrm{id}_{\mathcal{D}}\) and \(\mathcal{G}\circ\mathcal{F}\cong\mathrm{id}_{\mathcal{C}}\) give rise to natural isomorphisms \(\tilde{\mathcal{F}}\circ\tilde{\mathcal{G}}\cong\mathrm{id}_{\mathrm{Arr}( \mathcal{D})}\) and \(\tilde{\mathcal{G}}\circ\tilde{\mathcal{F}}\cong\mathrm{id}_{\mathrm{Arr}( \mathcal{C})}\). Hence we have an equivalence.
With that, we can define a dagger structure on \(\mathrm{Arr}(\mathcal{C})\), using the dagger structure in \(\mathcal{C}\):
**Proposition 2.7**.: _Let \(\mathcal{C}\) be a dagger category and let \(\mathcal{C}_{uni}\) be the subcategory where all morphisms are unitary, i. e. all \(f\in\mathrm{Hom}_{\mathcal{C}}\) are invertible with \(f^{-1}=f^{\dagger}\). Then \(\mathrm{Arr}(\mathcal{C}_{uni})\) is a dagger category._
Proof.: Let \(\mathcal{C}\) be a dagger category, i. e. there exists an involutive contravariant functor \(\dagger:\mathcal{C}\longrightarrow\mathcal{C}\) such that
\[(g\circ f)^{\dagger}=f^{\dagger}\circ g^{\dagger}\] \[\mathrm{id}^{\dagger}=\mathrm{id}\] \[(f^{\dagger})^{\dagger}=f.\]
Restricting to the subcategory where all morphisms are unitary, i. e. we have for each morphism \(\psi:A\longrightarrow A^{\prime}\) in \(\mathcal{C}\):
\[\psi\circ\psi^{\dagger}=\mathrm{id}_{A^{\prime}}\quad\text{and} \tag{1}\] \[\psi^{\dagger}\circ\psi=\mathrm{id}_{A} \tag{2}\]
and taking the arrow category of \(\mathcal{C}\), we can construct a functor \(\mathrm{Arr}^{\dagger}:\mathrm{Arr}(\mathcal{C}_{uni})\longrightarrow\mathrm{ Arr}(\mathcal{C}_{uni})\) which sends each object \(f:A\longrightarrow B\) in \(\mathrm{Arr}(\mathcal{C}_{uni})\) to itself and each morphism
to its adjoint \((\psi,\psi^{\prime})^{\dagger}=(\psi^{\dagger},\psi^{\prime\dagger})\):
The above diagram commutes because we have \(\psi^{\prime}\circ f=g\circ\psi\) per definition and \(\psi\circ\psi^{\dagger}=\operatorname{id}_{B^{\prime}}\). It is easy to verify that this construction fulfils the requirements for a dagger functor.
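For a concrete instance in \(\text{Arr}(\mathbf{Mat}(\mathbb{N}))\) with transpose as dagger (where the unitaries are exactly the permutation matrices, since a matrix with natural-number entries satisfying \(AA^{T}=\mathrm{id}\) must have exactly one \(1\) per row), the following NumPy sketch (ours) checks that the transposed pair is again a morphism, in the opposite direction:

```python
import numpy as np

psi_A = np.array([[0, 1], [1, 0]])   # unitary (permutation) on the source side
psi_B = np.array([[0, 1], [1, 0]])   # unitary (permutation) on the target side
f     = np.array([[1, 2], [3, 4]])   # object f : A -> B
g     = psi_B @ f @ psi_A.T          # choose g so that psi_B.f == g.psi_A

assert np.array_equal(psi_B @ f, g @ psi_A)       # (psi_A, psi_B) : f -> g
assert np.array_equal(psi_B.T @ g, f @ psi_A.T)   # (psi_A^T, psi_B^T) : g -> f
print("the dagger of a morphism is again a morphism, in the opposite direction")
```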
### Monoidal Products and Braidings in Arrow Categories
In this section we will define a braiding in arrow categories and demonstrate that a monoidal functor between two categories induces a monoidal functor between their arrow categories. Moreover, we will show that, given a symmetric monoidal category \(\mathcal{C}\), its arrow category is also symmetric.
The following statement can be found in a similar manner in [4]. We will still give its proof.
**Proposition 2.8**.: _If \(\mathcal{C}\) is a monoidal category, we can use the monoidal product in \(\mathcal{C}\) to define a pointwise monoidal product in \(\operatorname{\mathit{Arr}}(\mathcal{C})\)._
Proof.: Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}},\lambda_{\mathcal{ C}},\rho_{\mathcal{C}})\) be a monoidal category and \(\operatorname{\mathrm{Arr}}(\mathcal{C})\) its arrow category. A monoidal structure in \(\operatorname{\mathrm{Arr}}(\mathcal{C})\) can be defined as follows:
* On objects we have \[(f:A_{1}\longrightarrow B_{1})\otimes_{\operatorname{\mathrm{Arr}}(\mathcal{C })}(g:A_{2}\longrightarrow B_{2}):=f\otimes_{\mathcal{C}}g:A_{1}\otimes_{ \mathcal{C}}A_{2}\longrightarrow B_{1}\otimes_{\mathcal{C}}B_{2}\] where \(f:A_{1}\longrightarrow B_{1}\) and \(g:A_{2}\longrightarrow B_{2}\) are objects in \(\operatorname{\mathrm{Arr}}(\mathcal{C})\).
* On morphisms \((\phi_{A_{1}},\phi_{B_{1}})\) and \((\phi_{A_{2}},\phi_{B_{2}})\), where \(\phi_{A_{i}}:A_{i}\longrightarrow A_{i}^{\prime}\) and \(\phi_{B_{i}}:B_{i}\longrightarrow B_{i}^{\prime}\) for \(i=1,2\), we have \[(\phi_{A_{1}},\phi_{B_{1}})\otimes_{\operatorname{Arr}(\mathcal{C})}( \phi_{A_{2}},\phi_{B_{2}}):=(\phi_{A_{1}}\otimes_{\mathcal{C}}\phi_{A_{2}},\phi _{B_{1}}\otimes_{\mathcal{C}}\phi_{B_{2}})\] (3) such that the following diagram commutes: \[\begin{CD}A_{1}\otimes_{\mathcal{C}}A_{2}@>{f\otimes_{\mathcal{C}}g}>{}>B_{1} \otimes_{\mathcal{C}}B_{2}\\ @V{\phi_{A_{1}}\otimes_{\mathcal{C}}\phi_{A_{2}}}V{}V@V{}V{\phi_{B_{1}} \otimes_{\mathcal{C}}\phi_{B_{2}}}V\\ A_{1}^{\prime}\otimes_{\mathcal{C}}A_{2}^{\prime}@>{f^{\prime}\otimes_{ \mathcal{C}}g^{\prime}}>{}>B_{1}^{\prime}\otimes_{\mathcal{C}}B_{2}^{\prime} \end{CD}\]
* The unit object is given by the identity morphism on the monoidal unit in \(\mathcal{C}\): \[\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}:\mathbb{I}_{\mathcal{C}} \longrightarrow\mathbb{I}_{\mathcal{C}}.\] (4)
* Since left- and right-unitors are natural isomorphisms, one can use Proposition 2.5 to define \[\lambda_{f}:\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}\otimes_{\mathcal{C}}f \longrightarrow f\] (5) and \[\rho_{f}:f\otimes_{\mathcal{C}}\operatorname{id}_{\mathbb{I}_{\mathcal{C}}} \longrightarrow f\] (6) where \(f:A\longrightarrow B\), i. e. the following diagrams commute: \[\begin{CD}A\otimes_{\mathcal{C}}\mathbb{I}_{\mathcal{C}}@>{\rho_{A}}>{}>A\\ @V{f\otimes_{\mathcal{C}}\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}}V{}V@V{f}V{f}V\\ B\otimes_{\mathcal{C}}\mathbb{I}_{\mathcal{C}}@>{\rho_{B}}>{}>B\end{CD}\] \[\begin{CD}\mathbb{I}_{\mathcal{C}}\otimes_{\mathcal{C}}A@>{\lambda_{A}}>{}>A\\ @V{\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}\otimes_{\mathcal{C}}f}V{}V@V{f}V{f}V \\ \mathbb{I}_{\mathcal{C}}\otimes_{\mathcal{C}}B@>{\lambda_{B}}>{}>B\end{CD}\]
* Finally, the associator is a natural isomorphism \[\alpha:(f_{1}\otimes_{\mathcal{C}}f_{2})\otimes_{\mathcal{C}}f_{3} \longrightarrow f_{1}\otimes_{\mathcal{C}}(f_{2}\otimes_{\mathcal{C}}f_{3})\] (7) where \(f_{i}:A_{i}\longrightarrow B_{i}\) for \(i=1,2,3\), such that the diagram commutes.
We still have to show that the pentagon and the triangle axiom are satisfied. The proof for this can be found in the appendix.
**Example 2.9**.: The monoidal product in \(\operatorname{Arr}(\mathbf{Mat}(\mathbb{N}))\) is defined on objects \(M:v_{1}\longrightarrow b_{1}\), \(N:v_{2}\longrightarrow b_{2}\) via the Kronecker product of matrices: \(M\otimes N:v_{1}\cdot v_{2}\to b_{1}\cdot b_{2}\). On morphisms the monoidal product is defined as follows: \((S_{v_{1}},S_{b_{1}})\otimes(S_{v_{2}},S_{b_{2}})=(S_{v_{1}}\otimes S_{v_{2}}, S_{b_{1}}\otimes S_{b_{2}})\), i. e. we have the commutative diagram:
\[\begin{CD}v_{1}\cdot v_{2}@>{S_{v_{1}}\otimes S_{v_{2}}}>{}>v_{1}^{\prime} \cdot v_{2}^{\prime}\\ @V{M\otimes N}V{}V@V{}V{M^{\prime}\otimes N^{\prime}}V\\ b_{1}\cdot b_{2}@>{S_{b_{1}}\otimes S_{b_{2}}}>{}>b_{1}^{\prime}\cdot b_{2}^{ \prime}\end{CD}\]
The tensor unit is given by the \(1\times 1\)-matrix:
\[\mathbb{I}:1\longrightarrow 1. \tag{8}\]
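Concretely, commutativity of the above square reduces to the mixed-product property of the Kronecker product, \((A\otimes B)(C\otimes D)=(AC)\otimes(BD)\). The following minimal Python sketch (the random matrices are illustrative choices, not taken from the text) checks this property numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrices over the natural numbers with composable shapes:
# A : 2 -> 3, C : 5 -> 2 (so A @ C : 5 -> 3), and similarly B : 2 -> 4, D : 6 -> 2.
A = rng.integers(0, 5, size=(3, 2))
C = rng.integers(0, 5, size=(2, 5))
B = rng.integers(0, 5, size=(4, 2))
D = rng.integers(0, 5, size=(2, 6))

# Mixed-product property: (A (x) B) @ (C (x) D) == (A @ C) (x) (B @ D).
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.array_equal(lhs, rhs)
print("mixed-product property holds, so the square in Example 2.9 commutes")
```

Indeed, if \(M^{\prime}S_{v_{1}}=S_{b_{1}}M\) and \(N^{\prime}S_{v_{2}}=S_{b_{2}}N\), the property gives \((M^{\prime}\otimes N^{\prime})(S_{v_{1}}\otimes S_{v_{2}})=(S_{b_{1}}\otimes S_{b_{2}})(M\otimes N)\).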
In a similar way one can define a braiding in \(\operatorname{Arr}(\mathcal{C})\) using the braiding in \(\mathcal{C}\).
**Proposition 2.10**.: _If \(\mathcal{C}\) is a braided monoidal category with braiding \(\sigma_{A,B}:A\otimes_{\mathcal{C}}B\longrightarrow B\otimes_{\mathcal{C}}A\), then \(\operatorname{Arr}(\mathcal{C})\) has a braiding given by \(\sigma_{f,g}=(\sigma_{A,C},\sigma_{B,D}):f\otimes_{\mathcal{C}}g \longrightarrow g\otimes_{\mathcal{C}}f\) for \(f:A\longrightarrow B\) and \(g:C\longrightarrow D\), i. e. we have the following commutative diagram:_
Proof.: We can use Prop. 2.5 to define a natural isomorphism \(\sigma_{f,g}=(\sigma_{A,C},\sigma_{B,D})\) on \(\operatorname{Arr}(\mathcal{C})\) using the natural isomorphisms \(\sigma_{A,C}\) and \(\sigma_{B,D}\). In order to define a braiding, \(\sigma_{f,g}=(\sigma_{A,C},\sigma_{B,D})\) has to satisfy the hexagon identities. Here we will only discuss one of them as the other one can be proved analogously. In the diagram below the top and the bottom face commute because \(\sigma_{A,C}\) resp. \(\sigma_{B,D}\) is a braiding in \(\mathcal{C}\). The left side face and the right square in the front commute, because the braiding in \(\mathcal{C}\) is natural and because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\). The left square in the front commutes because the associator is natural in \(\mathcal{C}\). Similarly, the back face commutes because of naturality of the associator in \(\mathcal{C}\). Finally, the right side face commutes due to naturality of the braiding in \(\mathcal{C}\) and hence the whole diagram
commutes.2
Footnote 2: We have again abbreviated the labels of the monoidal products and the associator in order to make the diagram more readable.
[Diagram: commutative cube for the hexagon identity in \(\operatorname{Arr}(\mathcal{C})\), with vertices of the form \((D\otimes A)\otimes X\) and \((A\otimes D)\otimes X\) and their rebracketings.]
The top and the bottom face commute because \(\mathcal{C}\) is symmetric and the two side faces commute as the braiding in \(\mathcal{C}\) is natural. The back face commutes due to the definition of the identity morphism in \(\operatorname{Arr}(\mathcal{C})\) and hence the whole diagram commutes which proves the assumption.
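In \(\operatorname{Arr}(\mathbf{Mat}(\mathbb{N}))\), for instance, the braiding \(\sigma_{m,n}\) is the commutation (vec-permutation) matrix that swaps the two Kronecker factors. A minimal sketch (the helper `commutation` and the random matrices are illustrative, not part of the text) verifying the naturality square of this braiding:

```python
import numpy as np

def commutation(m, n):
    # Permutation matrix K with K @ np.kron(x, y) == np.kron(y, x)
    # for x in R^m and y in R^n; this is the braiding sigma_{m,n} in Mat.
    K = np.zeros((m * n, m * n), dtype=int)
    for i in range(m):
        for j in range(n):
            K[j * m + i, i * n + j] = 1
    return K

rng = np.random.default_rng(1)
A = rng.integers(0, 5, size=(3, 2))   # A : 2 -> 3
B = rng.integers(0, 5, size=(4, 5))   # B : 5 -> 4

# Naturality of the braiding: sigma_{3,4} @ (A (x) B) == (B (x) A) @ sigma_{2,5},
# i.e. (sigma_{2,5}, sigma_{3,4}) is a morphism A (x) B -> B (x) A in Arr(Mat).
lhs = commutation(3, 4) @ np.kron(A, B)
rhs = np.kron(B, A) @ commutation(2, 5)
assert np.array_equal(lhs, rhs)
print("braiding naturality square commutes")
```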
As it is possible to define a monoidal product in arrow categories, one can ask if a monoidal functor between two categories gives rise to a monoidal functor between their arrow categories. This is indeed the case, as the following proposition shows:
**Proposition 2.13**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) and \((\mathcal{D},\otimes_{\mathcal{D}},\mathbb{I}_{\mathcal{D}})\) be two monoidal categories and \((\mathcal{F},\mathcal{F}_{2},\mathcal{F}_{0})\) be a monoidal functor between them. Then \((\mathcal{F},\mathcal{F}_{2},\mathcal{F}_{0})\) gives rise to a monoidal functor \((\tilde{\mathcal{F}},\tilde{\mathcal{F}}_{2},\tilde{\mathcal{F}}_{0})\) between \(\operatorname{\mathit{Arr}}(\mathcal{C})\) and \(\operatorname{\mathit{Arr}}(\mathcal{D})\)._
Proof.: We have already shown that a functor \(\mathcal{F}\) between two categories \(\mathcal{C}\) and \(\mathcal{D}\) induces a functor \(\tilde{\mathcal{F}}\) between their arrow categories and that a natural transformation \(\eta:\mathcal{F}\Longrightarrow\mathcal{G}\) gives rise to a natural transformation \(\tilde{\eta}:\tilde{\mathcal{F}}\Longrightarrow\tilde{\mathcal{G}}\). Hence we can use the natural transformations \(\mathcal{F}_{2}\) and \(\mathcal{F}_{0}\) to define natural transformations
\[\tilde{\mathcal{F}}_{2}=(\mathcal{F}_{2,A,C},\mathcal{F}_{2,B,D}):\tilde{ \mathcal{F}}(f)\otimes_{\mathcal{D}}\tilde{\mathcal{F}}(g)\longrightarrow \tilde{\mathcal{F}}(f\otimes_{\mathcal{C}}g) \tag{11}\]
for \(f:A\longrightarrow B\) and \(g:C\longrightarrow D\) and
\[\tilde{\mathcal{F}}_{0}=(\mathcal{F}_{0},\mathcal{F}_{0}):\operatorname{id}_{ \mathbb{I}_{\mathcal{D}}}\longrightarrow\tilde{\mathcal{F}}(\operatorname{id} _{\mathbb{I}_{\mathcal{C}}}). \tag{12}\]
In diagrams we have
and
It is left to show that if \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) is monoidal, \(\tilde{\mathcal{F}}:\operatorname{Arr}(\mathcal{C})\longrightarrow\operatorname {Arr}(\mathcal{D})\) preserves the monoidal product, i. e. \(\tilde{\mathcal{F}}_{2}\) and \(\tilde{\mathcal{F}}_{0}\) fulfil the associativity and the unitality conditions, respectively. The associativity condition for \(\tilde{\mathcal{F}}_{2}\) is given by the diagram on the next page. Here the top and the bottom face
commute because \(\mathcal{F}_{2}\) satisfies the associativity condition. The left face and the right square in the front commute because \(\mathcal{F}_{2}\) is natural and because of the monoidal product in \(\operatorname{Arr}(\mathcal{D})\). The left square in the front commutes because the associator in \(\mathcal{C}\) is natural. Moreover, the left and the right face in the back commute as \(\mathcal{F}_{2}\) is natural and the middle face in the back commutes due to naturality of the associator in \(\mathcal{C}\).3
Footnote 3: Again we have abbreviated the labels of the monoidal products, associators and natural transformations \(\mathcal{F}_{2}\) to make the diagram more readable.
The unitality condition is given by the following diagram:
The top and the bottom face commute because \(\mathcal{F}_{0}\) satisfies the unitality condition. The back face commutes as \(\mathcal{F}_{0}\) is natural and due to the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{D})\). Moreover, the left face commutes because of the definition of the unitor in \(\mathcal{C}\) and the right face commutes since \(\mathcal{F}_{2}\) is natural. Finally, the front face also commutes due to the definition of the unitor in \(\mathcal{C}\).
Similarly, a _braided_ monoidal functor \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) gives rise to a braided monoidal functor between \(\text{Arr}(\mathcal{C})\) and \(\text{Arr}(\mathcal{D})\):
**Proposition 2.14**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}},\sigma_{\mathcal{C}})\) and \((\mathcal{D},\otimes_{\mathcal{D}},\mathbb{I}_{\mathcal{D}},\sigma_{\mathcal{ D}})\) be two braided monoidal categories and \((\mathcal{F},\mathcal{F}_{2},\mathcal{F}_{0})\) be a braided monoidal functor between them. Then \((\mathcal{F},\mathcal{F}_{2},\mathcal{F}_{0})\) gives rise to a braided monoidal functor \((\tilde{\mathcal{F}},\tilde{\mathcal{F}}_{2},\tilde{\mathcal{F}}_{0})\) between \(\text{Arr}(\mathcal{C})\) and \(\text{Arr}(\mathcal{D})\)._
Proof.: We only need to show that the following diagram commutes:
The top and the bottom face commute because \(\mathcal{F}\) is a braided monoidal functor and the two side faces commute due to naturality of \(\mathcal{F}_{2}\). The front face commutes due to naturality of the braiding in \(\mathcal{C}\). Finally, the back face commutes per definition of the braiding and hence the whole diagram commutes.
**Proposition 2.15**.: _If the functor \(\mathcal{F}:\mathcal{C}\longrightarrow\mathcal{D}\) in the above construction is symmetric, i. e. if \(\mathcal{C}\) is symmetric, then so is \(\tilde{\mathcal{F}}:\text{Arr}(\mathcal{C})\longrightarrow\text{Arr}( \mathcal{D})\)._
It is straightforward to verify that the above statement is true.
**Example 2.16**.: Let \(\mathbb{K}\) be a field. An \(n\)-dimensional topological quantum field theory is given by a symmetric monoidal functor from the cobordism category to the category of vector spaces (see [1]):
\[\mathcal{Z}:\mathbf{Cob}_{n}\longrightarrow\mathbf{Vect}_{\mathbb{K}} \tag{13}\]
This functor assigns to each closed oriented \((n-1)\)-dimensional manifold \(M\) a \(\mathbb{K}\)-vector space \(\mathcal{Z}(M)\) and to each oriented bordism \(B\) from an \((n-1)\)-dimensional manifold \(M\) to another \((n-1)\)-dimensional manifold \(N\) a \(\mathbb{K}\)-linear map \(\mathcal{Z}(B):\mathcal{Z}(M)\longrightarrow\mathcal{Z}(N)\).
Applying Prop. 2.15, we can construct a symmetric monoidal functor
\[\tilde{\mathcal{Z}}:\operatorname{Arr}(\mathbf{Cob}_{n})\longrightarrow \operatorname{Arr}(\mathbf{Vect}_{\mathbb{K}}) \tag{14}\]
which assigns to each oriented bordism \(B\) from an \((n-1)\)-dimensional manifold \(M\) to another \((n-1)\)-dimensional manifold \(N\) a \(\mathbb{K}\)-linear map \(\mathcal{Z}(B):\mathcal{Z}(M)\longrightarrow\mathcal{Z}(N)\) and to each morphism \((\beta_{M},\beta_{N}):B\longrightarrow B^{\prime}\)
where \(\beta_{M}:M\longrightarrow M^{\prime}\) and \(\beta_{N}:N\longrightarrow N^{\prime}\) are bordisms, a morphism \((\mathcal{Z}(\beta_{M}),\mathcal{Z}(\beta_{N})):\mathcal{Z}(B)\longrightarrow \mathcal{Z}(B^{\prime})\)
where \(\mathcal{Z}(\beta_{M}):\mathcal{Z}(M)\longrightarrow\mathcal{Z}(M^{\prime})\) and \(\mathcal{Z}(\beta_{N}):\mathcal{Z}(N)\longrightarrow\mathcal{Z}(N^{\prime})\) are \(\mathbb{K}\)-linear maps.
**Proposition 2.17**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) and \((\mathcal{D},\otimes_{\mathcal{D}},\mathbb{I}_{\mathcal{D}})\) be two monoidal categories and \((\mathcal{F},\mathcal{F}_{2},\mathcal{F}_{0})\) and \((\mathcal{G},\mathcal{G}_{2},\mathcal{G}_{0})\) be two monoidal functors between those categories. A monoidal natural transformation \(\eta:\mathcal{F}\Longrightarrow\mathcal{G}\) gives rise to a monoidal natural transformation \(\tilde{\eta}:\tilde{\mathcal{F}}\Longrightarrow\tilde{\mathcal{G}}\), where \(\tilde{\mathcal{F}}\) and \(\tilde{\mathcal{G}}\) are the functors defined on the arrow categories._
Proof.: We only need to show that \(\tilde{\eta}\) preserves the monoidal product and the monoidal unit in \(\operatorname{Arr}(\mathcal{C})\), if \(\eta\) preserves the monoidal product and the
monoidal unit in \(\mathcal{C}\). For this, the following diagram has to commute:
The top and the bottom face commute because \(\eta\) is a monoidal natural transformation and the two side faces commute due to naturality of \(\mathcal{F}_{2}\) resp. \(\mathcal{G}_{2}\). Finally, the front and the back face commute due to naturality of \(\eta\) and thus the whole diagram commutes. Moreover, the following diagram has to commute as well:
Here the top and the bottom face commute because \(\eta\) is a monoidal natural transformation and the two side faces commute due to the naturality of \(\mathcal{F}_{0}\) resp. \(\mathcal{G}_{0}\). The back face also commutes as \(\eta\) is natural and thus the whole diagram commutes.
### Duals and Pivots in Arrow Categories
In this section we will investigate under which circumstances an arrow category is rigid. As it turns out, we can, similarly to the constructions made earlier, use the structure of \(\mathcal{C}\) to define an evaluation and a coevaluation map on the subcategory of \(\operatorname{Arr}(\mathcal{C})\) where all objects are isomorphisms, i.e. where
all objects are morphisms in the _core_ of \(\mathcal{C}\). Furthermore, we will show that a pivot in a pivotal category \(\mathcal{C}\) induces a pivot in its arrow category and that a twist in a ribbon category gives rise to a twist in its arrow category.
**Theorem 2.18**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a rigid monoidal category. An object \(f:A\longrightarrow B\) in Arr(\(\mathcal{C}\)) has a dual if and only if it is an isomorphism in \(\mathcal{C}\)._
Proof.: Let \(d_{A}:A^{*}\otimes_{\mathcal{C}}A\longrightarrow\mathbb{I}\) and \(b_{A}:\mathbb{I}\longrightarrow A\otimes_{\mathcal{C}}A^{*}\), where \(A,A^{*}\in\operatorname{obj}(\mathcal{C})\), denote the evaluation and coevaluation maps in \(\mathcal{C}\), respectively. For a morphism \(f:A\longrightarrow B\in\operatorname{obj}(\operatorname{Arr}(\mathcal{C}))\) and its dual \(f^{*}:A^{*}\longrightarrow B^{*}\) we can now define maps in Arr(\(\mathcal{C}\)):
\[d_{f}:f^{*}\otimes_{\operatorname{Arr}(\mathcal{C})}f\longrightarrow \operatorname{id}_{\mathbb{I}_{\mathcal{C}}} \tag{15}\]
and
\[b_{f}:\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}\longrightarrow f\otimes_{\operatorname{Arr}(\mathcal{C})}f^{*}, \tag{16}\]
such that
and
commute. The first snake identity in Arr(\(\mathcal{C}\)) is given by:
The top and the bottom face commute because \(b\) and \(d\) fulfil the snake identities in \(\mathcal{C}\) for the dualities \(A\dashv A^{*}\) and \(B\dashv B^{*}\) respectively. The two side faces commute due to the definition of the monoidal product and the (co)evaluation map in \(\operatorname{Arr}(\mathcal{C})\). Finally, it is easy to see that the back face also commutes and thus the whole diagram commutes. Analogously, one can show that the second snake identity is satisfied in \(\operatorname{Arr}(\mathcal{C})\).
In terms of the graphical calculus, commutativity of the diagram corresponding to Eq. 15 means that the following equation has to hold:
Inserting this into the snake identity in \(\mathcal{C}\), yields:
For the other snake identity we get a similar expression. From this we can conclude that \(f\) has to be an isomorphism with inverse:
On the other hand, if we assume that \(f\) is invertible with inverse given by the morphism in the diagram above, it is easy to verify that the diagrams corresponding to Eq. 15 and Eq. 16 commute, and hence that the snake identities in \(\operatorname{Arr}(\mathcal{C})\) are satisfied. From this one can conclude that an object \(f\) in \(\operatorname{Arr}(\mathcal{C})\) has a dual if and only if it is invertible.
**Example 2.19**.: In \(\operatorname{\mathbf{Mat}}(\mathbb{N})\), only invertible matrices have duals.
In the following we will denote the core of an arbitrary category \(\mathcal{C}\) with \(\mathcal{C}_{core}\).
**Theorem 2.20**.: _Let \(\mathcal{C}\) be a pivotal category with pivot \(\pi_{A}:A\longrightarrow A^{**}\) for \(A\)\(\in\) obj(\(\mathcal{C}\)). Then Arr(\(\mathcal{C}_{core}\)) is also pivotal with pivot \(\tilde{\pi}_{f}=(\pi_{A},\pi_{B}):f\longrightarrow f^{**}\) for \(f:A\longrightarrow B\)\(\in\) obj(Arr(\(\mathcal{C}\)))._
Proof.: Since the pivot is a monoidal natural transformation between the identity functor and the functor that sends each object to its double dual, we can apply Proposition 2.13 and Proposition 2.4 to define a monoidal natural transformation between the identity functor on \(\operatorname{Arr}(\mathcal{C}_{core})\) and the functor that sends each object in \(\operatorname{Arr}(\mathcal{C}_{core})\) to its double dual.
**Example 2.21**.: The arrow category of the subcategory of \(\operatorname{\mathbf{Mat}}(\mathbb{N})\) where all matrices are invertible is a pivotal symmetric monoidal category.
**Proposition 2.22**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a ribbon category, i. e. a rigid monoidal category with a twist \(\theta_{A}:A\longrightarrow A\). Then Arr(\(\mathcal{C}_{core}\)) is a ribbon category with twist \((\theta_{A},\theta_{B}):f\longrightarrow f\), i. e. we have:_
Proof.: In order to define a twist \((\theta_{A},\theta_{B}):f\longrightarrow f\), the following diagram
has to commute:
Here the bottom and the top face commute because \(\theta\) is a twist in \(\mathcal{C}\). Moreover, the front face commutes due to naturality of \(\theta\) in \(\mathcal{C}\). The left and the back face commute due to naturality of the braiding and its inverse and the right face commutes because of the definition of the monoidal product and naturality of the braiding.
The second identity that has to be fulfilled is given by
\[(\theta_{\mathbb{I}_{\mathcal{C}}},\theta_{\mathbb{I}_{\mathcal{C}}})=( \operatorname{id}_{\mathbb{I}_{\mathcal{C}}},\operatorname{id}_{\mathbb{I}_{ \mathcal{C}}})\]
This equation holds because \(\theta\) is a twist in \(\mathcal{C}\) and hence: \(\theta_{\mathbb{I}_{\mathcal{C}}}=\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}\). The last condition that has to be satisfied is given by:
\[(\theta_{B^{*}},\theta_{A^{*}})=(\theta_{A},\theta_{B})^{*}=(\theta_{B}^{*}, \theta_{A}^{*})\]
which holds since \(\theta\) is a twist and hence we have: \((\theta_{B}^{*},\theta_{A}^{*})=(\theta_{B^{*}},\theta_{A^{*}})\).
**Example 2.23**.: Consider again the arrow category of the subcategory of \(\mathbf{Mat}(\mathbb{N})\) where all matrices are invertible. The pivot induces a twist on this category via \(\theta_{A}=\operatorname{id}_{A}\) and hence this category is a ribbon category. In particular, it is a compact category.
### (Co)monoids, Bialgebras, Frobenius Structures and Hopf Algebras in Arrow Categories
In this section we will show that the (co)monoids in arrow categories are given by (co)monoid-morphisms. With that, we can then derive a notion of
bialgebras, Frobenius algebras and Hopf algebras in arrow categories. In the following we will use \(\eta\) to denote the unit of a monoid in a category. This should not be confused with a natural transformation.
**Theorem 2.24**.: _Let \(\mathcal{C}\) be a monoidal category and (A, \(\mu_{A}\), \(\eta_{A}\)) and (B, \(\mu_{B}\), \(\eta_{B}\)) be monoids in \(\mathcal{C}\). Then a morphism of monoids \(f:A\longrightarrow B\) in \(\mathcal{C}\) is a monoid in Arr(\(\mathcal{C}\)). If the monoids \(A\) and \(B\) are commutative, so is the monoid \(f:A\longrightarrow B\) in Arr(\(\mathcal{C}\))._
Proof.: Let \(\mathcal{C}\) be a monoidal category and (\(A\), \(\mu_{A}\), \(\eta_{A}\)) and (\(B\), \(\mu_{B}\), \(\eta_{B}\)) be monoids in \(\mathcal{C}\). We can then define a monoid object \(f:A\longrightarrow B\) in Arr(\(\mathcal{C}\)), where \(f\) is a morphism of monoids, via the following construction:
* the multiplication is given by a morphism \(\tilde{\mu}=(\mu_{A},\mu_{B})\) such that \[\begin{CD}A\otimes_{\mathcal{C}}A@>{\mu_{A}}>{}>A\\ @V{f\otimes_{\mathcal{C}}f}V{}V@V{}V{f}V\\ B\otimes_{\mathcal{C}}B@>{\mu_{B}}>{}>B\end{CD}\] commutes.
* the unitor is a morphism \(\tilde{\eta}=(\eta_{A},\eta_{B})\) such that \[\begin{CD}\mathbb{I}_{\mathcal{C}}@>{\eta_{A}}>{}>A\\ @V{\operatorname{id}_{\mathbb{I}_{\mathcal{C}}}}V{}V@V{}V{f}V\\ \mathbb{I}_{\mathcal{C}}@>{\eta_{B}}>{}>B\end{CD}\] commutes.
We still need to show that the pentagon axiom and the unitor diagrams commute. This will be done in the appendix.
Analogously, one can show that the following statement is true:
**Theorem 2.25**.: _Given a category \(\mathcal{C}\), then a comonoid morphism in \(\mathcal{C}\) is a comonoid in Arr(\(\mathcal{C}\)). If the comonoid in \(\mathcal{C}\) is cocommutative, then this also holds for the comonoid in Arr(\(\mathcal{C}\))._
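As a concrete instance of Theorem 2.24, take \(\mathcal{C}=\mathbf{Vect}_{\mathbb{R}}\) presented by matrices. The sketch below (the particular algebras are illustrative choices) checks that the diagonal map \(\mathbb{R}\longrightarrow\mathbb{R}^{2}\) is a morphism of monoids, and hence a monoid object in the arrow category:

```python
import numpy as np

# Monoid A = R: multiplication mu_A : R (x) R -> R and unit eta_A : R -> R.
mu_A  = np.array([[1]])
eta_A = np.array([[1]])

# Monoid B = R^2 with pointwise multiplication mu_B : R^4 -> R^2
# (e1 (x) e1 |-> e1, e2 (x) e2 |-> e2, cross terms |-> 0) and unit (1, 1).
mu_B  = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 1]])
eta_B = np.array([[1],
                  [1]])

# The diagonal f : R -> R^2, x |-> (x, x), is a morphism of monoids:
f = np.array([[1],
              [1]])

assert np.array_equal(f @ mu_A, mu_B @ np.kron(f, f))   # square for tilde-mu
assert np.array_equal(f @ eta_A, eta_B)                 # triangle for tilde-eta
print("f is a (commutative) monoid object in Arr(Vect_R)")
```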
Having defined both monoid and comonoid objects in Arr(\(\mathcal{C}\)), we can now define bialgebra objects in Arr(\(\mathcal{C}\)). In the following, \(\Delta\) denotes the comultiplication and \(\epsilon\) is the counit of a comonoid in \(\mathcal{C}\).
**Theorem 2.26**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a monoidal category and (\(A\), \(\mu_{A}\), \(\eta_{A}\), \(\Delta_{A}\), \(\epsilon_{A}\)) and (\(B\), \(\mu_{B}\), \(\eta_{B}\), \(\Delta_{B}\), \(\epsilon_{B}\)) be both a monoid and a comonoid in \(\mathcal{C}\) such that the bialgebra axiom is satisfied. Then a morphism \(f:A\longrightarrow B\) in \(\mathcal{C}\), which is both a monoid and comonoid morphism, is a bialgebra object in Arr(\(\mathcal{C}\))._
Proof.: Let (\(A\), \(\mu_{A}\), \(\eta_{A}\), \(\Delta_{A}\), \(\epsilon_{A}\)) and (\(B\), \(\mu_{B}\), \(\eta_{B}\), \(\Delta_{B}\), \(\epsilon_{B}\)) be two objects which are both a monoid and a comonoid and both satisfy the bialgebra axiom. We then have that each \(f:A\longrightarrow B\) which is both a monoid and a comonoid morphism is a monoid and a comonoid in Arr(\(\mathcal{C}\)). The morphism \(f\) is a bialgebra object in Arr(\(\mathcal{C}\)), if it satisfies the two bialgebra axioms in Arr(\(\mathcal{C}\)). The first axiom requires that the following diagram commutes:
The left face commutes because \(f\) is a morphism of comonoids and because of the definition of the monoidal product in Arr(\(\mathcal{C}\)). Analogously, the right face commutes since \(f\) is a morphism of comonoids. Because \(A\) and \(B\) are bialgebra objects in \(\mathcal{C}\), the top and the bottom face commute. The back face and the right square in the front commute because \(f\) is a monoid morphism and because of the definition of the monoidal product in Arr(\(\mathcal{C}\)). Finally, the left square in the front also commutes because of the definition of the braiding in \(\mathcal{C}\) and the monoidal product in Arr(\(\mathcal{C}\)) and thus the whole diagram commutes.
The second bialgebra axiom requires that the following diagram commutes:
Here the back face commutes because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and because \(f\) is a morphism of comonoids. The left and the front face also commute, because \(f\) is a morphism of monoids. Finally, the top and bottom face commute as \(A\) and \(B\) are bialgebra objects in \(\mathcal{C}\) and thus the whole diagram commutes.
**Proposition 2.27**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a monoidal category and \(A\), \(B\) be two Frobenius structures in \(\mathcal{C}\) (in particular, \(A\) and \(B\) are both monoids and comonoids). Then an object \(f:A\longrightarrow B\) in \(\operatorname{Arr}(\mathcal{C})\), which is both a monoid and a comonoid morphism is a Frobenius structure in \(\operatorname{Arr}(\mathcal{C})\). If \(A\) and \(B\) are special Frobenius algebras in \(\mathcal{C}\), then \(f:A\longrightarrow B\) is a special Frobenius structure in \(\operatorname{Arr}(\mathcal{C})\)._
Proof.: The Frobenius law in \(\operatorname{Arr}(\mathcal{C})\) is given by the following diagram:
[Diagram: the Frobenius law in \(\operatorname{Arr}(\mathcal{C})\), drawn as a commutative cube whose vertices involve \(A\otimes A\otimes A\), \(A\otimes A\), \(A\) and \(B\otimes B\otimes B\), \(B\otimes B\), \(B\).]
The top and the bottom face of the diagram commute because \(A\) and \(B\) are Frobenius algebras in \(\mathcal{C}\). The left face commutes because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and because \(f\) is a morphism of comonoids. Analogously, the back face commutes. Finally, the front and the right face commute because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and because \(f\) is a morphism of monoids and thus the whole diagram commutes.
If \(A\) and \(B\) are both special, then the following diagram also commutes as \(f\) is both a morphism of monoids and comonoids:
and hence \(f\) is also special.
**Proposition 2.28**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a monoidal dagger category and \(A\), \(B\) be two objects in \(\mathcal{C}\) which are dagger Frobenius algebras in \(\mathcal{C}\). Then a unitary morphism of both monoids and comonoids \(f:A\longrightarrow B\) is a dagger Frobenius structure in \(\operatorname{Arr}(\mathcal{C})\)._
Proof.: Let \(f:A\longrightarrow B\) be a morphism of bialgebras, i. e. a morphism of monoids and comonoids that is also unitary: \(f^{\dagger}=f^{-1}\). We can define a multiplication and a comultiplication in \(\operatorname{Arr}(\mathcal{C})\) via
\[\mu_{f} =(\mu_{A},\mu_{B}):f\otimes_{\mathcal{C}}f\longrightarrow f\quad \text{ and }\] \[\Delta_{f} =(\Delta_{A},\Delta_{B}):f\longrightarrow f\otimes_{\mathcal{C}}f\]
respectively. Applying the dagger functor from Prop. 2.7 on the comultiplication then gives
\[\Delta_{f}^{\dagger}=(\Delta_{A},\Delta_{B})^{\dagger}=(\Delta_{A}^{\dagger}, \Delta_{B}^{\dagger})=(\mu_{A},\mu_{B})=\mu_{f}\]
where we used the fact that \(A\) and \(B\) are dagger Frobenius algebras. Similarly, we get that
\[\epsilon_{f}^{\dagger}=(\epsilon_{A}^{\dagger},\epsilon_{B}^{\dagger})=(\eta_{A}, \eta_{B})=\eta_{f}\]
and hence \(f:A\longrightarrow B\) is a dagger Frobenius structure in \(\operatorname{Arr}(\mathcal{C})\).
This leads us to the following theorem:
**Theorem 2.29**.: _Let \((\mathcal{C},\otimes_{\mathcal{C}},\mathbb{I}_{\mathcal{C}})\) be a braided monoidal dagger category and let \(A\) and \(B\) be two objects in \(\mathcal{C}\) which are commutative special dagger Frobenius algebras in \(\mathcal{C}\). Then a unitary morphism of Frobenius algebras \(f:A\longrightarrow B\) is a commutative special dagger Frobenius structure in \(\operatorname{Arr}(\mathcal{C})\)._
Proof.: We only need to apply Theorem 2.24, Proposition 2.27 and Proposition 2.28.
**Theorem 2.30**.: _Let \(\mathcal{C}\) be a monoidal category and (\(A\), \(\mu_{A}\), \(\eta_{A}\), \(\Delta_{A}\), \(\epsilon_{A}\), \(S_{A}\)) and (\(B\), \(\mu_{B}\), \(\eta_{B}\), \(\Delta_{B}\), \(\epsilon_{B}\), \(S_{B}\)) be Hopf algebra objects in \(\mathcal{C}\) where \(S_{A}:A\longrightarrow A\) and \(S_{B}:B\longrightarrow B\) are antipodes in \(\mathcal{C}\). Then morphisms of Hopf algebras are Hopf algebra objects in \(\operatorname{Arr}(\mathcal{C})\)._
Proof.: Let \(\mathcal{C}\) be a monoidal category and (\(A\), \(\mu_{A}\), \(\eta_{A}\), \(\Delta_{A}\), \(\epsilon_{A}\), \(S_{A}\)) and (\(B\), \(\mu_{B}\), \(\eta_{B}\), \(\Delta_{B}\), \(\epsilon_{B}\), \(S_{B}\)) be Hopf algebra objects in \(\mathcal{C}\), where \(S_{A}:A\longrightarrow A\) and \(S_{B}:B\longrightarrow B\) are antipodes in \(\mathcal{C}\). We can then define an antipode \(S=(S_{A},S_{B})\)
where \(f\) is a morphism of antipodes and thus a Hopf algebra object in \(\operatorname{Arr}(\mathcal{C})\).
We now have to prove that the Hopf algebra axiom is satisfied if \(f\) is a morphism of Hopf algebras. In \(\operatorname{Arr}(\mathcal{C})\) this means that the following diagram
has to commute:
Because \(f\) is a morphism of comonoids, the left and the right side face commute. Analogously, the second square of the front face and the back face commute, since \(f\) is a morphism of monoids. The first square of the front face commutes because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and the definition of the antipode in \(\operatorname{Arr}(\mathcal{C})\). Finally, the top and the bottom face commute as \(A\) and \(B\) are Hopf algebras in \(\mathcal{C}\) and hence the whole diagram commutes.
### Proofs
**Proof of Proposition 2.8 continued:**
Proof.: We want to show that the pentagon and the triangle axiom are satisfied for the monoidal product defined in the beginning of this proof. The pentagon axiom in \(\operatorname{Arr}(\mathcal{C})\) is given by the diagram on the next page where the front and the back face commute because \(\alpha\) satisfies the pentagon axiom (\(\alpha\) is the associator in \(\mathcal{C}\)). The two side faces commute due to the definition of the monoidal product and naturality of the associator in \(\mathcal{C}\). The two top faces and the bottom face also commute due to naturality of the associator in \(\mathcal{C}\) and hence the whole diagram commutes.4
Footnote 4: We have abbreviated the labels of the monoidal products and the associators trying to make the diagram more readable.
The triangle axiom is given by the diagram:
Here the top and the bottom face commute because \(\rho\) and \(\alpha\) satisfy the triangle identity in \(\mathcal{C}\) and the two side faces commute due to naturality of the left and right unitors in \(\mathcal{C}\). Finally, the back face commutes because of the naturality of the associator in \(\mathcal{C}\).
**Proof of Theorem 2.24 continued:**
Proof.: We will first show that the multiplication in the above construction satisfies the pentagon axiom. For an arbitrary morphism \(f:A\longrightarrow B\) between two monoid objects \(A\), \(B\in\operatorname{obj}(\mathcal{C})\) the pentagon axiom in \(\mathcal{C}\) gives rise to the following diagram:
The front and the back face commute because \(A\) and \(B\) are monoids in \(\mathcal{C}\). Moreover, the bottom and the right face commute because \(f\) is a morphism of monoids. The left face commutes because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and because \(f\) is a morphism of monoids. The left square in the top commutes because of naturality of the associator in \(\mathcal{C}\) and the right square in the top commutes because of the definition of the monoidal product in \(\operatorname{Arr}(\mathcal{C})\) and because \(f\) is a morphism of monoids. Hence the whole diagram commutes and the pentagon axiom is satisfied.
Moreover, we need to show that the unitor diagram commutes:
The top and the bottom face commute because \(A\) and \(B\) are monoids in \(\mathcal{C}\). The left and the right face commute due to the definition of left and right unitors in \(\mathcal{C}\) and the definition of the monoidal product in \(\mathrm{Arr}(\mathcal{C})\). Because \(f\) is a morphism of monoids, the inner face also commutes. The two squares in the back commute as well, because of the definition of the monoidal product in \(\mathrm{Arr}(\mathcal{C})\) and because \(f\) is a monoid morphism. Hence the whole diagram commutes.
Finally, commutativity means that the following diagram has to commute
which holds if both \(A\) and \(B\) are commutative monoids in \(\mathcal{C}\). |
2309.13653 | Probabilistic Bounds for Data Storage with Feature Selection and
Undersampling | In this paper we consider data storage from a probabilistic point of view and
obtain bounds for efficient storage in the presence of feature selection and
undersampling, both of which are important from the data science perspective.
First, we consider encoding of correlated sources for nonstationary data and
obtain a Slepian-Wolf type result for the probability of error. We then
reinterpret our result by allowing one source to be the set of features to be
discarded and other source to be remaining data to be encoded. Next, we
consider neighbourhood domination in random graphs where we impose the
condition that a fraction of neighbourhood must be present for each vertex and
obtain optimal bounds on the minimum size of such a set. We show how such sets
are useful for data undersampling in the presence of imbalanced datasets and
briefly illustrate our result using~\(k-\)nearest neighbours type
classification rules as an example. | Ghurumuruhan Ganesan | 2023-09-24T14:40:35Z | http://arxiv.org/abs/2309.13653v1 | # Probabilistic Bounds for Data Storage with Feature Selection and Undersampling
###### Abstract
In this paper we consider data storage from a probabilistic point of view and obtain bounds for efficient storage in the presence of feature selection and undersampling, both of which are important from the data science perspective. First, we consider encoding of correlated sources for nonstationary data and obtain a Slepian-Wolf type result for the probability of error. We then reinterpret our result by allowing one source to be the set of features to be discarded and other source to be remaining data to be encoded. Next, we consider neighbourhood domination in random graphs where we impose the condition that a fraction of neighbourhood must be present for each vertex and obtain optimal bounds on the minimum size of such a set. We show how such sets are useful for data undersampling in the presence of imbalanced datasets and briefly illustrate our result using \(k-\)nearest neighbours type classification rules as an example.
Keywords: Data Storage, Probabilistic Bounds, Correlated Encoding, Feature Selection, Neighbourhood Domination, Data Undersampling. **AMS 2000 Subject Classification:** Primary: 94A15, 94A24.
## 1 Introduction
Data feature selection and undersampling are important topics from both theoretical and practical perspectives and in this paper, we use a probabilistic approach to obtain theoretical bounds for data storage after feature selection and/or undersampling. Feature selection methods broadly fall into two categories: supervised and unsupervised. In supervised feature selection, the target variable is used as a guide in deciding which features of the data to retain and which features to discard. On the other hand, as the name suggests, unsupervised feature selection is done without using the target variable. There are many statistical methods available for both supervised and unsupervised feature selection and for more details, we refer to Chapter 19 of [9].
We are interested in the problem of efficient data storage _after_ feature selection using Slepian-Wolf coding. Apart from traditional source coding, Slepian-Wolf coding has also been extensively used in wireless networks. For example [12]
described distributed compression techniques for dense microsensor networks using syndromes and later [8] proposed Slepian-Wolf cooperation to improve inter-user outage performance. In a related work [17] used Slepian-Wolf encoding for data aggregation in cluster-based wireless networks and [14] studied the performance of broadcasting with base-station cooperation and Slepian-Wolf coding.
In this paper, we obtain a Slepian-Wolf type encoding result for non-stationary data and illustrate how information from the discarded features could be used to store the remaining data in a more efficient manner.
In the next part of our paper, we study neighbourhood domination in random graphs with applications to data undersampling. Graph domination is important from both theoretical and application perspectives and many variants of graph domination have been studied in different contexts. For domination in random graphs, [16] obtains two-point concentration for the domination number of \(G\) when \(p\) is essentially a constant, and this was later extended to a wider range of \(p\) in [6]. Since then many other variants of domination have also been studied (see for e.g. [2][15]) and recently, [5] studies robustness of dominating sets when a small deterministic set of edges is removed from the parent complete graph.
In this paper, we study neighbourhood domination in random graphs where we impose the condition that a fraction of the neighbourhood of each vertex is present in the resulting dominating set. We use the probabilistic method to obtain sufficient conditions for the existence of neighbourhood dominating sets of minimum possible size and demonstrate optimality of our estimates by obtaining a lower bound of the same order, for homogenous random graphs with common edge probability \(p\). We also briefly illustrate a data undersampling methodology using neighbourhood domination and explain its applications to imbalanced learning.
The paper is organized as follows: In Section 2, we state and prove our first result regarding a Slepian-Wolf type bound for non-stationary data. As a consequence, we also demonstrate how savings in data storage could be achieved by using information from the discarded features. Next in Section 3, we describe our second result involving the minimum size of neighbourhood dominating sets in random graphs. We also illustrate an application of our methodology in constrained undersampling of imbalanced data. Finally in Section 4, we state our conclusion and propose potential future directions.
## 2 Dependent Encoding based on Feature Selection
Let \(\mathcal{X}\) and \(\mathcal{Y}\) be arbitrary finite sets and for \(1\leq i\leq n\), let \((X_{i},Y_{i})\in\mathcal{X}\times\mathcal{Y}\) be a random element with distribution \(\mathbb{P}(X_{i}=a,Y_{i}=b)=p_{i}(a,b)\) and corresponding marginals
\[p_{i,X}(.):=\sum_{b\in\mathcal{Y}}p_{i}(.,b)\text{ and }p_{i,Y}(.):=\sum_{a\in \mathcal{X}}p_{i}(a,.).\]
The tuples \((X_{i},Y_{i}),1\leq i\leq n\) are independent but not necessarily identically distributed.
An \(n-\)length binary code of rate \(R\) is a deterministic set \(\mathcal{C}\subset\{0,1\}^{n}\) of size \(2^{nR}\).
Definition 1: An \(n-\)length, \(Y-\)dependent encoder based on \(\mathcal{C}\) is a set of one-to-one maps \(f:=\{f_{y}(.)\}_{y\in\mathcal{Y}^{n}}\) such that \(f_{y}:\mathcal{C}\to\mathcal{X}^{n}\) for each \(y\). We define the probability of error for the encoding scheme \((f,\mathcal{C})\) as
\[q(f,\mathcal{C}):=\mathbb{P}\left(U\notin f_{V}(\mathcal{C})\right), \tag{2.1}\]
where \(U:=(X_{1},\ldots,X_{n})\) and \(V:=(Y_{1},\ldots,Y_{n})\).
In the context of data analysis, \((X_{i},Y_{i})\) represents the \(i^{th}\) data point where \(X_{i}\) represents the part of the data to be encoded and \(Y_{i}\) is the part of the data that is to be discarded based on feature selection. The goal is to use the information from \((Y_{1},\ldots,Y_{n})\) to encode \((X_{1},\ldots,X_{n})\) as efficiently as possible and we explain this in more detail at the end of this section.
We also remark that we do not assume that \((X_{i},Y_{i})\) are identically distributed to allow for the possibility of measurement errors in data due to inherent statistical noise and/or human errors.
Defining
\[H(X_{i},Y_{i}):=-\sum_{x\in\mathcal{X},y\in\mathcal{Y}}p_{i}(x,y)\log p_{i}(x, y)\text{ and }H(Y_{i}):=-\sum_{y\in\mathcal{Y}}p_{i,Y}(y)\log p_{i,Y}(y) \tag{2.2}\]
to be the respective entropies of \((X_{i},Y_{i})\) and \(Y_{i}\), we have the following result. Throughout, logarithms are to the base \(2\) and constants do not depend on \(n\).
Theorem 2.1: _Suppose_
\[\epsilon_{1}\leq\min_{i,x,y}p_{i}(x,y)\leq\max_{i,x,y}p_{i}(x,y)\leq\epsilon_{ 2},\ \ \ \epsilon_{1}\leq\min_{i,y}p_{i,Y}(y)\leq\max_{i,y}p_{i,Y}(y)\leq\epsilon_{2} \tag{2.3}\]
_and_
\[\frac{1}{n}\sum_{i=1}^{n}H(X_{i},Y_{i})\longrightarrow H_{XY}\text{ and }\frac{1}{n}\sum_{i=1}^{n}H(Y_{i})\longrightarrow H_{Y} \tag{2.4}\]
_for some positive finite constants \(\epsilon_{1},\epsilon_{2},H_{XY}\) and \(H_{Y}\). Let \(\epsilon>0\) be an arbitrary constant and let \(\mathcal{C}\) be any deterministic \(n-\)length binary code of size \(2^{nR}\) for some constant \(0<R<1\). \((a)\) If \(R>H_{XY}-H_{Y}+5\epsilon\) then there exists an encoding scheme \((f_{0},\mathcal{C})\) such that the error probability_
\[q(f_{0},\mathcal{C})\leq 4\epsilon. \tag{2.5}\]
\((b)\) If \(R<H_{XY}-H_{Y}-3\epsilon\) then for any encoding scheme \((f,\mathcal{C})\) we have
\[q(f,\mathcal{C})\geq 1-4\epsilon. \tag{2.6}\]
If the \((X_{i},Y_{i})\) are independent and identically distributed (i.i.d.) then \(H_{XY}=H(X,Y)\) and \(H_{Y}=H(Y)\) are entropies as in (2.2) and so \(H_{XY}-H_{Y}=H(X\mid Y)\) is the conditional entropy of \(X\) given \(Y\). This is the usual Slepian-Wolf encoding
bound for correlated sources and for more details in this setting, we refer to Chapter 15, pp. 549 of [3].
In general, for non-i.i.d. distributions, Theorem 1 obtains a sharp threshold for the probability of encoding error for rates above and below the threshold value of \(H_{XY}-H_{Y}\).
_Proof of Theorem 1_: We begin with some preliminary computations regarding the size of certain typical sets used in our encoding process. Let \(U=(X_{1},\ldots,X_{n}),V=(Y_{1},\ldots,Y_{n})\) where \(\{(X_{i},Y_{i})\}\) are as described prior to (2.1) and let \(p_{UV}(.,.)\) be the distribution of \((U,V)\) with corresponding marginals
\[p_{U}(.):=\sum_{y\in\mathcal{Y}^{n}}p_{UV}(.,y)\text{ and }p_{V}(.):=\sum_{x\in \mathcal{X}^{n}}p_{UV}(x,.).\]
For \(\epsilon>0\) arbitrary, we define the typical set
\[E_{Y}:=\{y\in\mathcal{Y}^{n}:2^{-n(H_{Y}+\epsilon)}\leq p_{V}(y)\leq 2^{-n(H_{Y} -\epsilon)}\} \tag{2.7}\]
where \(H_{Y}\) is as in (2.4). To estimate the probability of the event \(E_{Y}\) we first use the fact that \(\{Y_{i}\}\) are independent to get that
\[\mathbb{E}\left(-\frac{1}{n}\sum_{i=1}^{n}(\log p_{i,Y}(Y_{i})+H(Y_{i}))\right) ^{2}=\frac{1}{n^{2}}\sum_{i=1}^{n}\mathbb{E}(\log p_{i,Y}(Y_{i})+H(Y_{i}))^{2}, \tag{2.8}\]
since each summand \(\log p_{i,Y}(Y_{i})+H(Y_{i})\) has zero mean. Now from (2.3), we know that \(\mathbb{E}(\log p_{i,Y}(Y_{i})+H(Y_{i}))^{2}\leq C\) for some constant \(C>0\) and so
\[\mathbb{E}\left(-\frac{1}{n}\sum_{i=1}^{n}(\log p_{i,Y}(Y_{i})+H(Y_{i}))\right) ^{2}\leq\frac{C}{n}\longrightarrow 0 \tag{2.9}\]
as \(n\rightarrow\infty\). Combining (2.9) with (2.4), we then get that
\[\mathbb{E}\left(-\frac{1}{n}\log p_{V}(V)-H_{Y}\right)^{2}\longrightarrow 0\]
as \(n\rightarrow\infty\). Thus \(-\frac{1}{n}\log p_{V}(V)\longrightarrow H_{Y}\) in probability and so
\[\mathbb{P}(E_{Y})\geq 1-\epsilon \tag{2.10}\]
for all \(n\) large.
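The convergence \(-\frac{1}{n}\log p_{V}(V)\longrightarrow H_{Y}\) underlying (2.10) is easy to check numerically. The following Monte Carlo sketch (a binary alphabet and the particular choice of the distributions \(p_{i,Y}\) are illustrative assumptions consistent with (2.3)) compares the normalized log-likelihood with the average entropy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Independent, non-identically distributed Y_i over {0, 1} with
# P(Y_i = 1) = q_i in [0.2, 0.8], so that condition (2.3) holds.
q = rng.uniform(0.2, 0.8, size=n)
Y = (rng.random(n) < q).astype(int)

# Compare -(1/n) log2 p_V(V) with the average entropy (1/n) sum_i H(Y_i).
log_pV = np.where(Y == 1, np.log2(q), np.log2(1 - q))
empirical = -log_pV.mean()
H_avg = -(q * np.log2(q) + (1 - q) * np.log2(1 - q)).mean()
print(f"-(1/n) log2 p_V(V) = {empirical:.4f}, (1/n) sum H(Y_i) = {H_avg:.4f}")
# The two values agree up to O(1/sqrt(n)) fluctuations, illustrating P(E_Y) -> 1.
```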
Similarly defining
\[E_{XY}:=\{(x,y)\in\mathcal{X}^{n}\times\mathcal{Y}^{n}:2^{-n(H_{XY}+\epsilon) }\leq p_{UV}(x,y)\leq 2^{-n(H_{XY}-\epsilon)}\} \tag{2.11}\]
and performing an analogous analysis as above, we have for all \(n\) large that \(\mathbb{P}(E_{XY})\geq 1-\epsilon\). From (2.10) and the union bound, we therefore get that
\[\mathbb{P}(E_{XY}\cap E_{Y})\geq 1-2\epsilon. \tag{2.12}\]
If \(\#E_{Y}\) denotes the cardinality of \(E_{Y}\), then by definition
\[1\geq\mathbb{P}(E_{Y})=\sum_{y\in E_{Y}}p_{V}(y)\geq\#E_{Y}\cdot\frac{1}{2^{n(H_ {Y}+\epsilon)}}\]
and so \(\#E_{Y}\leq 2^{n(H_{Y}+\epsilon)}\). Similarly using (2.10) we also have that
\[1-\epsilon\leq\mathbb{P}(E_{Y})=\sum_{y\in E_{Y}}p_{V}(y)\leq\#E_{Y}\cdot\frac{ 1}{2^{n(H_{Y}-\epsilon)}}\]
and so \(\#E_{Y}\geq(1-\epsilon)2^{n(H_{Y}-\epsilon)}\). Combining we get
\[(1-\epsilon)2^{n(H_{Y}-\epsilon)}\leq\#E_{Y}\leq 2^{n(H_{Y}+\epsilon)} \tag{2.13}\]
and arguing similarly we also get that
\[(1-\epsilon)2^{n(H_{XY}-\epsilon)}\leq\#E_{XY}\leq 2^{n(H_{XY}+\epsilon)}. \tag{2.14}\]
This completes the preliminary computations part of our proof.
For obtaining Theorem 1\((a)\), we argue as follows. For \(y\in E_{Y}\) define
\[A_{y}:=\{x\in\mathcal{X}^{n}:(x,y)\in E_{XY}\} \tag{2.15}\]
to be the slice of \(E_{XY}\) for a given \(y\). If
\[B_{Y}:=\{y\in E_{Y}:\#A_{y}\geq 2^{n(H_{XY}-H_{Y}+5\epsilon)}\},\]
then necessarily \(\#B_{Y}\leq 2^{n(H_{Y}-3\epsilon)}\); for otherwise, we would get
\[\#E_{XY}\geq\#B_{Y}\cdot 2^{n(H_{XY}-H_{Y}+5\epsilon)}\geq 2^{n(H_{XY}+2 \epsilon)}\]
and this contradicts (2.14). Thus \(\#B_{Y}\leq 2^{n(H_{Y}-3\epsilon)}\) and so from the definition of \(E_{Y}\) in (2.7) we get that
\[\mathbb{P}(B_{Y})\leq 2^{n(H_{Y}-3\epsilon)}\cdot 2^{-n(H_{Y}-\epsilon)}=\frac{ 1}{2^{2n\epsilon}}.\]
Since \(B_{Y}\subset E_{Y}\), we get from (2.10) that
\[\mathbb{P}(E_{Y}\setminus B_{Y})\geq 1-\epsilon-\frac{1}{2^{2n\epsilon}}. \tag{2.16}\]
For each \(y\in E_{Y}\setminus B_{Y}\), we have that \(\#A_{y}\leq 2^{n(H_{XY}-H_{Y}+5\epsilon)}\) and so, letting \(\mathcal{C}\) be any \(n-\)length binary code of size \(2^{n(H_{XY}-H_{Y}+5\epsilon)}\), we can define a one-to-one map \(f_{y}:\mathcal{C}\to\mathcal{X}^{n}\) whose image contains \(A_{y}\), for each \(y\in E_{Y}\setminus B_{Y}\).
\[\mathbb{P}((U,V)\notin E_{XY})+\mathbb{P}(V\notin E_{Y}\setminus B_{Y})\leq 3 \epsilon+\frac{1}{2^{2n\epsilon}}\leq 4\epsilon\]
for all \(n\) large. This obtains the upper bound (2.5) and therefore completes the proof of part \((a)\).
For the lower bound (2.6) in part \((b)\), we argue as follows. Again let \((U,V)\) be the random tuple as described in the first paragraph of this proof and for \(y\in\mathcal{Y}^{n}\), let \(A_{y}\) be the "slice" of the set \(E_{XY}\) as defined in (2.15). We have that
\[\mathbb{P}(A_{V}\cap E_{Y})=\mathbb{P}\left(\{(U,V)\in E_{XY}\}\cap\{V\in E_{ Y}\}\right)\geq 1-2\epsilon, \tag{2.17}\]
as seen from (2.12). Expanding in terms of the elements of \(E_{Y}\), we also get
\[\mathbb{P}(A_{V}\cap E_{Y})=\sum_{y\in E_{Y}}\mathbb{P}_{y}(A_{y})p_{V}(y)\]
where
\[\mathbb{P}_{y}(A_{y}):=\mathbb{P}(A_{V}\mid V=y)=\sum_{x\in A_{y}}p_{U\mid V }(x\mid y) \tag{2.18}\]
and \(p_{U\mid V}(x\mid y):=\frac{p_{UV}(x,y)}{p_{V}(y)}\). Therefore defining
\[C_{Y}:=\{y\in E_{Y}:\mathbb{P}_{y}(A_{y})\geq\epsilon\}\]
we also get the upper bound
\[\mathbb{P}(A_{V}\cap E_{Y}) \leq\sum_{y\in C_{Y}}p_{V}(y)+\epsilon\sum_{y\in E_{Y}\setminus C _{Y}}p_{V}(y)\] \[\leq\mathbb{P}(C_{Y})+\epsilon. \tag{2.19}\]
Combining (2.17) and (2.19), we get that
\[\mathbb{P}(C_{Y})\geq 1-3\epsilon. \tag{2.20}\]
For any \(y\in C_{Y}\subset E_{Y}\) and \(x\in A_{y}\), we have by definition of \(E_{Y}\) and \(E_{XY}\) (see (2.7) and (2.11)) that
\[p_{U\mid V}(x\mid y) =\frac{p_{UV}(x,y)}{p_{V}(y)}\] \[\leq\frac{2^{-n(H_{XY}-\epsilon)}}{2^{-n(H_{Y}+\epsilon)}}\] \[=2^{-n(H_{XY}-H_{Y}-2\epsilon)}. \tag{2.21}\]
Thus
\[\epsilon\leq\mathbb{P}_{y}(A_{y})=\sum_{x\in A_{y}}p_{U\mid V}(x\mid y)\leq \#A_{y}\cdot 2^{-n(H_{XY}-H_{Y}-2\epsilon)}\]
and so we get that \(\#A_{y}\geq\epsilon\cdot 2^{n(H_{XY}-H_{Y}-2\epsilon)}\) for each \(y\in C_{Y}\).
Suppose now that \(\mathcal{C}\) is any deterministic \(n-\)length code of size at most \(2^{n(H_{XY}-H_{Y}-3\epsilon)}\) and let \(f\) be any \(Y-\)dependent encoding scheme based on \(\mathcal{C}\) as in Definition 1.
As in (2.18), we have for each \(y\in C_{Y}\) that
\[\mathbb{P}_{y}\left(f_{y}(\mathcal{C})\right) =\sum_{x\in f_{y}(\mathcal{C})}p_{U\mid V}(x\mid y)\] \[\leq 2^{n(H_{XY}-H_{Y}-3\epsilon)}\cdot 2^{-n(H_{XY}-H_{Y}-2 \epsilon)}\] \[=\frac{1}{2^{n\epsilon}} \tag{2.22}\]
where the inequality in (2.22) follows from (2.21). Therefore
\[\mathbb{P}\left(U\in f_{V}(\mathcal{C})\right) \leq\mathbb{P}(V\notin C_{Y})+\sum_{y\in C_{Y}}p_{V}(y)\frac{1}{2^{n \epsilon}}\] \[\leq 3\epsilon+\frac{1}{2^{n\epsilon}}\] \[\leq 4\epsilon \tag{2.23}\]
for all \(n\) large. Thus the error probability \(q(f,\mathcal{C})\) for any encoding scheme \((f,\mathcal{C})\) is at least \(1-4\epsilon\) for all \(n\) large. This obtains the lower bound in (2.6) and therefore completes the proof of the Theorem.
#### Data Storage With Feature Selection
Let \(Z=(Z(1),\ldots,Z(m))\) be a random element chosen from some set \(\mathcal{Z}\). We refer to the indices \(i=1,2,\ldots,m\) as features and use statistical tests [9] to determine whether \(Z(i)\) and \(Z(j)\) are correlated and obtain the dependency graph \(G_{dep}\). The vertex set of \(G_{dep}\) is \(\{1,2,\ldots,m\}\) and an edge with endvertices \(i\) and \(j\) belongs to \(G_{dep}\) if and only if \(Z(i)\) and \(Z(j)\) are determined to be _correlated_.
The rough idea behind the dependency graph is that if \(i\) and \(j\) are neighbours in \(G_{dep}\), then \(Z(i)\) and \(Z(j)\) are highly correlated and we should lose little information if we were to discard either feature \(i\) or feature \(j\). In effect, given \(G_{dep}\) and an integer \(1\leq d\leq m\), we would like to obtain a subset \(S\), containing at most \(d\) features, that satisfies the following properties (a greedy sketch is given after the list):
\((i)\) The features present in \(S\) are nearly uncorrelated.
\((ii)\) Each feature not present in \(S\) is highly correlated with some feature present in \(S\).
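In graph terms, a set \(S\) satisfying \((i)\) and \((ii)\) is an independent dominating set of \(G_{dep}\). The following greedy sketch (the correlation threshold, the synthetic data and the function names are illustrative choices, not prescriptions from the text) constructs such a set:

```python
import numpy as np

def dependency_graph(data, threshold=0.7):
    # Adjacency matrix of G_dep: features i ~ j when the absolute
    # Pearson correlation exceeds the (illustrative) threshold.
    corr = np.corrcoef(data, rowvar=False)
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)
    return adj

def greedy_feature_set(adj, d):
    # Greedily build an independent dominating set S of G_dep with at most
    # d features: members of S are pairwise non-adjacent (property (i)) and
    # every discarded feature is adjacent to some member of S (property (ii)).
    m = adj.shape[0]
    S, covered = [], np.zeros(m, dtype=bool)
    while not covered.all() and len(S) < d:
        gains = [(adj[v] & ~covered).sum() if not covered[v] else -1
                 for v in range(m)]
        v = int(np.argmax(gains))
        S.append(v)
        covered[v] = True
        covered |= adj[v]
    return S

rng = np.random.default_rng(3)
base = rng.normal(size=(500, 4))
data = np.repeat(base, 2, axis=1) + 0.1 * rng.normal(size=(500, 8))
print(greedy_feature_set(dependency_graph(data), d=4))   # one feature per pair
```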
For simplicity assume that using standard statistical tests (like for e.g. filters or wrappers, see Chapter 19, [9]), we have determined \(S=\{1,2,\ldots,d\}\) to be the "best" feature set and set \(X:=(Z(1),\ldots,Z(d))\) and \(Y=(Z(d+1),\ldots,Z(m))\). Using information from both \(X\) and \(Y\), we would now like to find an encoding scheme that allows us to store \(X\) using as few bits as possible. Indeed, suppose we have \(n\) data points \((X_{i},Y_{i}),1\leq i\leq n\) and (2.4) holds together with \(\frac{1}{n}\sum_{i=1}^{n}H(X_{i})\longrightarrow H_{X}\) for some \(H_{X}>0\). Applying Theorem 1 with \(\mathcal{Y}=\emptyset\) we see that the relevant information \((X_{1},\ldots,X_{n})\) can be stored using a binary code of length roughly \(nH_{X}\), with very small encoding error and without using any information from \((Y_{1},\ldots,Y_{n})\).
On the other hand, if we use the "side" information \((Y_{1},\ldots,Y_{n})\), then we can store \((X_{1},\ldots,X_{n})\) using a binary code of length roughly \(n(H_{XY}-H_{Y})\). Using the fact that \(H(X_{i},Y_{i})\leq H(X_{i})+H(Y_{i})\), we see that \(H_{XY}\leq H_{X}+H_{Y}\) and so \(1-\frac{H_{XY}-H_{Y}}{H_{X}}\) could be interpreted as the "savings" obtained via dependent encoding resulting in "smart" storage of data.
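For a quick numerical illustration of these savings in the i.i.d. case, the sketch below (the joint pmf is an illustrative choice) computes \(H_{X}\), \(H_{Y}\), \(H_{XY}\) and the resulting savings fraction:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Illustrative joint pmf of (X, Y) on {0,1} x {0,1}: X = Y with probability 0.9.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

H_XY = entropy(p_xy.ravel())
H_X  = entropy(p_xy.sum(axis=1))
H_Y  = entropy(p_xy.sum(axis=0))

rate_plain = H_X            # bits per symbol ignoring Y
rate_dep   = H_XY - H_Y     # bits per symbol with Y-dependent encoding
print(f"H(X) = {H_X:.3f}, H(X,Y) - H(Y) = {rate_dep:.3f}")
print(f"savings = {1 - rate_dep / rate_plain:.1%}")
```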
## 3 Neighbourhood Domination based Undersampling
We begin with the definition and properties of neighbourhood domination in graphs and at the end of this section, explain its applications to data undersampling.
Let \(K_{n}\) be the complete graph with vertex set \(\{1,2,\ldots,n\}\) and let \(H\) be any deterministic subgraph of \(K_{n}\). We say that \(\mathcal{T}\subset\{1,2,\ldots,n\}\) is a _dominating_ set of \(H\) if each vertex \(v\) is either present in \(\mathcal{T}\) or is adjacent in \(H\) to some vertex \(u\in\mathcal{T}\). Let \(N(v)\) be the set of neighbours of \(v\) in the graph \(H\) and let \(d(v)=\#N(v)\). For \(0<\theta<1\), we say that \(\mathcal{T}\) is a \(\theta-\)_neighbourhood dominating_ set of \(H\) if for each vertex \(v\), there are at least \(\theta d(v)\) vertices of \(N(v)\) present in \(\mathcal{T}\).
Let \(M_{n}=M_{n}(\theta)\) be the minimum size of a \(\theta-\)neighbourhood dominating set of \(H\). In the following result, we obtain an upper bound for \(M_{n}\) with conditions on the minimum and maximum vertex degree of \(H\) and then show that the bound is essentially optimal, using random graphs. Formally, let \(Y(f),f\in K_{n}\) be independent random variables indexed by the edge set of \(K_{n}\) and having distribution
\[\mathbb{P}(Y(f)=1)=p=1-\mathbb{P}(Y(f)=0)\]
where \(0<p=p(n)<1\). Let \(G\) be the homogenous random subgraph of \(K_{n}\) formed by the set of all edges \(f\) satisfying \(Y(f)=1\).
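Checking the \(\theta-\)neighbourhood domination condition for a given vertex set is straightforward; a minimal sketch (boolean adjacency matrices; the parameters in the usage example are illustrative) is:

```python
import numpy as np

def is_theta_dominating(adj, T, theta):
    # T is a theta-neighbourhood dominating set of the graph with boolean
    # adjacency matrix adj when every vertex v has at least theta * d(v)
    # of its neighbours inside T.
    in_T = np.zeros(adj.shape[0], dtype=bool)
    in_T[list(T)] = True
    deg = adj.sum(axis=1)
    return bool(((adj & in_T).sum(axis=1) >= theta * deg).all())

# Usage on a realization of the homogenous random graph G:
rng = np.random.default_rng(4)
n, p, theta = 200, 0.2, 0.3
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = upper | upper.T
T = np.flatnonzero(rng.random(n) < 0.6)   # keep a fraction well above theta
print(is_theta_dominating(adj, T, theta))
```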
We have the following bounds for \(M_{n}\) in terms of the degree parameters of the deterministic graph \(H\) and the random graph realization \(G\).
Theorem 3.1: _We have: \((a)\) Let \(\Delta\) and \(\delta\) respectively denote the maximum and minimum vertex degree in \(H\) and for \(\eta>0\) let \(z:=\frac{\eta^{2}(\theta+\eta)}{4}\). Any \(\theta-\)neighbourhood dominating set of \(H\) has size at least \(\frac{\theta\delta n}{2\Delta^{2}}\) and if_
\[4\Delta^{2}e^{-z\delta}\leq 1\text{ and }e^{-z}+2e^{-z\delta}<1 \tag{3.1}\]
_strictly, then there is a \(\theta-\)neighbourhood dominating set of size at most \((\theta+2\eta)n\). \((b)\) For every \(\epsilon>0\) there are constants \(M,D>0\) such that if \(p\geq\frac{M\log n}{n}\), then_
\[\mathbb{P}\left((\theta-\epsilon)n\leq M_{n}\leq(\theta+\epsilon)n\right)\geq 1 -e^{-Dnp}. \tag{3.2}\]
In other words, with high probability, i.e. with probability converging to one as \(n\to\infty\), we see that the size \(M_{n}\) of a \(\theta-\)neighbourhood dominating set is roughly of the order of \(\theta n\).
In the context of data analysis, the vertices of \(K_{n}\) represent data points and for practical reasons explained at the end of this section, it is often important to undersample or reduce the data set size to a fraction \(\theta n\) while allowing for enough neighbours per data point. Theorem 2 ensures that there exists such a set under the sufficient condition (3.1), which usually holds in most cases of interest, as demonstrated at the end of this section.
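A minimal sketch of this undersampling recipe (synthetic two-dimensional data; the values of \(k\), \(\theta\) and \(\eta\) are illustrative assumptions): the minority class is kept in full, while the majority class is thinned by the random selection used in the proof of Theorem 2, resampling until the \(\theta-\)neighbourhood condition holds.

```python
import numpy as np

rng = np.random.default_rng(5)

# Imbalanced 2-d data: 400 majority points and 40 minority points.
maj = rng.normal(0.0, 1.0, size=(400, 2))
mino = rng.normal(2.0, 0.5, size=(40, 2))   # minority class, kept in full

# k-nearest-neighbour graph on the majority class (k = 60, symmetrised).
k = 60
d2 = ((maj[:, None, :] - maj[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nbrs = np.argsort(d2, axis=1)[:, :k]
adj = np.zeros((len(maj), len(maj)), dtype=bool)
adj[np.arange(len(maj))[:, None], nbrs] = True
adj |= adj.T
deg = adj.sum(axis=1)

# Keep each majority point independently with probability x = theta + eta and
# resample until every point retains at least theta * d(v) of its neighbours.
theta, eta = 0.25, 0.25
while True:
    keep = rng.random(len(maj)) < theta + eta
    if ((adj & keep).sum(axis=1) >= theta * deg).all():
        break
print(f"kept {keep.sum()} of {len(maj)} majority points")
```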
In our proof of Theorem 3.1 below, we use the following results regarding deviation estimates for sums of independent Bernoulli random variables and the Lovász Local Lemma, which we state together as a Lemma for convenience.
Lemma 1: \((a)\) _Let \(\{W_{j}\}_{1\leq j\leq r}\) be independent Bernoulli random variables satisfying \(\mathbb{P}(W_{j}=1)=1-\mathbb{P}(W_{j}=0)>0\). If \(S_{r}:=\sum_{j=1}^{r}W_{j},\theta_{r}:=\mathbb{E}S_{r}\) and \(0<\gamma\leq\frac{1}{2}\), then_
\[\mathbb{P}\left(|S_{r}-\theta_{r}|\geq\theta_{r}\gamma\right)\leq 2\exp\left( -\frac{\gamma^{2}}{4}\theta_{r}\right) \tag{3.3}\]
_for all \(r\geq 1\). \((b)\) Let \(A_{1},\ldots,A_{t}\) be events in an arbitrary probability space. Let \(\Gamma\) be the dependency graph for the events \(\{A_{i}\},\) with vertex set \(\{1,2,\ldots,t\}\) and edge set \(\mathcal{E};\) i.e. assume that each \(A_{i}\) is independent of the family of events \(A_{j},(i,j)\notin\mathcal{E}\). If there are reals \(0\leq y(i)<1\) such that \(\mathbb{P}(A_{i})\leq y(i)\prod_{(i,j)\in\mathcal{E}}(1-y(j)),\) for each \(i,\) then_
\[\mathbb{P}\left(\bigcap_{i}A_{i}^{c}\right)\geq\prod_{1\leq i\leq t}(1-y(i))>0.\]
For proofs of Lemma 1\((a)\) and \((b)\), we refer respectively to Corollary A.1.14, pp. 312 and Lemma 5.1.1, pp. 64 of [1].
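As a quick sanity check, the deviation estimate (3.3) can be verified numerically; the following sketch (all parameter values are illustrative choices) compares the empirical deviation probability of a Binomial sum with the stated bound.

```
import numpy as np

# Monte Carlo check of (3.3) for i.i.d. Bernoulli(q) variables W_1,...,W_r.
rng = np.random.default_rng(1)
r, q, gamma, trials = 500, 0.3, 0.25, 20000
theta_r = r * q                                    # E S_r
S = rng.binomial(r, q, size=trials)                # samples of S_r
empirical = np.mean(np.abs(S - theta_r) >= gamma * theta_r)
bound = 2 * np.exp(-(gamma ** 2 / 4) * theta_r)
print(empirical, bound)                            # empirical <= bound
```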
_Proof of Theorem 3.1\((a)\)_: Say that a set \(V\) of vertices is \(3-\)far if the graph distance (i.e. the number of edges in the shortest path) between any two vertices of \(V\) is at least \(3\). For any vertex \(v\), there are at most \(\Delta^{2}\) vertices at a distance \(2\) from \(v\), where \(\Delta\) is the maximum vertex degree in \(H\). Therefore, from Theorem 3.2.1, pp. 27 of [1], we know that the graph \(H\) contains a \(3-\)far set \(\mathcal{T}\) of size at least \(\frac{n}{2\Delta^{2}}\). For any two vertices \(u,v\in\mathcal{T}\), the corresponding neighbourhoods are disjoint; i.e. \(N(u)\cap N(v)=\emptyset\) and so
\[M_{n}\geq\sum_{v\in\mathcal{T}}\theta d(v)\geq\frac{\theta\delta n}{2\Delta^{ 2}}.\]
For the upper bound on \(M_{n}\), we use the probabilistic method. Select each vertex of \(H\) with probability \(x\), independently of the other vertices, as follows. Let \(Z_{j},1\leq j\leq n\) be independent and identically distributed (i.i.d.) Bernoulli random variables, with
\[\mathbb{P}_{Z}(Z_{j}=1)=x=1-\mathbb{P}_{Z}(Z_{j}=0)\]
and let \(S=\{v:Z_{v}=1\}\) be the random set of chosen vertices. From the standard deviation estimate (3.3), we get that
\[\mathbb{P}_{Z}\left(\#S\leq(x+\eta)n\right)\geq 1-\exp\left(-\frac{\eta^{2}}{4} xn\right), \tag{3.4}\]
where \(\#S\) is the cardinality of \(S\).
Our goal is to show that if \(x\) and \(\eta\) are chosen appropriately, then \(S\) is a \(\theta-\)neighbourhood dominating set with positive \(\mathbb{P}_{Z}-\)probability. Let \(N_{Z}(v)=\{u\sim v:Z_{u}=1\}\) be the set of all vertices adjacent to \(v\) in \(H\) that are also present in \(S\), and let \(A_{v}\) be the event that \(\#N_{Z}(v)\geq(x-\eta)d(v)\). From the standard deviation estimate (3.3) we get that
\[\mathbb{P}_{Z}(A_{v})\geq 1-\exp\left(-\frac{\eta^{2}xd(v)}{4}\right)\geq 1-\alpha \tag{3.5}\]
where \(\alpha:=\exp\left(-\frac{\eta^{2}x}{4}\delta\right)\) and \(\delta\) is the minimum vertex degree in \(H\).
The events \(A_{v}\) and \(A_{u}\) are independent if the graph distance between \(u\) and \(v\) in \(H\) is at least \(3\); hence, if \(\Delta\) denotes the maximum vertex degree in \(H\), then each \(A_{v}\) depends on at most \(\Delta^{2}\) of the events \(\{A_{u}\}\), whose index set we denote by \(\mathcal{E}(v)\). This allows us to use the Lovász Local Lemma in Lemma 1\((b)\) under the assumption that
\[4\Delta^{2}\alpha\leq 1. \tag{3.6}\]
Indeed, setting \(y(v)=2\alpha\) we see that
\[y(v)\prod_{u\in\mathcal{E}(v)}(1-y(u))=2\alpha(1-2\alpha)^{\Delta^{2}}\geq 2 \alpha(1-2\alpha\Delta^{2})\geq\alpha\geq\mathbb{P}_{Z}(A_{v}^{c}) \tag{3.7}\]
where the third relation in (3.7) is true by (3.6) and the final relation in (3.7) follows from (3.5).
From Lemma 1\((b)\), we therefore see that \(\mathbb{P}_{Z}\left(\bigcap_{v}A_{v}\right)\geq\left(1-2\alpha\right)^{n}.\) Choosing \(x-\eta=\theta\) and combining with (3.4), we then obtain that \(S\) is a \(\theta-\)neighbourhood dominating set of size at most \((\theta+2\eta)n\), with \(\mathbb{P}_{Z}-\)probability at least
\[(1-2\alpha)^{n}-\exp\left(-\frac{\eta^{2}}{4}xn\right)>0\]
if
\[\exp\left(-\frac{\eta^{2}}{4}x\right)\leq 1-2\alpha-y \tag{3.8}\]
for some small constant \(y>0\). Since \(x=\theta+\eta\), we have \(\alpha=e^{-z\delta}\) and \(\exp\left(-\frac{\eta^{2}}{4}x\right)=e^{-z}\), so the conditions (3.6) and (3.8) are guaranteed by (3.1). This completes the proof of part \((a)\) of the Theorem.
_Proof of Theorem 3.1\((b)\)_: Here the neighbourhood \(N(v)\) of a vertex \(v\) is the set of all vertices adjacent to \(v\) in \(G\), and the degree of \(v\) is defined as \(d(v)=\#N(v)\), the cardinality of \(N(v)\).
For the upper bound, we use the conditions in the Theorem statement to show that \(G\) satisfies both conditions (3.6) and (3.8) with high probability, i.e. with probability converging to one as \(n\rightarrow\infty\). The expected degree of any vertex \(v\) is \((n-1)p\), and so using the deviation estimate (3.3), we see that
\[\mathbb{P}\left(d(v)\geq\frac{np}{2}\right)\geq 1-e^{-Cnp}\]
for some constant \(C>0\). Setting \(E_{deg}:=\bigcap_{v=1}^{n}\left\{d(v)\geq\frac{np}{2}\right\}\) and using the union bound, we then get that
\[\mathbb{P}\left(E_{deg}\right)\geq 1-ne^{-Cnp}\geq 1-ne^{-CM\log n}\geq 1-\frac{1}{n}, \tag{3.9}\]
provided \(M\) is chosen sufficiently large. We fix such an \(M\) henceforth and see that if \(E_{deg}\) occurs, then the minimum vertex degree in \(G\) is at least \(\frac{np}{2}\geq\frac{M\log n}{2}\). For all \(n\) large, this implies that (3.8) holds, provided we fix \(y\) small enough. Also, since the maximum vertex degree satisfies \(\Delta\leq n\), we can choose \(M\) larger if necessary and ensure that (3.6) also holds. In effect, we see that if \(E_{deg}\) holds, then there is a \(\theta-\)neighbourhood dominating set of \(G\) containing at most \((\theta+2\eta)n\) vertices, and this establishes the upper bound in (3.2).
We prove the lower bound as follows. For any vertex \(v\) the expected degree \(\mathbb{E}d(v)=(n-1)p\) and so using the standard deviation estimate (3.3), we get for \(\epsilon>0\) that
\[\mathbb{P}\left(d(v)\geq np(1-\epsilon)\right)\geq 1-e^{-Cnp}\]
for some constant \(C>0\). Letting \(E_{deg}:=\bigcap_{v=1}^{n}\left\{d(v)\geq np(1-\epsilon)\right\},\) we then get from the union bound that
\[\mathbb{P}(E_{deg})\geq 1-ne^{-C_{1}np} \tag{3.10}\]
for some constant \(C_{1}>0\).
Next, let \(S\) be any set containing \(\zeta n\) vertices with \(\zeta<\theta\) strictly and let \(m(S)\) be the number of edges of \(G\) having both endvertices in \(S\). We know that
\[\frac{\zeta^{2}n^{2}p}{2}\geq\mathbb{E}m(S)=\binom{\zeta n}{2}p\geq\frac{ \zeta^{2}n^{2}p}{4}\]
and so by the deviation estimate (3.3), we get that
\[\mathbb{P}\left(m(S)\leq\frac{\zeta^{2}n^{2}p}{2}\right)\geq 1-e^{-C_{2}n^{2}p} \tag{3.11}\]
for some constant \(C_{2}>0\). The number of choices for \(S\) is at most \(2^{n}\) and so letting \(E_{edge}\) be the event that \(m(S)\leq\frac{\zeta^{2}n^{2}p}{2}\) for each \(S\) containing \(\zeta n\) vertices, we get from the union bound and (3.11) that
\[\mathbb{P}(E_{edge})\geq 1-2^{n}e^{-C_{2}n^{2}p}. \tag{3.12}\]
We assume henceforth that \(E_{deg}\cap E_{edge}\) occurs, which by the union bound (3.10) and (3.12) happens with probability
\[\mathbb{P}(E_{deg}\cap E_{edge})\geq 1-ne^{-C_{1}np}-2^{n}e^{-C_{2}n^{2}p}\geq 1-e^{-C_{3}np} \tag{3.13}\]
for some constant \(C_{3}>0\) provided \(np>C_{4}\log n\) for some sufficiently large constant \(C_{4}>0\). Let \(S\) be any set of \(\zeta n\) vertices. Because \(m(S)\leq\frac{\zeta^{2}n^{2}p}{2}\) and
the sum of the vertex degrees equals twice the number of edges, there exists a vertex \(z\in S\) that is adjacent to at most
\[\zeta np=\frac{\zeta}{1-\epsilon}np(1-\epsilon)\leq\frac{\zeta}{1-\epsilon}d(z)\]
vertices of \(S\), since \(E_{deg}\) also occurs. We can choose \(\epsilon\) small enough so that \(\frac{\zeta}{1-\epsilon}<\theta\) strictly and obtain that no set containing \(\zeta n\) vertices is a \(\theta-\)neighbourhood dominating set of \(G\). Thus \(M_{n}\geq\zeta n\) and from (3.13) we then get the lower bound in (3.2). This completes the proof of the Theorem.
### 3.1 Data Undersampling
In many data classification problems involving two classes, there is often a large imbalance between the sizes of the majority and minority classes, and this adversely affects the classwise accuracy of common predictive models like \(k-\)Nearest Neighbour (\(kNN\)), Random Forest, Logistic Regression or Neural Networks [7, 10, 4]. Therefore it is important to undersample, i.e. reduce the size of the majority class and make it comparable to the size of the minority class. Many different undersampling methods have been proposed in the literature, like random undersampling, near miss, condensed nearest neighbour, etc.; for more on undersampling techniques in data analysis, we refer to [13], [11].
In this paper, we use neighbourhood domination as an undersampling methodology to obtain a balanced data set of given size. Indeed, suppose we are given \(n\) data points from the majority class and \(\theta n\) data points from the minority class for some \(0<\theta<1\). Instead of randomly selecting \(\theta n\) points from the majority class, we could extract a more "representative" set with the additional assumption that there is a distance metric \(d(u,v)\) between two majority class data points \(u\) and \(v\).
Henceforth any vertex represents a data point belonging to the majority class. For a vertex \(v\), let \(N_{k}(v)\) be the set of \(k-\)nearest neighbours of \(v\), according to the metric \(d\). Let \(H\) be the directed graph obtained by representing the data points as vertices and drawing a directed edge from vertex \(u\) to vertex \(v\) if \(v\in N_{k}(u)\). The same analysis as in Theorem 3.1\((a)\) holds with \(\delta=\Delta=k\), and so if (3.1) holds then there is a \(\theta-\)neighbourhood dominating set of size at most \((\theta+2\eta)n\).
Setting \(k=M\log n\) for some large enough constant \(M\), we see that (3.1) is satisfied, and so there exists a \(\theta-\)neighbourhood dominating set \(S\) of size at most \((\theta+2\eta)n\). The advantage of using \(S\) is that any vertex \(v\) still has at least \(\theta k\) neighbours from the set of its \(k-\)nearest neighbours, and this would greatly aid classification, say, for example, using \(kNN\) type rules.
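A minimal sketch of this undersampling procedure (assuming scikit-learn's NearestNeighbors; all other names are ours) follows the random-selection construction from the proof of Theorem 3.1\((a)\):

```
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_undersample(X_major, theta, eta, k, rng):
    # Build the k-NN graph H on the majority class and keep each point
    # independently with probability x = theta + eta, as in the proof of
    # Theorem 3.1(a). Returns the kept indices and, for each point, the
    # fraction of its k nearest neighbours that were retained.
    n = len(X_major)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_major)
    _, idx = nbrs.kneighbors(X_major)
    idx = idx[:, 1:]                       # drop self; idx[v] = N_k(v)
    keep = rng.random(n) < theta + eta     # the random set S
    frac = keep[idx].mean(axis=1)          # fraction of N_k(v) inside S
    return np.flatnonzero(keep), frac

rng = np.random.default_rng(0)
X_major = rng.normal(size=(2000, 5))
kept, frac = knn_undersample(X_major, theta=0.3, eta=0.05, k=25, rng=rng)
print(len(kept), (frac >= 0.3).mean())  # most points keep >= theta k neighbours
```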
## 4 Conclusion
In this paper we have used a probabilistic approach towards data storage with feature selection and undersampling. We first obtained a Slepian-Wolf type result for nonstationary data and thereby demonstrated savings in data storage
obtained by using information from the discarded features. We then considered neighbourhood domination in random graphs and explained how our methodology could be used to perform constrained undersampling in imbalanced datasets.
For the future we plan to apply our undersampling method to real world datasets and study the resulting minority class accuracy. We also would like to develop practical encoding schemes that use information from the discarded features and therefore result in smart data storage.
#### Acknowledgements
I thank Professors Rahul Roy, Thomas Mountford, Federico Camia, Alberto Gandolfi, Lasha Ephremidze and C. R. Subramanian for crucial comments that led to an improvement of the paper. I also thank IMSc and IISER Bhopal for my fellowships.
|
2309.03440 | Punctate White Matter Lesion Segmentation in Preterm Infants Powered by
Counterfactually Generative Learning | Accurate segmentation of punctate white matter lesions (PWMLs) is
fundamental for the timely diagnosis and treatment of related developmental
disorders. Automated PWMLs segmentation from infant brain MR images is
challenging, considering that the lesions are typically small and low-contrast,
and the number of lesions may dramatically change across subjects. Existing
learning-based methods directly apply general network architectures to this
challenging task, which may fail to capture detailed positional information of
PWMLs, potentially leading to severe under-segmentations. In this paper, we
propose to leverage the idea of counterfactual reasoning coupled with the
auxiliary task of brain tissue segmentation to learn fine-grained positional
and morphological representations of PWMLs for accurate localization and
segmentation. A simple and easy-to-implement deep-learning framework (i.e.,
DeepPWML) is accordingly designed. It combines the lesion counterfactual map
with the tissue probability map to train a lightweight PWML segmentation
network, demonstrating state-of-the-art performance on a real-clinical dataset
of infant T1w MR images. The code is available at
\href{https://github.com/ladderlab-xjtu/DeepPWML}{https://github.com/ladderlab-xjtu/DeepPWML}. | Zehua Ren, Yongheng Sun, Miaomiao Wang, Yuying Feng, Xianjun Li, Chao Jin, Jian Yang, Chunfeng Lian, Fan Wang | 2023-09-07T01:46:17Z | http://arxiv.org/abs/2309.03440v1 | Punctate White Matter Lesion Segmentation in Preterm Infants Powered by Counterfactually Generative Learning
###### Abstract
Accurate segmentation of punctate white matter lesions (PWMLs) is fundamental for the timely diagnosis and treatment of related developmental disorders. Automated PWML segmentation from infant brain MR images is challenging, considering that the lesions are typically small and low-contrast, and the number of lesions may change dramatically across subjects. Existing learning-based methods directly apply general network architectures to this challenging task, which may fail to capture detailed positional information of PWMLs, potentially leading to severe under-segmentations. In this paper, we propose to leverage the idea of counterfactual reasoning coupled with the auxiliary task of brain tissue segmentation to learn fine-grained positional and morphological representations of PWMLs for accurate localization and segmentation. A simple and easy-to-implement deep-learning framework (i.e., DeepPWML) is accordingly designed. It combines the lesion counterfactual map with the tissue probability map to train a lightweight PWML segmentation network, demonstrating state-of-the-art performance on a real-clinical dataset of infant T1w MR images. The code is available at [https://github.com/ladderlab-xjtu/DeepPWML](https://github.com/ladderlab-xjtu/DeepPWML).
## 1 Introduction
Punctate white matter lesion (PWML) is a typical type of cerebral white matter injury in preterm infants, potentially leading to psychomotor developmental delay, motor delay, and cerebral palsy without timely treatment [2]. The early detection and quantitative analysis of PWMLs are critical for diagnosis and treatment, especially considering that some PWML subtypes are only detectable by magnetic resonance imaging (MRI) shortly after birth (e.g., around the third week) and become invisible thereafter [6]. PWMLs are small targets, typically located anterior or adjacent to the ventricles [8]. Manually annotating them in MR images is very time-consuming and relies on expertise. Thus there is an urgent need from neuroradiologists to develop reliable and fully automatic methods for 3D PWML segmentation.
Automated localization and delineation of PWMLs are practically challenging. This is mainly because PWMLs are isolated small objects, with typically only dozens of voxels per lesion and varying numbers of lesions across different subjects. Also, due to the underlying immature myelination of infant brains [1], the tissue-to-tissue and lesion-to-tissue contrasts are both very low, especially in the T1w MR images commonly used in clinical practice. In addition to conventional methods based on thresholding [3] or stochastic likelihood estimation [4], recent works attempted to apply advanced deep neural networks to the specific task of PWML segmentation [12, 10, 9]. For example, Liu _et al._[10] extended Mask R-CNN [7] to detect and segment PWMLs in 2D image slices. Li _et al._[9] implemented a 3D ResU-Net to segment diffuse white matter abnormality from T2w images. Overall, these existing learning-based methods usually use general network architectures. They may fail to completely capture the fine-grained positional information needed to localize small and low-contrast PWMLs, potentially resulting in severe under-segmentation.
Counterfactual reasoning, in the context of our task, studies how a real clinical brain image appearance (the factual) would change in a hypothetical scenario (whether a lesion exists or not). This idea has been realized as structural causal models (SCMs) in deep learning in recent years. At the theoretical level, Monteiro _et al._[11] presented a theoretically grounded framework to evaluate counterfactual inference models. Owing to the advantage of being verifiable, this idea has appeared in many medical scenarios. Pawlowski _et al._[14] proposed a general framework for building SCMs and validated it on an MNIST-like dataset and a brain MRI dataset. Reinhold _et al._[15] developed an SCM that generates images to show what an MR image would look like if demographic or disease covariates were changed. In this paper, we propose a fully automatic deep-learning framework (DeepPWML) that leverages counterfactual reasoning coupled with location information from brain tissue segmentation to capture fine-grained positional information for PWML localization and segmentation. Specifically, based on patch-level weak supervision, we design a counterfactual reasoning strategy that learns voxel-wise residual maps to manipulate the classification labels of input patches (i.e., containing PWMLs or not). In turn, such fine-grained residual maps can capture the spatial locations and morphological patterns of potential lesions. In this article, we refer to this residual map as a counterfactual map; note that this usage may differ from the meaning of "counterfactual map" in other articles. To refine the information learned by the counterfactual part, we further include brain tissue segmentation as an auxiliary task. Given the fact that PWMLs have specific spatial correlations with different brain tissues, the segmentation probability maps (and the inherent location information) can provide a certain level of anatomical context to assist lesion identification. Finally,
by using the counterfactual maps and segmentation probability maps as auxiliary inputs, we learn a lightweight sub-network for PWML segmentation.
Overall, our DeepPWML is practically easy to implement, as the counterfactual part learns simple but effective linear manipulations, the tissue segmentation part can adopt any off-the-shelf network, and the PWML segmentation part only needs a lightweight design. On a real clinical dataset, our method achieved state-of-the-art performance on the infant PWML segmentation task.
## 2 Method
As shown in Fig. 1, our DeepPWML consists of four parts, i.e., the tissue segmentation module (T-SEG), the classification module (CLS), the counterfactual map generator (CMG), and the PWML segmentation module (P-SEG). Specifically, in the training stage, T-SEG is learned on control data, while the other modules are
Figure 1: Overview of the training and test steps of our DeepPWML framework that consists of four components, i.e., T-SEG, CLS, CMG, and P-SEG modules.
learned on PWML data. Given an image patch as the input, CLS is trained to distinguish positive (containing PWMLs) from negative (no PWMLs) cases, based on which CMG is further trained to produce a counterfactual map that linearly manipulates the input to change the CLS result.
The high-resolution counterfactual maps (CF maps) and segmentation probability maps (SP maps) are further combined with the input patch to train a lightweight P-SEG for PWML segmentation. In the test stage, an input patch is first determined by the CLS module to be positive or negative. Positive inputs pass through the T-SEG, CMG, and P-SEG modules to get the PWML segmentation results. It is worth noting that the test patches are generated by sliding windows, and the overlapping results are averaged to get the final segmentation result for the image, which reduces the impact of incorrect classifications by the CLS module. In our experiments, T-SEG used the voxel-wise cross-entropy loss, CLS used the categorical cross-entropy loss, CMG combined the sparsity loss (L1 and L2 norms) with the classification loss, and finally, P-SEG used the Dice loss. In the following subsections, we introduce each module in our design.
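For illustration, the sliding-window averaging described above could be implemented along the following lines (a minimal NumPy sketch; `predict_fn` is a hypothetical stand-in for the trained CLS/T-SEG/CMG/P-SEG pipeline):

```
import numpy as np

def sliding_window_segment(volume, predict_fn, patch=32, stride=16):
    # Slide a cubic window over the volume, run the pipeline on each patch,
    # and average the overlapping per-voxel predictions.
    out = np.zeros(volume.shape, dtype=np.float32)
    cnt = np.zeros(volume.shape, dtype=np.float32)
    D, H, W = volume.shape
    for z in range(0, D - patch + 1, stride):
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                out[sl] += predict_fn(volume[sl])
                cnt[sl] += 1.0
    return out / np.maximum(cnt, 1.0)  # average where windows overlap
```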
### Tissue Segmentation Module
The task is to mark every pixel of the brain as cerebrospinal fluid (CSF), gray matter (GM), or white matter (WM). The choice of this module is flexible, and there are many off-the-shelf architecture designs available. We adopt a simple Dense-Unet architecture [16] for the T-SEG module. It is trained on images of control premature infants. This module outputs the SP map, from which the segmentation result can be obtained; the SP map therefore naturally contains some level of anatomical information. Moreover, when an input with PWMLs goes through a network that has only been trained on control data, the segmentation mistakes are partly due to the existence of PWMLs. Therefore, this module can output an SP map carrying both potential location and anatomy guidance for PWML localization and segmentation.
### Classification Module and Counterfactual Map Generator
The CLS and the CMG are trained sequentially. The CLS is trained to determine whether the current patches contain lesions. The CMG is a counterfactual reasoning step for the CLS. Based on the characteristics of PWMLs, the CMG learns a simple linear sparse transform, shown as the CF map. This map aims to offset the bright PWML pixels of the image patches classified as positive, or to seed PWMLs on the patches judged as negative. In other words, the CMG is learning a residual activation map for conversion between control and PWML patches. We adopt the T-SEG module's encoder with two fully connected layers as the CLS module. Furthermore, the architecture of the CMG is a simple U-net with a "switch" state added in its skip-connection parts, following the method of Oh _et al._[13]. Based on the nature of PWMLs, the last layer of the CMG uses ReLU activation to ensure that the generated CF map is a positive activation.
The state of the "switch" is determined by the classifier's result on the current patch. If the judgement is positive, the "switch" status is 0. In this condition, the activated areas in the CF map should be where PWMLs exist. Then the pseudo patches in Fig. 1, obtained by subtracting the CF map from the input patches, should be judged as negative by the fixed CLS. The other state of the "switch" is used to generate PWMLs. When the CLS judgement is negative, the "switch" status is 1. In this situation, the input patches combined with the CF map should be classified as positive. This training strategy lets the CMG learn PWML features better. In the test phase, the switch status is fixed to 0, because at test time the CF map only needs to capture PWMLs.
The CMG module is summarised as follows: Firstly, PWML patches \(C_{P}\) and control patches \(C_{N}\) are fed to the encoder to obtain encoded representations \(F_{P}\) and \(F_{N}\):
\[F_{P} =\text{Encoder}(C_{P}), \tag{1}\] \[F_{N} =\text{Encoder}(C_{N}). \tag{2}\]
Secondly, "switch" tensors filled with zeros/ones, of the same size as the PWML/normal representations \(F_{P}/F_{N}\), are added to these representations, which then pass through the decoder to obtain the CF maps \(M_{P}/M_{N}\):
\[M_{P} =\text{Decoder}(F_{P}+Zeros), \tag{3}\] \[M_{N} =\text{Decoder}(F_{N}+Ones). \tag{4}\]
Finally, the CF maps \(M_{P}/M_{N}\) are subtracted from/added to the original patches \(C_{P}/C_{N}\) to yield the transformed patches \(\widetilde{C}_{P}/\widetilde{C}_{N}\), which are classified by the CLS module as the opposite classes:

\[\widetilde{C}_{P} =C_{P}-M_{P}, \tag{5}\] \[\widetilde{C}_{N} =C_{N}+M_{N}. \tag{6}\]
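In schematic form (a minimal NumPy sketch of the patch transformation in Eqs. (5)-(6), with the switch chosen by the CLS decision; not the released implementation):

```
import numpy as np

def transform_patch(patch, cf_map, is_positive):
    # The ReLU-activated CF map is non-negative; it is subtracted from
    # positive patches (switch = 0) so they look lesion-free, and added to
    # negative patches (switch = 1) so they look lesioned.
    cf_map = np.maximum(cf_map, 0.0)
    return patch - cf_map if is_positive else patch + cf_map
```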
### PWML Segmentation Module
The SP map includes hints of potential PWML existence, but also a lot of tissue segmentation uncertainty. The CF map directly shows the PWML locations, but, limited by the accuracy of the CLS module, the CF map also carries some false positives. If we synthesize the CF map, the SP map, and the original input patches (for appearance information), the best segmentation result can be achieved by allowing the network to verify and filter each source of information in a learnable way.
The P-SEG module is implemented as a lightweight variant of the Dense-Unet. Different simplified versions have been tested, with the results summarized in Section 3.2. After getting the PWML segmentation result, we use the tissue segmentation result to filter out PWMLs mis-segmented into the background and CSF.
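This final filtering step amounts to masking the lesion prediction with the non-CSF brain tissue; for example, in a minimal NumPy sketch (the label conventions here are illustrative):

```
import numpy as np

def filter_by_tissue(pwml_mask, tissue_labels, keep=(2, 3)):
    # Keep only lesion voxels that fall on gray/white matter (labels 2 and 3
    # are illustrative); discard those on background (0) or CSF (1).
    return pwml_mask & np.isin(tissue_labels, keep)
```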
## 3 Experiments and Results
### Dataset and Experimental Setting
**Dataset:** Experiments were performed on a dataset with two groups (control and PWML), where the control group included 52 subjects with no PWMLs observed, and the PWML group included 47 subjects with PWMLs. All infants in this study were born at a gestational age (GA) between 28 and 40 weeks and scanned at a post-menstrual age (PMA) between 37 and 42 weeks. Two neuroscientists manually labeled PWML areas and corrected tissue labels generated by iBeat [5]. Written consent was obtained from all parents under the institutional review board, and T1-weighted MR images were collected using a 3T MRI scanner, resampling all images to a resolution of 0.9375 \(\times\) 0.9375 \(\times\) 1 mm\({}^{3}\). All images were cropped to 130 \(\times\) 130 \(\times\) 170.
**Experimental Setting:** Our method was implemented using TensorFlow. All modules were trained and tested on an NVIDIA GeForce RTX 3060 GPU. We adopted Adam as the optimizer, with the learning rate varying from 0.001 to 0.00001 depending on the module. The inputs were fixed-size patches (\(32\times 32\times 32\)) cut from the T1w images. The train/validation/test ratio was 0.7/0.15/0.15, split at the subject level. We did not use any data augmentation during training. We used Dice, True Positive Rate (TPR), and Positive Predictive Value (PPV) to quantitatively evaluate the segmentation performance.
Figure 2: Visual comparisons of the representative PWML segmentation results.
### Results
First, the T-SEG module is trained in a fully supervised way. Its tissue segmentation accuracy on the test set is about 93% in terms of Dice. Second, the CLS and other modules are trained with PWML group data. We defined the class labels of the input training patches by whether they contain PWMLs or not; in other words, if a patch has at least one lesion voxel, it is positive. The accuracy on the test set reaches around 90%. Third, we train the CMG module based on the well-trained and fixed CLS module. Finally, based on T-SEG and CMG, we train P-SEG. We combine the SP map, CF map, and T1w image in a channel-wise way as the input of the module, without any additional processing of these features.
**Comparison Results:** We compared our method with the state-of-the-art method [10]. As shown in Table 1, our method outperforms the state-of-the-art method and the baseline model on all three indexes. The visualization results are shown in Fig. 2, from which it can be seen that our method segments small-size PWMLs more accurately and segments PWMLs of different severities more completely.
**Ablation Studies:** We further evaluated the effectiveness of our design by comparing the results of pipelines with and without the SP maps and CF maps. The ablation results are shown in the last six rows of Table 1. The baseline model, using the same Dense-Unet, is trained to segment PWMLs from T1w images; other settings are consistent with our final module. We then add our designed modules step by step to verify the effectiveness of the two kinds of auxiliary information.
By comparing "baseline", "SP map", and "CF map", we find that the two kinds of information individually are not good for segmenting PWMLs. The reason is as follows. The SP map mainly focuses on the tissue segmentation task, and the CF map has some false activations when offsetting the highlighted PWML areas. Fusing these two kinds of information reduces their respective defects ("SP map + CF map"). The icing on the cake is that when the appearance features of T1w are used again, the accuracy improves significantly ("SP map + T1" and "CF map + T1"). This means "SP map" and "CF map" each can
\begin{table}
\begin{tabular}{c|l|c c c} \hline \multicolumn{2}{c|}{Methods} & Dice & TPR & PPV \\ \hline & Baseline [16] & 0.649(0.239) & 0.655(0.244) & 0.704(0.281) \\ & RS R-CNN [10] & 0.667(0.172) & 0.754(0.250) & 0.704(0.187) \\ \hline \multirow{6}{*}{Ours} & SP map & 0.649(0.142) & 0.726(0.210) & 0.677(0.213) \\ & CF map & 0.507(0.169) & 0.543(0.286) & 0.647(0.180) \\ & SP map + T1 & 0.680(0.178) & 0.794(0.211) & 0.699(0.237) \\ & CF map + T1 & 0.672(0.198) & 0.741(0.249) & 0.719(0.178) \\ & SP map + CF map & 0.670(0.184) & 0.781(0.181) & 0.684(0.251) \\ & SP map + CF map + T1 & **0.721(0.177)** & **0.797(0.185)** & **0.734(0.211)** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of our method (and its variants) with the state-of-the-art method and the baseline model. All metrics are presented as “mean (std)”.
serve as auxiliary information, but not as a sufficient resource for this task. Finally, after combining all three, all indicators are significantly improved ("SP map + CF map + T1").
**Visual Analysis:** Fig. 3 shows the T1w images, tissue segmentation maps, CF maps, labels, and segmentation results. By selecting the most likely category from the SP map as the label, the tissue segmentation map can be obtained. As shown in the tissue segmentation maps, PWML voxels tend to be classified as gray matter surrounded by white matter, which obviously does not conform to general anatomical knowledge. The reason for this phenomenon may be that the intensities of gray matter and PWMLs are higher than that of white matter in T1w images at this age. It can also be seen from the CF maps that these maps provide a preliminary localization of PWMLs. The last row shows the situation without PWMLs: the tissue segmentation is reasonable, and the CF map has a small amount of activation whose intensity is significantly lower than in the first three rows. In conclusion, these two maps complementarily provide the anatomical and morphological information needed for the segmentation of PWMLs.
**Comparison of different backbones for the P-SEG Module:** To determine the required complexity, we test architectures ranging from a few simple layers to the whole Dense-Unet, comparing six designs with different network sizes in Table 2. The first three methods are several convolution layers at the same resolution.
Figure 3: Example visualization of T1w images, tissue segmentation maps, CF maps, labels, and segmentation results. Tissue segmentation maps are the final segmentation output of the T-SEG module. CF maps are the output of the CMG module.
The latter three reduce the number of down-samplings in the original Dense-Unet. By comparing the Dice index, it is obvious that simple convolution operations cannot integrate the three kinds of input information well. The results show that an encoder-decoder can better fuse the information. Perhaps because of the small size of PWMLs, not much down-sampling is needed to get a result similar to the optimal one. The results also indicate that a certain network size is needed to learn the PWML characteristics.
## 4 Conclusion
In this study, we designed a simple and easy-to-implement deep learning framework (i.e. DeepPWML) to segment PWMLs. Leveraging the idea of generative counterfactual inference combined with an auxiliary task of brain tissue segmentation, we learn fine-grained positional and morphological representations of PWMLs to achieve accurate localization and segmentation. Our lightweight PWML segmentation network combines lesion counterfactual maps with tissue segmentation probability maps, achieving state-of-the-art performance on a real clinical dataset of infant T1w MR images. Moreover, our method provides a new perspective on small-object segmentation tasks.
|
2309.00149 | TurboGP: A flexible and advanced python based GP library | We introduce TurboGP, a Genetic Programming (GP) library fully written in
Python and specifically designed for machine learning tasks. TurboGP implements
modern features not available in other GP implementations, such as island and
cellular population schemes, different types of genetic operations (migration,
protected crossovers), online learning, among other features. TurboGP's most
distinctive characteristic is its native support for different types of GP
nodes to allow different abstraction levels, this makes TurboGP particularly
useful for processing a wide variety of data sources. | Lino Rodriguez-Coayahuitl, Alicia Morales-Reyes, Hugo Jair Escalante | 2023-08-31T21:50:23Z | http://arxiv.org/abs/2309.00149v1 | # TurboGP: A flexible and advanced python based GP library
###### Abstract
We introduce _TurboGP_, a Genetic Programming (GP) library fully written in Python and specifically designed for machine learning tasks. TurboGP implements modern features not available in other GP implementations, such as island and cellular population schemes, different types of genetic operations (migration, protected crossovers), and online learning, among other features. TurboGP's most distinctive characteristic is its native support for different types of GP nodes to allow different abstraction levels; this makes TurboGP particularly useful for processing a wide variety of data sources.
©2023 Rodriguez-Coayahuitl, Morales-Reyes & Escalante. License: CC-BY 4.0, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/).
Genetic Programming, Symbolic Regression, Evolutionary machine learning, On-line learning.
## 1 Introduction
Genetic Programming (GP) is an evolutionary computation (EC) framework to automatically generate models and (simple) computer programs (Koza, 1992). GP has been widely used for a variety of tasks including machine learning (ML); see e.g., (Guo and Nandi, 2006; Shao et al., 2013; Cano and Krawczyk, 2019). In GP, models are commonly represented by abstract syntax trees, such as the one depicted in Fig. 1. GP's representation flexibility makes it appropriate to codify most models considered in machine learning problems, from classifiers (Espejo et al., 2009) to reinforcement learning agents (Co-Reyes et al., 2021). GP works by initializing a population of randomly generated models, called individuals, and then selecting and transforming some of the best individuals in the population to generate a next generation of individuals that are better at solving the problem at hand. This process of selection \(\rightarrow\) mutation \(\rightarrow\) survival of the fittest is repeated in an iterative cycle that mimics natural evolution.
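In pseudocode terms, this cycle can be sketched as follows (a generic illustration of the GP loop just described, not TurboGP's actual API):

```
import random

def evolve(init_individual, fitness, mutate, crossover,
           pop_size=100, generations=50):
    # Generic GP cycle: selection -> variation -> survival of the fittest.
    # All four callables are problem-specific and supplied by the user.
    pop = [init_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)             # lower fitness = better
        parents = scored[: pop_size // 2]             # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))  # variation
        pop = parents + children                      # survival
    return min(pop, key=fitness)
```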
This paper introduces TurboGP, a GP library based on Python to target machine learning modeling problems. TurboGP implements standard components and techniques commonly used in GP, as well as recent developments in the field. We emphasize that many of these features are not found, or are difficult to re-implement, in available GP libraries (see Sec. 2.3). Thus, we consider TurboGP a _modern_ GP implementation. The library is available at [https://github.com/l1n0b1/TurboGP](https://github.com/l1n0b1/TurboGP), where the code, notebook tutorials, as well as links to sample datasets needed for running the demos can be found.
## 2 TurboGP
TurboGP is designed in a modular fashion: core modules define classes and methods for basic building blocks, such as tree objects and fundamental genetic operations (e.g. subtree crossover, mutation); another core module defines the population dynamics, i.e. methods to perform evolutionary cycles under different survival/replacement policies, such as steady state; it also contains methods to implement cellular populations, where individuals are assigned a _spatial location_ property, so as to interact only (generate offspring, compete for survival) with individuals located within their neighbourhood. GP individual classes define objects such as regression or classification models, i.e. the types of GP individuals to be evolved. These objects contain one or more GP trees, methods to evaluate them against a dataset, and variables to store their associated fitness values. GP individual classes also define wrappers for genetic operations that rely on methods implemented in the core modules, but may also provide additional logic to implement more complex GP operations, such as crossovers that ensure the generated offspring do not exceed the maximum tree size limit, or that they are semantically valid (i.e., _protected_ crossovers). TurboGP also ships with a module that encapsulates and abstracts all its internal workings into a Scikit-alike interface, for rapid prototyping and experimentation. This modularity makes it easy to modify and extend its functionality.
### Usage
Code 1 illustrates how TurboGP can be used to launch GP processes while combining standard GP steps and a Scikit-learn alike workflow: (lns.6-7) primitive sets declaration; (ln.8) GP-individual parameters setup, (lns.14-22) GP process instantiating and parameters declaration, i.e., individual class to evolve, genetic operations probabilities, pop size, selection mechanisms, etc.; (ln.28) GP evolutionary/learning run launch for a given training dataset.
```
1 from genetic_program import GeneticProgram  # Module with scikit-alike interface
2 from Regressor import RegressorLS  # GP individual we will use (Regressor)
3 # ... (dataset preparation)
4 lowlevel = ['ADD', 'SUB', 'MUL', 'DIV', 'RELU', 'MAX', 'MEAN', 'MIN', 'K2', 'SQRT']
5 GeneticProgram.set_primitives(lowlevel=lowlevel)  # Primitives
6 ind_params = {'input_vector_size': 2, 'complexity': 12}  # parameters for individuals to evolve
7 # Genetic operations to use.
8 oper = [RegressorLS.mutation, RegressorLS.protected_crossover, RegressorLS.mutation_i2]
9 oper_prob = [.4, .4, .2]  # operations probabilities
```
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Feature** & **DEAP** & **gplearn** & **TurboGP** \\ \hline Parallel processing & X & X & X \\ \hline Protected genetic operations & X & implicit & X \\ \hline Different primitives & partial & - & X \\ \hline Online Learning & - & partial & X \\ \hline Spatially Distributed populations & - & - & X \\ \hline \end{tabular}
\end{table}
Table 1: Features provided by different GP suites.
### Main features
TurboGP supports the following features at the processing and algorithmic levels:

* **Parallel processing.** TurboGP can use multiple CPU cores/threads in different ways: parallel evaluation of individuals, and evaluation of multiple populations.
* **Different primitives layers.** TurboGP allows tree representations that may contain different types of nodes/primitives, e.g. scalar (_low_ level) functions, vector-to-scalar (_mezzanine_) functions, vector-to-vector functions, etc. These kinds of models are useful in high-dimensional learning problems (Al-Sahaf et al., 2012; Evans et al., 2018).
* **Explicit support for on-line learning** (also called incremental or mini-batch learning). TurboGP explicitly tracks whether on-line learning mode is turned on, to increase efficiency and improve convergence (Rodriguez-Coayahuitl et al., 2019).
* **Spatially distributed populations**. TurboGP supports population models that allocate individuals in toroidal grid arrangements (cellular) (Petty, 1997), as well as migration operations to implement multi-population (island) models (Martin et al., 1997); see the sketch after this list.
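The cellular scheme in the last item restricts interactions to a toroidal neighbourhood; conceptually (an illustrative helper, not TurboGP's internal code):

```
def torus_neighbourhood(i, j, rows, cols, radius=1):
    # Moore neighbourhood of cell (i, j) on a toroidal grid: the positions
    # an individual may interact with under a cellular population scheme.
    return [((i + di) % rows, (j + dj) % cols)
            for di in range(-radius, radius + 1)
            for dj in range(-radius, radius + 1)
            if (di, dj) != (0, 0)]

print(torus_neighbourhood(0, 0, rows=5, cols=5))  # wraps around grid edges
```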
### Related available libraries
Several available GP libraries cannot be directly compared to TurboGP due to critical differences, such as not being written in Python (ECJ, Scott and Luke (2019)) or requiring proprietary software to run (GPLAB, Silva and Almeida (2003)). Among those available as Python libraries, some are too minimalistic (TinyGP, Sipper (2019)), not modular nor designed to be modified (Karoo GP, Staats et al. (2017)), or implemented in older versions of Python (pySTEP, Khoury and Liu (2010)). DEAP (Fortin et al., 2012) and gplearn (Stephens, 2019) are two recent Python GP libraries that are comparable to TurboGP. DEAP, gplearn and TurboGP all support basic features, such as easy primitive declaration and graphical visualization of individuals. However, TurboGP provides new features and strengthens others only partially supported by those libraries. Table 1 summarizes the differences.
## 3 Benchmarking
To demonstrate the benefits of the TurboGP features highlighted previously, we measured the performance gains from on-line learning, multi-population models, and _mezzanine_ type primitives. We selected three different ML tasks: classification, regression and image denoising. For classification we used the banknote authentication dataset (Lohweg and Doerksen, 2012) from the UCI repository (Dua and Graff, 2017), with 1372 samples, of which we used 1200 (172) for training (testing). For regression, we generated 5000 (500) training (testing) samples using the "Keijzer 12" function (Keijzer, 2003), \(f(x,y)=xy+\sin{((x-1)(y-1))}\). For the denoising task, 14,000 image patches of \(21\times 21\) pixels in size were extracted from BSDS (Martin et al., 2001) and contaminated with additive noise; we used 12,000 (2000) patches as the training (testing) set, and set up a GP to find a model capable of cleaning the image patches. Table 2 lists in detail the different parameters for each GP run. Fig. 2 shows the results obtained by the different GP setups. Results show that online learning allows a decrease in GP runtime without taking any toll on classification performance; multi-population schemes drastically reduce both convergence time and regression error; and vector-to-scalar primitives allow GP to reach better solutions than scalar-only GP primitives.
## 4 Forthcoming features
The library is being constantly updated. The next major features we plan to incorporate into TurboGP are: automatically defined functions (Koza, 1994), memetic GP representations (Emigdio et al., 2014) and co-evolutionary algorithms. We already have a Cooperative Coevolutionary framework in the development branch (Rodriguez-Coayahuitl et al., 2020).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Classification**} & \multicolumn{2}{c|}{**Regression**} & \multicolumn{2}{c|}{**Denoising**} \\ \hline & _Batched_ & _Online_ & _Panmictic_ & _Multi-Pop_ & _Low_ & _Mezzanine_ \\ \hline
**Total Pop size** & \multicolumn{2}{c|}{500} & \multicolumn{2}{c|}{4000} & \multicolumn{2}{c|}{1000} \\ \hline
**\# Populations** & \multicolumn{2}{c|}{1} & 1 & 16 & \multicolumn{2}{c|}{1} \\ \hline
**Dataset(Batch) size** & 1200(1200) & 1200(60) & \multicolumn{2}{c|}{5000(100)} & \multicolumn{2}{c|}{12000(200)} \\ \hline
**Generations** & 20 & 40 & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{60} \\ \hline
**Max tree depth** & \multicolumn{2}{c|}{6} & \multicolumn{2}{c|}{12} & \multicolumn{2}{c|}{9} \\ \hline
**Genetic Operations** & \multicolumn{6}{c|}{subtree mutation, subtree crossover (protected), numeric mutation} \\ \hline
**Operations Probs** & \multicolumn{2}{c|}{(.5,.5,.0)} & \multicolumn{2}{c|}{(.4,.4,.2)} & \multicolumn{2}{c|}{(.5,.5,.0)} \\ \hline
**CPU Threads** & \multicolumn{2}{c|}{2} & \multicolumn{2}{c|}{16} & \multicolumn{2}{c|}{8} \\ \hline
**Primitives (scalar)** & \multicolumn{6}{c|}{+, \(-\), \(\times\), \(\div\), max, min, mean, ReLU, \(a^{2}\), \(\sqrt{a}\)} \\ \hline
**Primitives (vector)** & \multicolumn{4}{c|}{N/A} & \multicolumn{2}{c|}{mean, min, max} \\ \hline \end{tabular}
\end{table}
Table 2: Parameters used for different tasks and GP setups tested with TurboGP.
Figure 2: Average results from 30 independent runs under different setups. In a) the vertical axis is test accuracy (error for b and c), hence higher (lower) is better. Mean execution time is expressed in seconds; lower is better in all cases. All experiments were performed on an AMD Ryzen 7 1700 CPU on Debian GNU/Linux 11.
This work was supported by project grant CONACYT CB-S-26314. Lino Rodriguez acknowledges support for this project from Consejo Nacional de Ciencia y Tecnologia (CONACYT) grant No. 436184, Consejo de Ciencia y Tecnologia del Estado de Puebla (CONCYTEP) grant 2019-52D, and Instituo Nacional de Astrofisica, Optica y Electronica "Beca de Colaboracion 2020" grant.
|
2309.07626 | Asymptotic growth of translation-dilation orbits | By studying some Clausen-like multiple Dirichlet series, we complete the
proof of Manin's conjecture for sufficiently split smooth equivariant
compactifications of the translation-dilation group over the rationals.
Secondary terms remain elusive in general. | Victor Y. Wang | 2023-09-14T11:46:09Z | http://arxiv.org/abs/2309.07626v1 | # Asymptotic growth of translation-dilation orbits
###### Abstract.
By studying some Clausen-like multiple Dirichlet series, we complete the proof of Manin's conjecture for sufficiently split smooth equivariant compactifications of the translation-dilation group over the rationals. Secondary terms remain elusive in general.
Key words and phrases: Manin conjectures, automorphic functions, special divisors, biases, cancellation. 2020 Mathematics Subject Classification: Primary 14G05; Secondary 11F72, 11G25, 11G50, 14M27.
## 1. Introduction
Even for rational projective varieties over \(\mathbb{Q}\) (e.g. the surface \(x^{3}+y^{3}+z^{3}+w^{3}=0\)[1]), Manin's conjecture (see [12]) remains extremely difficult in general, due to the arithmetic complexity of the height functions governing point counts. Hope increases in the presence of symmetry or other favorable structure. The present paper lies in the rather fresh world of _one-sided_ equivariant compactifications of a _non-abelian_ algebraic group \(G\); these are defined to be proper \(G\)-schemes \(Y\) equipped with a \(G\)-equivariant embedding \(G\to Y\) of dense image. As explained in [12, 13], _two-sided_ cases (e.g. [12, 13, 14]) and especially _abelian_ cases (e.g. [13, 14]) have already received fairly comprehensive treatments, thanks to the simpler (yet deep) representation-theoretic quantities involved. For us, a certain infinite-dimensional representation of Shalika [12, Proposition 3.1] enjoys a subtle significance first explored, but not fully isolated, in [12].
We build on [12]. From now on, let \(G=\{[\begin{smallmatrix}a&b\\ 0&1\end{smallmatrix}]\}\subseteq\operatorname{GL}_{2}\) be the \(ax+b\) group, viewed as an algebraic group over \(\mathbb{Q}\). Explicitly, the group law on \((a,b),(u,v)\in G\) is
\[(a,b)\cdot(u,v)=(au,av+b). \tag{1.1}\]
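Concretely, in the matrix realization above, (1.1) is simply matrix multiplication; in particular, the identity element is \((1,0)\) and inverses are given by

\[\begin{pmatrix}a&b\\ 0&1\end{pmatrix}\begin{pmatrix}u&v\\ 0&1\end{pmatrix}=\begin{pmatrix}au&av+b\\ 0&1\end{pmatrix},\qquad(a,b)^{-1}=(a^{-1},-a^{-1}b).\]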
Let \(X\) be a smooth, projective, right-sided equivariant compactification of \(G\) over \(\mathbb{Q}\); so \(G\) acts on \(X\) from the right, and the inclusion \(G\to X\) is \(G\)-equivariant on the right. See §7 for some examples and constructions of \(X\). In this paper, we resolve key issues from [12], and establish Manin's conjecture (via the dense open set \(G\)) for all sufficiently split \(X\).
**Theorem 1.1**.: _Assume \(X\) is strictly split (Definition 2.1). If \(H\) is a standard Weil height (Definition 2.4) associated to the anticanonical line bundle \(K_{X}^{-1}\), then for any \(w\in C_{c}^{\infty}(\mathbb{R})\),_
\[\sum_{x\in G(\mathbb{Q})}w\bigg{(}\frac{H(x)}{B}\bigg{)}=B(\log B)^{\operatorname {rank}(\operatorname{Pic}(X))-1}\left(\mathcal{A}\int_{0}^{\infty}w(t)\,dt+O \bigg{(}\frac{1}{\log B}\bigg{)}\right)\ \text{as $B\to\infty$}, \tag{1.2}\]
_where \(\mathcal{A}=\mathcal{A}_{X,H}>0\) is Peyre's constant (2.8). If we replace \(O(\frac{1}{\log B})\) with \(o(1)\), then this extends to all functions \(w\in\bigcup_{p,q\in\mathbb{R}:\,p<q}C[p,q]\), including \(w=\mathbf{1}_{t\in[0,1]}\in C[0,1]\)._
_Remark 1.2_.: We work in "geometric generality, realized arithmetically over \(\mathbb{Q}\)" to keep ideas clean and clear. But our arguments may well extend to arbitrary number fields. The splitness condition on \(X\) would then always be satisfiable after base change; alternatively, it might be directly removable with enough extra notation and combinatorial effort.
_Remark 1.3_.: Generalizing our work to equivariant compactifications of homogeneous spaces of \(G\) (see e.g. [10] for the distinction) may require serious new ideas, because (1) a quotient \(H\backslash G\) of \(G\) need not be a group (unlike in the case of abelian groups); and (2) in general, \((H\backslash G)(\mathbb{Q})\) includes Galois-invariant elements of \(H(\overline{\mathbb{Q}})\backslash G(\overline{\mathbb{Q}})\), not just \(H(\mathbb{Q})\backslash G(\mathbb{Q})\).
To compare our Theorem 1.1 with [12, Theorem 5.1], we need to elaborate on the geometry of \(X\). Certainly \(X\) is a rational surface, since it is birational to \(G\). View the coordinates \(a\), \(b\) on \(G\) as rational functions on \(X\). For \(f\in\mathbb{Q}(X)\), let \(\operatorname{div}_{0}(f)\), \(\operatorname{div}_{\infty}(f)\) be the zero and polar Weil divisors of \(f\), respectively; so \(\operatorname{div}(f)=\operatorname{div}_{0}(f)-\operatorname{div}_{\infty}(f)\).
The boundary \(D:=X\setminus G\) coincides with the indeterminacy locus of the rational map \((a,b)\colon X\dashrightarrow G\). So by the algebraic Hartogs lemma [11, Tag 0BCS], we have
\[D=\operatorname{Supp}(\operatorname{div}(a))\cup\operatorname{Supp}( \operatorname{div}_{\infty}(b)). \tag{1.3}\]
Write \(D=\bigcup_{j\in J}D_{j}\), where the \(D_{j}\) are irreducible over \(\mathbb{Q}\). Then [12, Theorem 5.1] proves our Theorem 1.1 under the following additional conditions, which we list in roughly increasing order of significance: (i) \(D\cup\operatorname{Supp}(\operatorname{div}_{0}(b))\) has strict normal crossings, (ii) \(\operatorname{div}(a)\) is reduced, and (iii) the mysterious condition (for each \(j\in J\))
\[\operatorname{ord}_{D_{j}}(a)<0\Rightarrow\operatorname{ord}_{D_{j}}(b)< \operatorname{ord}_{D_{j}}(a). \tag{1.4}\]
Condition (1.4) is related to the positivity of \(K_{X}^{-1}\); see Proposition 3.1 below. Similar conditions, involving variables and degrees, are familiar in the circle method. Later (in §3) we will prove the following result, which we state now to give numerical context for (1.4):
**Proposition 1.4**.: _Let \(j\in J\) and \(c\in\mathbb{Q}\). Then \(\operatorname{ord}_{D_{j}}(b-c)\leq\operatorname{ord}_{D_{j}}(a)\)._
When (1.4) fails we seem to need a new idea. The main culprit (revealed by geometric calculations going beyond [12]) turns out to be pairs of divisors at which (1.4) and a counterpart for \(\operatorname{ord}_{D_{j}}(a)>0\) fail. Relevant here are the _special_ divisors we now define.
**Definition 1.5**.: Given \(j\in J\), call \(D_{j}\)_special_ if \(\max_{c\in\mathbb{Q}}\operatorname{ord}_{D_{j}}(b-c)=\operatorname{ord}_{D_{j }}(a)\).
Suppose there are \(k\geq 0\) special divisors with \(\operatorname{ord}_{D_{j}}(a)<0\), and \(l\geq 0\) special divisors with \(\operatorname{ord}_{D_{j}}(a)>0\). Then the main issue, after new "complexity-lowering" non-archimedean calculations in §4 (in the spirit of [20, Chapter 6] or [20]) relying on a new \(G\)-related source (Proposition 3.5) of local coordinates and cancellation in complete exponential integrals, is to appropriately bound a class of multiple Dirichlet series including (roughly)
\[\sum_{\begin{subarray}{c}\alpha=m_{1}\cdots m_{k}/n_{1}\cdots n_{l}:\\ \text{pairwise coprime }m_{1},\ldots,m_{k},n_{1},\ldots,n_{l}\geq 1 \end{subarray}}\frac{f(\alpha)e(c\alpha)}{m_{1}^{\beta_{1}}\cdots m_{k}^{ \beta_{k}}}\prod_{1\leq j\leq l}\frac{e(-c_{j}\alpha\ \text{mod}\ \mathbb{Z}_{n_{j}})}{n_{j}^{\gamma_{j}}}, \tag{1.5}\]
for some constants \(c,c_{1},\ldots,c_{l}\in\mathbb{Q}\) and a hybrid additive-multiplicative Fourier transform \(f\colon\mathbb{R}_{>0}\to\mathbb{C}\) of a reciprocal archimedean height function. (See §5 for details.)
To analyze (1.5), we first use a change of variables and a height derivative bound (Lemma 5.1) to prove \(f\) smooth, with uniformity (Lemma 5.2), and in fact a decay bound \((\alpha\,\frac{\partial}{\partial\alpha})^{r}f(\alpha)\ll_{r}(\alpha^{\xi}+ \alpha^{-\xi})^{-1}\) for some small \(\xi>0\). We then decompose (1.5) into regions according to the quality of expected oscillation as \(m_{1},\ldots,m_{k},n_{1},\ldots,n_{l}\) vary in dyadic intervals.
When \(\#\{c_{1},\ldots,c_{l}\}\geq 2\), we find cancellation in certain correlations of Kloosterman fractions; [13, 14] need not apply, but reciprocity plus a multivariate Weyl-type inequality (Proposition 5.5) for monomials suffices, though just barely in the most lopsided ranges. Ultimately, for certain \(\beta_{1},\dots,\beta_{k},\gamma_{1},\dots,\gamma_{l}=1+O(s-1)\), the series (1.5) _morally_ has a pole of order \(\leq\operatorname{rank}(\operatorname{Pic}(X))-1\) at \(s=1\). Theorem 1.1 then follows.
In our setting, one could express (1.5) as a weighted average of Clausen zeta functions \(\sum_{m\geq 1}e(m\theta)m^{-s}\) over a family of angles \(\theta\). Clausen functions for fixed \(\theta\) can be studied in depth (see e.g. [10]), but obtaining uniform results may be difficult, since the complexity of \(\theta\) may vary wildly. Our work smooths away such difficulties by averaging.
It remains open to obtain a power-saving asymptotic expansion of (1.2) in general; secondary terms (cf. [11, 12]) elude us when \(D\) is sufficiently complicated. In our approach, the key missing ingredient would seem to be to meromorphically continue (not just _bound_) \(\sum_{m,n\geq 1}e(m/n)m^{-r}n^{-s}\) and natural monomial generalizations thereof. Could ideas from [1] or other work (on multiple Dirichlet series or automorphic forms) help?
Before proceeding with details and proofs, let us give some further context for our work. In geometric areas of analytic number theory, one often needs to estimate sums resembling
\[\sum_{\alpha\in\mathcal{F}}\int_{t\in\mathbb{R}}(\operatorname{integral}/ \mathbb{R})\prod_{p|R(\alpha)}(\operatorname{geometry}/\mathbb{F}_{p}\,+ \,\operatorname{analysis}/\mathbb{Z}_{p})\prod_{p\nmid R(\alpha)}(\operatorname {local}\,\,L\text{-factors})^{\pm 1}\,dt\]
running over a family \(\mathcal{F}\) and involving a discriminant \(R\colon\mathcal{F}\to\mathbb{Z}\). For examples based on abelian harmonic analysis, see e.g. [1, 10] or (for hypersurfaces) [11, 12, 13, 14]. For [16] and us, sums over \(\mathcal{F}=\mathbb{Q}^{\times}\) of _non-abelian_ origin play a decisive role.
### Conventions
We say a reduced, effective Weil divisor on \(X\) has _strict normal crossings_ if its irreducible components \(C_{i}\) are smooth, the pairwise intersections \(C_{i}\cap C_{j}\) are smooth of dimension \(\leq 0\), and the triple intersections \(C_{i}\cap C_{j}\cap C_{k}\) are empty.
Let \(\mathbb{Z}_{n}:=\prod_{p\mid n}\mathbb{Z}_{p}\) and \(\mathbb{Q}_{n}:=\prod_{p\mid n}\mathbb{Q}_{p}\). For \(t\in\mathbb{R}\), let \(e(t):=e^{2\pi it}\). For \(u\in\mathbb{Q}_{n}\), let \(e(u\bmod\mathbb{Z}_{n}):=e(v)\), for any \(v\in\mathbb{Z}[1/n]\) with \(v\equiv u\bmod\mathbb{Z}_{n}\). Define the additive automorphic character \(\psi=\prod_{v}\psi_{v}\colon\mathbf{A}_{\mathbb{Q}}\to\mathbb{C}^{\times}\) by \(\psi_{\infty}(x_{\infty})=e(-x_{\infty})\) and \(\psi_{p}(x_{p})=e(x_{p}\bmod\mathbb{Z}_{p})\).
For a condition \(A\), let \(\mathbf{1}_{A}:=1\) if \(A\) holds, and \(\mathbf{1}_{A}:=0\) otherwise.
We write \(f\ll_{S}g\), or \(f=O_{S}(g)\), to mean \(|f|\leq Ag\) for a constant \(A>0\) depending on \(S\).
## 2. Background
Recall the setting of §1, with \(G\), \(X\) specified as in the paragraph before Theorem 1.1. We need some background on geometry, heights, and analysis, drawn mostly from [16, §§1-3].
### Geometry
Let \(g\in G\) act on \(f\in\Gamma(U,\mathcal{O}_{X})\) from the right in the usual way: \((fg)(x):=f(xg^{-1})\). There is an analogous left action on \(\Gamma(G,\mathcal{O}_{X})\), but by default on \(\mathbb{Q}(X)\) we act from the right. These actions on functions induce corresponding actions on differentials.
Let \(K_{X}\) be the canonical line bundle on \(X\). It has two local sections of particular interest: the right-invariant top form \(\omega:=db\,da/a\) on \(G\), and the left-invariant top form \(\omega/a\) on \(G\). For each \(j\in J\), let \(\mathsf{d}_{j}:=-\operatorname{ord}_{D_{j}}(\omega)\); then \(-\operatorname{div}(\omega)=\sum_{j\in J}\mathsf{d}_{j}D_{j}\).
\(\operatorname{Pic}(X_{\overline{\mathbb{Q}}})\) is a finite free \(\mathbb{Z}\)-module, since \(X_{\overline{\mathbb{Q}}}\) is a rational surface over an algebraically closed field. Also, \(\operatorname{Pic}(X)=\operatorname{Pic}(X_{\overline{\mathbb{Q}}})^{ \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})}\) by [16, Remark 3.2(ii)], because \(X(\mathbf{A}_{\mathbb{Q}})\neq\emptyset\); so in particular, \(\operatorname{Pic}(X)\) is free. Say \(X\) is _split_ if \(\operatorname{Pic}(X)=\operatorname{Pic}(X_{\overline{\mathbb{Q}}})\)[15, Definition 2.2(a)]. By Proposition 2.2, \(X\) is split if and only if each \(D_{j}\) is geometrically irreducible.
**Definition 2.1**.: Say a split \(X\) is _strictly split_ if there exists a composition \(Y\to X\) of blowups with smooth \(G\)-invariant centers, such that \(Y\) is split and \(Y\setminus G\) has strict normal crossings.
We can take \(Y=X\) if \(D\) has strict normal crossings. Split \(Y\) as in Definition 2.1 help in the final portion (§6) of the proof of Theorem 1.1. We do not know whether all split \(X\) are strictly split (over \(\mathbb{Q}\)).
For any \(X\), a useful bookkeeping tool is the _equivariant Picard group_ \(\operatorname{Pic}^{G}(X)=\bigoplus_{j\in J}\mathbb{Z}D_{j}\); see [17, §1] or [11, Proposition 2.12(2)] for details, noting \(D_{j}G=D_{j}\).
**Proposition 2.2**.: \(\operatorname{Pic}(X_{K})=\operatorname{Pic}^{G}(X_{K})/\mathbb{Z}\operatorname {div}(a)\) _for any field \(K\supseteq\mathbb{Q}\)._
Proof.: Since \(\operatorname{Pic}(G_{K})=0\), the irreducible components of \(D_{K}\) generate \(\operatorname{Pic}(X_{K})\). Relations correspond to \(K\)-morphisms \(X_{K}\setminus D_{K}\to(\mathbb{G}_{m})_{K}\), namely \(ca^{k}\) for \(c\in K^{\times}\), \(k\in\mathbb{Z}\).
Let \(\Lambda(X)\) be the closed cone \(\sum\mathbb{R}_{\geq 0}E\subseteq\operatorname{Pic}(X)\otimes\mathbb{R}\) generated by _effective_ divisors \(E\). Ultimately by the Borel fixed-point theorem, \(\Lambda(X)=\sum_{j\in J}\mathbb{R}_{\geq 0}D_{j}\)[17, Proposition 1.1(3)]. By the following result, then, \(K_{X}^{-1}\) lies in \(\Lambda^{\circ}(X)\), the interior of \(\Lambda(X)\); indeed, \(K_{X}^{-1}\) is represented by \(-\operatorname{div}(\omega/a)\), all of whose coefficients are \(\geq 1\).
**Proposition 2.3** ([17, Proposition 1.2]).: _We have \(-\operatorname{ord}_{D_{j}}(\omega/a)\geq 1\) for all \(j\in J\)._
### Heights
General Weil heights (see [10, §B.3]) are only unique up to bounded factors, so (1.2) requires reasonable restrictions on \(H\). Our \(K_{X}^{-1}\) need not be ample, but standard \(H\) are available. For convenience, we impose smoothness, as in [10]. Call \(f\colon X(\mathbf{A}_{\mathbb{Q}})\to\mathbb{R}_{>0}\) _simple_ if there exist smooth \(f_{v}\colon X(\mathbb{Q}_{v})\to\mathbb{R}_{>0}\), with \(f_{v}=1\) at all but finitely many places \(v\), such that \(f=\prod_{v}f_{v}\). For finite \(v\), _smooth_ means _locally constant_, as in [10, §2.1.2].
**Definition 2.4**.: Let \(L\in\operatorname{Pic}(X)\). Call \(H_{L}\colon X(\mathbb{Q})\to\mathbb{R}_{>0}\) a _standard Weil height associated to \(L\)_ if there exist (1) \(L_{1},L_{2}\in\operatorname{Pic}(X)\) globally generated by \(s_{1},\ldots,s_{i}\in\Gamma(X,L_{1})\), \(t_{1},\ldots,t_{j}\in\Gamma(X,L_{2})\); (2) \(k\in\mathbb{Z}_{\geq 1}\) with \(L_{1}-L_{2}=kL\); and (3) \(f\colon X(\mathbf{A}_{\mathbb{Q}})\to\mathbb{R}_{>0}\) simple; such that
\[H_{L}(x)=f(x)\cdot H_{\ell^{2}}([s_{1}(x):\cdots:s_{i}(x)])^{1/k}/H_{\ell^{2}} ([t_{1}(x):\cdots:t_{j}(x)])^{1/k}\quad\text{for all $x\in X(\mathbb{Q})$.}\]
Here \(H_{\ell^{2}}([y_{1}:\cdots:y_{n}]):=(y_{1}^{2}+\cdots+y_{n}^{2})^{1/2}\) whenever \(y_{1},\ldots,y_{n}\) are coprime integers.
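To illustrate Definition 2.4 in the simplest setting (on \(\mathbb{P}^{1}\) rather than our surface \(X\), purely for orientation): taking \(L=L_{1}=\mathcal{O}(1)\) with sections \(s_{1}=x_{0}\), \(s_{2}=x_{1}\), taking \(L_{2}=\mathcal{O}\) with \(t_{1}=1\), and taking \(k=1\) and \(f=1\), we get \(H_{L}([a:b])=(a^{2}+b^{2})^{1/2}\) for coprime \(a,b\in\mathbb{Z}\); e.g. \(H_{L}([3:4])=5\).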
The ratio of any two standard Weil heights associated to \(L\) is the restriction to \(X(\mathbb{Q})\) of a simple function \(X(\mathbf{A}_{\mathbb{Q}})\to\mathbb{R}_{>0}\); cf. [1, Definition 2.1.1]. So Definition 2.4 is natural.
To tackle (1.2) analytically, we want local heights \(H_{D_{j},v}\) that are roughly inversely proportional to "\(v\)-adic distance" to \(D_{j}\). Usually these are constructed via adelic metrics (see e.g. [17, §3.2.1] or [17, §2]), but we proceed more directly.
Embed \(X\) in some \(\mathbb{P}^{N}_{\mathbb{Q}}\); let \(\mathscr{X}\) be the closure of \(X\) in \(\mathbb{P}^{N}_{\mathbb{Z}}\). The scheme \(\mathscr{X}\) is integral, and the closure of any prime divisor on \(X\) is a prime divisor on \(\mathscr{X}\). Also, \(\mathscr{X}\) is smooth over a dense open subset \(O\subseteq\operatorname{Spec}\mathbb{Z}\); and if \(p\in O\), then any point \(x\in X(\mathbb{Q}_{p})=\mathscr{X}(\mathbb{Z}_{p})\) is a \(\mathbb{Z}_{p}\)-point of an arbitrarily small regular (and thus locally factorial) open subscheme of \(\mathscr{X}\).
**Definition 2.5**.: Let \(W\in\bigoplus_{j\in J}\mathbb{Z}D_{j}\), with closure \(\mathscr{W}\) in \(\mathscr{X}\). Call \((H_{W,v}\colon G(\mathbb{Q}_{v})\to\mathbb{R}_{>0})_{v}\)_good_ if there exists \(\phi\colon X(\mathbf{A}_{\mathbb{Q}})\to\mathbb{R}_{>0}\) simple such that for each \(t\in\mathbb{Q}(\mathscr{X})\), regular open \(\mathscr{U}\subseteq\mathscr{X}\) with \(\operatorname{div}(t|_{\mathscr{U}})=\mathscr{W}|_{\mathscr{U}}\), and place \(v\), we have (1) if \(v\in O\) and \(x\in G(\mathbb{Q}_{v})\cap\mathscr{U}(\mathbb{Z}_{v})\), then \(H_{W,v}(x)=\phi(x)/|t(x)|_{v}\); and (2) in general, for each compact \(K\subseteq\mathscr{U}(\mathbb{Q}_{v})\), there exists \(\varphi\colon K\to\mathbb{R}_{>0}\) smooth such that \(H_{W,v}(x)=\varphi(x)/|t(x)|_{v}\) for all \(x\in G(\mathbb{Q}_{v})\cap K\).
_Remark 2.6_.: If we fix \(x\in G(\mathbb{Q}_{v})\), and let \(\mathscr{U}\), \(t\) vary, then in condition (1), the value of \(|t(x)|_{v}\) is independent of the choice of \(\mathscr{U}\), \(t\) (provided \(v\) is finite and \(x\in G(\mathbb{Q}_{v})\cap\mathscr{U}(\mathbb{Z}_{v})\)). But condition (2) is more flexible, since \(\mathscr{U}(\mathbb{Q}_{v})\) is in general larger than \(\mathscr{U}(\mathbb{Z}_{v})\).
**Proposition 2.7**.: _Let \(\mathscr{H}\) be a standard Weil height associated to \(K_{X}^{-1}\)._
_(1) There exists a homomorphism \(\operatorname{Pic}(X)\to\operatorname{Fun}(X(\mathbb{Q}),\mathbb{R}_{>0})\), assigning each \(L\in\operatorname{Pic}(X)\) to a standard Weil height \(H_{L}\) associated to \(L\), such that \(H_{K_{X}^{-1}}=\mathscr{H}\)._
_(2) There exists a homomorphism \(\operatorname{Pic}^{G}(X)\to\prod_{v}\operatorname{Fun}(G(\mathbb{Q}_{v}),\mathbb{R }_{>0})\), assigning each \(W\in\operatorname{Pic}^{G}(X)=\bigoplus_{j\in J}\mathbb{Z}D_{j}\) to a good \((H_{W,v})_{v}\), such that_
\[|a(g)|_{v}=\prod_{j\in J}H_{D_{j},v}(g)^{-\operatorname{ord}_{D_{j}}(a)}\quad \text{ for all }g=(a(g),b(g))\in G(\mathbb{Q}_{v}),\,\text{for all }v,\,\text{and} \tag{2.1}\]
\[H_{W}|_{G(\mathbb{Q})}=\prod_{v}H_{W,v}|_{G(\mathbb{Q})}\qquad\qquad\qquad \text{ (where }H_{W}:=H_{\mathcal{O}_{X}(W)}). \tag{2.2}\]
Proof.: (1): Use the fact that \(\operatorname{Pic}(X)\) is a finite free \(\mathbb{Z}\)-module.
(2): Use Proposition 2.2 to choose very ample Weil divisors \(W_{1},\dots,W_{|J|-1}\subseteq\sum_{j\in J}\mathbb{Z}D_{j}\) that generate \(\operatorname{Pic}(X)\otimes\mathbb{Q}\). Take \(L=L_{1}=\mathcal{O}_{X}(W_{l})\), \(L_{2}=\mathcal{O}_{X}\), \(k=1\) in Definition 2.4, and for \(x\in X(\mathbf{A}_{\mathbb{Q}})\), let \(H_{W_{l},\infty}(x):=f_{\infty}(x)(|s_{1}(x)|_{\infty}^{2}+\dots+|s_{i}(x)|_{ \infty}^{2})^{1/2}\) and \(H_{W_{l},p}(x):=f_{p}(x)\max(|s_{1}(x)|_{p},\dots,|s_{i}(x)|_{p})\). Then in particular, (2.2) holds for \(W=W_{l}\).
Given \((H_{W_{l},v})_{v}\), the relation (2.1) then uniquely specifies a homomorphism \(\operatorname{Pic}^{G}(X)\to\prod_{v}\operatorname{Fun}(G(\mathbb{Q}_{v}), \mathbb{R}_{>0})\). The resulting \((H_{W,v})_{v}\) are good. Also, (2.1) implies (2.2).
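As a concrete check of (2.2) in this construction: for \(x\in X(\mathbb{Q})\), rescale a trivialization so that the values \(s_{1}(x),\ldots,s_{i}(x)\) become coprime integers \(y_{1},\ldots,y_{i}\) (the product over all \(v\) is unchanged, by the product formula). Then \(\prod_{p}\max_{m}|y_{m}|_{p}=\gcd_{m}(y_{m})^{-1}=1\), so \(\prod_{v}H_{W_{l},v}(x)=f(x)\,(y_{1}^{2}+\cdots+y_{i}^{2})^{1/2}=f(x)\,H_{\ell^{2}}([y_{1}:\cdots:y_{i}])\), in accordance with Definition 2.4.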
From now on, fix heights \(H_{L}\), \((H_{W,v})_{v}\) satisfying Proposition 2.7.
### Polar combinatorics
The main term in (1.2) will come from certain shifted integrals
\[\mathscr{S}(f)(\boldsymbol{z}):=(2\pi)^{-1}\int_{\mathbb{R}}f(\boldsymbol{z}+ it\operatorname{div}(a))\,dt, \tag{2.3}\]
whose general structure we now describe in terms of the combinatorics of \(\operatorname{div}(a)\). Recall (1.3). Let \(J_{1}:=\{j\in J:\operatorname{ord}_{D_{j}}(a)>0\}\) and \(J_{2}:=\{j\in J:\operatorname{ord}_{D_{j}}(a)<0\}\), and let
\[J_{3}:=\{j\in J:\operatorname{ord}_{D_{j}}(a)=0\}\subseteq\{j\in J: \operatorname{ord}_{D_{j}}(b)<0\}.\]
(Clearly \(J_{1}\), \(J_{2}\), \(J_{3}\) partition \(J\).) For convenience, let \(\mathfrak{u}_{j}:=\operatorname{ord}_{D_{j}}(a)\) for each \(j\in J\).
The \(\mathcal{X}\)-function framework of [11, §3] is crucial. For any finite set \(I\subseteq J\), let
\[\mathcal{X}_{\mathbb{R}_{\geq 0}^{I}}(\boldsymbol{z}):=\int_{\mathbb{R}_{\geq 0 }^{I}}e^{-\boldsymbol{y}\cdot\boldsymbol{z}}\,d\boldsymbol{y}=\prod_{j\in I}z_ {j}^{-1}\quad\text{for }\Re(\boldsymbol{z})\in\mathbb{R}_{>0}^{I},\]
with meromorphic continuation to all \(\boldsymbol{z}\in\mathbb{C}^{I}\). Let \(\Lambda_{I}(X):=\sum_{j\in I}\mathbb{R}_{\geq 0}D_{j}\subseteq\Lambda(X)\), so that \(\Lambda_{I}(X)\) is a cone in \((\bigoplus_{j\in I}\mathbb{R}D_{j})/\mathbb{R}\operatorname{div}(a)=\mathbb{R }^{I}/\mathbb{R}\operatorname{div}(a)\). The dual cone \(\Lambda_{I}^{*}(X)\) of \(\Lambda_{I}(X)\) is the set of functions \(\boldsymbol{y}\in\operatorname{Hom}(\mathbb{R}^{I},\mathbb{R})=\mathbb{R}^{I}\) such that \(\boldsymbol{y}\in\mathbb{R}_{\geq 0}^{I}\) and \(\boldsymbol{y}\cdot\operatorname{div}(a)=0\). Let
\[\mathcal{X}_{\Lambda_{I}(X)}(\boldsymbol{z}):=\int_{\Lambda_{I}^{*}(X)}e^{- \boldsymbol{y}\cdot\boldsymbol{z}}\,d\boldsymbol{y}\quad\text{for }\Re(\boldsymbol{z})\in\mathbb{R}_{>0}^{I}+\mathbb{R} \operatorname{div}(a), \tag{2.4}\]
where \(d\boldsymbol{y}\) is the Haar dual to the quotient measure on \(\mathbb{R}^{I}/\mathbb{R}\operatorname{div}(a)\) (induced by assigning unit length to \(\operatorname{div}(a)\in\mathbb{R}\operatorname{div}(a)\)); \(\mathcal{X}_{\Lambda_{I}(X)}\) is invariant under translation by \(\mathbb{C}\operatorname{div}(a)\).
_Remark 2.8_.: If \(\gcd_{j\in I}(\mathfrak{u}_{j})=1\), then \(d\boldsymbol{y}\) is the unique Haar measure on \((\mathbb{R}^{I}/\mathbb{R}\operatorname{div}(a))^{*}=\operatorname{div}(a)^{ \perp}\subseteq\mathbb{R}^{I}\) such that the dual lattice \((\mathbb{Z}^{I}/\mathbb{Z}\operatorname{div}(a))^{*}=\operatorname{div}(a)^{ \perp}\cap\mathbb{Z}^{I}\) has covolume \(1\). In particular, this applies if \(I=J\), since \(\operatorname{Pic}(X)\) is free (and \(\operatorname{Pic}(X)=\mathbb{Z}^{J}/\mathbb{Z}\operatorname{div}(a)\)).
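For orientation, consider a toy case (hypothetical data, not our actual \(X\)): \(J=I=\{1,2\}\) with \(\operatorname{div}(a)=D_{1}-D_{2}\), i.e. \((\mathfrak{u}_{1},\mathfrak{u}_{2})=(1,-1)\). Then \(\Lambda_{I}^{*}(X)=\{\boldsymbol{y}\in\mathbb{R}_{\geq 0}^{2}:y_{1}=y_{2}\}\), and by Remark 2.8 the measure \(d\boldsymbol{y}\) assigns unit length to the generator \((1,1)\) of \(\operatorname{div}(a)^{\perp}\cap\mathbb{Z}^{2}\), so

\[\mathcal{X}_{\Lambda_{I}(X)}(\boldsymbol{z})=\int_{0}^{\infty}e^{-t(z_{1}+z_{2})}\,dt=\frac{1}{z_{1}+z_{2}}\quad\text{for }\Re(z_{1}+z_{2})>0,\]

which is visibly invariant under translation by \(\mathbb{C}\operatorname{div}(a)=\mathbb{C}\cdot(1,-1)\).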
_Remark 2.9_.: The following are equivalent: (1) \(I\setminus J_{3}\) is a subset of \(J_{1}\) or \(J_{2}\) of size \(\geq 2\); (2) \(\operatorname{vol}(\Lambda_{I}^{*}(X))=0\); and (3) \(\mathcal{X}_{\Lambda_{I}(X)}=0\). Meanwhile, if (1) fails, then \(\mathcal{X}_{\Lambda_{I}(X)}(\boldsymbol{z})\) is a rational function of \(\boldsymbol{z}\in\mathbb{C}^{I}\), computable in terms of any triangulation of \(\Lambda_{I}^{*}(X)\) generated by sets of \(\dim\Lambda_{I}^{*}(X)\) linearly independent elements \(\boldsymbol{y}\in\Lambda_{I}^{*}(X)\) with \(\boldsymbol{y}\cdot\boldsymbol{z}\neq 0\); see e.g. [1, §5].
Let \(\|\boldsymbol{z}\|:=\max_{j}|z_{j}|\). Let \(\mathcal{H}_{I}(p,q)\) be the ring of holomorphic \(f(\boldsymbol{z})\) on \(\Re(\boldsymbol{z})\in(p,q)^{I}\) that are polynomially bounded in vertical strips, i.e. that satisfy \(f(\boldsymbol{z})\ll_{K}(1+\|\boldsymbol{z}\|)^{O_{K}(1)}\) for every compact set \(K\subseteq(p,q)\). Let \(\mathcal{H}_{I}^{\mathbb{C}}(p,q)\) be the ring of \(\mathbb{C}\operatorname{div}(a)\)-invariant functions in \(\mathcal{H}_{I}(p,q)\). Let \(\mathcal{H}_{\dagger,I}(p,q)\) be the set of \(f\in\mathcal{H}_{I}(p,q)\) such that for each compact \(K\subseteq(p,q)\), we have
\[f(\boldsymbol{z}+it\operatorname{div}(a))\ll_{K}(1+\|\boldsymbol{z}\|)^{O_{K}(1 )}/(1+t^{2})\quad\text{for }(\Re(\boldsymbol{z}),t)\in K\times\mathbb{R}. \tag{2.5}\]
Let \(\mathcal{H}_{I}=\bigcap_{\delta>0}\mathcal{H}_{I}(-\delta,\delta)\), and similarly define \(\mathcal{H}_{I}^{\mathbb{C}}\), \(\mathcal{H}_{\dagger,I}\). Let \(\mathcal{M}_{\dagger,I}\) be the set of meromorphic \(f\) near \(\Re(\boldsymbol{z})=\boldsymbol{0}\) such that \(f(\boldsymbol{z})\prod_{j\in I}z_{j}/(1+z_{j})\in\mathcal{H}_{\dagger,I}\). Both \(\mathcal{H}_{\dagger,I}\), \(\mathcal{M}_{\dagger,I}\) are \(\mathcal{H}_{I}^{\mathbb{C}}\)-modules.
The following lemma expresses \(\mathscr{S}(f)\) uniformly in terms of \(\mathcal{X}\)-functions, for arbitrary \(f\in\mathcal{M}_{\dagger,J}\) (in the spirit of [1, (3.5.2)], which builds on [1, Théorème 3.1.14]). For \(I\subseteq J\), let \(\mathbf{g}_{I}:=\mathcal{X}_{\mathbb{R}_{\geq 0}^{I}}(\boldsymbol{z})\) if \(|I\setminus J_{3}|\neq 1\), and \(\mathbf{g}_{I}:=\mathcal{X}_{\mathbb{R}_{\geq 0}^{I}}(\boldsymbol{z})/\prod_{j\in I}(1+z_{j})\) if \(|I\setminus J_{3}|=1\).
**Lemma 2.10**.: _Suppose \(f\in\mathcal{M}_{\dagger,J}\) and \(f\prod_{j\in P}z_{j}/(1+z_{j})\in\mathcal{H}_{\dagger,J}\), where \(P\subseteq J\)._
1. _There exists a vector_ \((h_{I})_{I\subseteq P}\in\prod_{I\subseteq P}\mathcal{H}_{J}\)_, with_ \(h_{I}\in\mathcal{H}_{\dagger,J}\) _for_ \(I\subseteq J_{3}\) _and_ \(h_{I}\in\mathcal{H}_{J}^{\mathbb{C}}\) _for all other_ \(I\)_, such that_ \(f=\sum_{I\subseteq P}h_{I}\mathbf{g}_{I}\)_._
2. \(h_{P}(\boldsymbol{0})=(f\prod_{j\in P}z_{j})(\boldsymbol{0})\) _and_ \(\mathscr{S}(f)=\sum_{I\subseteq P:\,|I\setminus J_{3}|\geq 1}h_{I}\mathscr{S}( \mathbf{g}_{I})+\sum_{I\subseteq P\cap J_{3}}\mathbf{g}_{I}\mathscr{S}(h_{I})\)_._
3. \(\mathscr{S}(\mathbf{g}_{I})=0\) _if_ \(|I\setminus J_{3}|=1\)_, and_ \(\mathscr{S}(\mathbf{g}_{I})=\mathcal{X}_{\Lambda_{I}(X)}\) _if_ \(|I\setminus J_{3}|\geq 2\)_._
4. \(\mathscr{S}(\mathcal{H}_{\dagger,J}(p,q))\subseteq\mathcal{H}_{J}^{\mathbb{C} }(p,q)\) _for any_ \(p,q\in[-\infty,\infty]\) _with_ \(p<q\)_._
Proof.: (1): We construct \((h_{I})_{I\subseteq P}\) by the following recursive algorithm:
(a) Let \(h_{P}:=f/\mathbf{g}_{P}\in\mathcal{H}_{J}\), and \(h_{I}:=0\in\mathcal{H}_{J}\) for all \(I\in 2^{P}\setminus\{P\}\); and let \(\mathscr{I}:=2^{P}\). Note that \(h_{I}\mathbf{g}_{I}\in\mathcal{M}_{\dagger,J}\) for all \(I\in 2^{P}\); we will maintain this property at each step.
(b) If \(\mathscr{I}=\emptyset\), terminate the algorithm. Otherwise, choose an \(I_{\star}\in\mathscr{I}\) of maximal size.
(c) Suppose first that \(|I_{\star}\setminus J_{3}|\geq 1\). Choose an element \(i_{\star}\in I_{\star}\setminus J_{3}\), and replace the functions \(h_{I_{\star}},h_{I_{\star}\setminus\{i_{\star}\}}\in\mathcal{H}_{J}\) with, respectively, \[h_{I_{\star}}^{\prime}((z_{j})_{j\in J}):=h_{I_{\star}}((z_{j}-\mathfrak{u}_{j}z_{i_{\star}}/\mathfrak{u}_{i_{\star}})_{j\in J})\in\mathcal{H}_{J}^{\mathbb{C}}\] and \(h_{I_{\star}\setminus\{i_{\star}\}}^{\prime}:=h_{I_{\star}\setminus\{i_{\star}\}}+(h_{I_{\star}}-h_{I_{\star}}^{\prime})\mathbf{g}_{I_{\star}}/\mathbf{g}_{I_{\star}\setminus\{i_{\star}\}}\in\mathcal{H}_{J}\).1 Note that \(\mathbf{g}_{I_{\star}}\in\mathcal{M}_{\dagger,J}\), so \[h_{I_{\star}}^{\prime}\mathbf{g}_{I_{\star}}\in\mathcal{M}_{\dagger,J},\quad h_{I_{\star}\setminus\{i_{\star}\}}^{\prime}\mathbf{g}_{I_{\star}\setminus\{i_{\star}\}}=h_{I_{\star}\setminus\{i_{\star}\}}\mathbf{g}_{I_{\star}\setminus\{i_{\star}\}}+h_{I_{\star}}\mathbf{g}_{I_{\star}}-h_{I_{\star}}^{\prime}\mathbf{g}_{I_{\star}}\in\mathcal{M}_{\dagger,J}.\] Now replace \(\mathscr{I}\) with \(\mathscr{I}^{\prime}:=\mathscr{I}\setminus\{I_{\star}\}\), and go back to step (b). Footnote 1: One can use Cauchy's integral formula to prove \((h_{I_{\star}}-h_{I_{\star}}^{\prime})/z_{i_{\star}}\in\mathcal{H}_{J}\), which implies \(h_{I_{\star}\setminus\{i_{\star}\}}^{\prime}\in\mathcal{H}_{J}\).
(d) Suppose instead that \(I_{\star}\subseteq J_{3}\). Then \(1/\mathbf{g}_{I_{\star}}\in\mathcal{H}_{J}^{\mathbb{C}}\), which together with \(h_{I_{\star}}\mathbf{g}_{I_{\star}}\in\mathcal{M}_{\dagger,J}\) implies \(h_{I_{\star}}\in\mathcal{M}_{\dagger,J}\cap\mathcal{H}_{J}=\mathcal{H}_{\dagger,J}\).2 Replace \(\mathscr{I}\) with \(\mathscr{I}^{\prime}:=\mathscr{I}\setminus\{I_{\star}\}\), and repeat (b).
Footnote 2: One can use Cauchy's integral formula to show that \(\mathcal{M}_{\dagger,J}\cap\mathcal{H}_{J}=\mathcal{H}_{\dagger,J}\).
The algorithm terminates after \(1+|2^{P}|\) occurrences of (b). For the final output \((h_{I})_{I\subseteq P}\), the conditions of (1) hold by inspection of (c), (d).
(2): For the first part, multiply \(f=\sum_{I\subseteq P}h_{I}\mathbf{g}_{I}\) by \(\prod_{j\in P}z_{j}\) and plug in \(\boldsymbol{z}=\boldsymbol{0}\). For the second part, use the \(\mathbb{C}\operatorname{div}(a)\)-invariance of \(h_{I}\) when \(|I\setminus J_{3}|\geq 1\), and of \(\mathbf{g}_{I}\) when \(I\subseteq J_{3}\).
(3): This follows from [1, Proposition 3.1.9] if \(|I\setminus J_{3}|\geq 2\) and \(\mathcal{X}_{\Lambda_{I}(X)}\neq 0\) (in which case \(\mathbb{R}_{\geq 0}^{I}\cap\mathbb{R}\operatorname{div}(a)=0\), by Remark 2.9), and from the identity
\[\int_{\mathbb{R}}dt/\prod_{1\leq j\leq r}(\varrho_{j}+it)=0\quad\text{for $r \geq 2$ and $\Re(\varrho_{j})>0$} \tag{2.6}\]
if \(|I\setminus J_{3}|=1\) or \(\mathcal{X}_{\Lambda_{I}(X)}=0\). (To prove (2.6), shift the line \(\Re(it)=0\) to \(\Re(it)=\kappa\to\infty\).)
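For instance, when \(r=2\) and \(\varrho_{1}\neq\varrho_{2}\), partial fractions give

\[\int_{\mathbb{R}}\frac{dt}{(\varrho_{1}+it)(\varrho_{2}+it)}=\lim_{T\to\infty}\frac{1}{\varrho_{2}-\varrho_{1}}\int_{-T}^{T}\Big(\frac{1}{\varrho_{1}+it}-\frac{1}{\varrho_{2}+it}\Big)\,dt=\frac{\pi-\pi}{\varrho_{2}-\varrho_{1}}=0,\]

since \(\lim_{T\to\infty}\int_{-T}^{T}\frac{dt}{\varrho+it}=\pi\) whenever \(\Re(\varrho)>0\); the case \(\varrho_{1}=\varrho_{2}\) follows by continuity.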
(4): This is clear, since \(\int_{\mathbb{R}}(1+\|\boldsymbol{z}\|)^{A}/(1+t^{2})\,dt\ll(1+\|\boldsymbol{z} \|)^{A}\).
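To make the algorithm in the proof of (1) concrete, consider a toy instance (hypothetical data, not our actual \(X\)): \(J=P=\{1,2\}\), \(J_{3}=\emptyset\), \(\operatorname{div}(a)=D_{1}-D_{2}\), and \(f(\boldsymbol{z})=e^{z_{1}}/z_{1}z_{2}\), which one checks lies in \(\mathcal{M}_{\dagger,J}\). Here \(\mathbf{g}_{\{1,2\}}=1/z_{1}z_{2}\) and \(\mathbf{g}_{\{2\}}=1/(z_{2}(1+z_{2}))\). Starting from \(h_{\{1,2\}}=e^{z_{1}}\), step (c) with \(i_{\star}=1\) replaces it by \(h_{\{1,2\}}^{\prime}=1\) and pushes the correction \((e^{z_{1}}-1)(1+z_{2})/z_{1}\) into \(h_{\{2\}}\); step (c) for \(I_{\star}=\{2\}\) (with \(i_{\star}=2\)) then replaces this by the \(\mathbb{C}\operatorname{div}(a)\)-invariant function \((e^{z_{1}+z_{2}}-1)/(z_{1}+z_{2})\), depositing the remainder in \(h_{\emptyset}\). The output decomposition is

\[\frac{e^{z_{1}}}{z_{1}z_{2}}=1\cdot\frac{1}{z_{1}z_{2}}+\frac{e^{z_{1}+z_{2}}-1}{z_{1}+z_{2}}\cdot\frac{1}{z_{2}(1+z_{2})}+\bigg(\frac{(e^{z_{1}}-1)(1+z_{2})}{z_{1}}-\frac{e^{z_{1}+z_{2}}-1}{z_{1}+z_{2}}\bigg)\frac{1}{z_{2}(1+z_{2})},\]

where the last summand (i.e. \(h_{\emptyset}\mathbf{g}_{\emptyset}\), with \(\mathbf{g}_{\emptyset}=1\)) is holomorphic near \(\boldsymbol{0}\), since the bracket vanishes at \(z_{2}=0\), and decays like \(1/t^{2}\) along \(\boldsymbol{z}+it\operatorname{div}(a)\), as membership in \(\mathcal{H}_{\dagger,J}\) requires.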
### Peyre's constant
Cf. [10, 11]. First, \(\alpha_{\mathrm{Pey}}(X):=\mathcal{X}_{\Lambda(X)}(K_{X}^{-1})=\mathcal{X}_{\Lambda_{J}(X)}(K_{X}^{-1})\); by Remark 2.8, the measure \(d\boldsymbol{y}\) in \(\mathcal{X}_{\Lambda_{J}(X)}\) is correctly normalized in terms of \(\mathrm{Pic}(X)\). Here \(\alpha_{\mathrm{Pey}}(X)>0\) by Proposition 2.3. Second, \(\beta_{\mathrm{Pey}}(X)=1\) since \(X\) is rational; see e.g. [13, two lines before Theorem 48]. Finally, define the local Tamagawa measure
\[d\tau_{v}=d\tau_{X,H,v}:=H_{\operatorname{div}(\omega),v}\cdot|\omega| \tag{2.7}\]
on \(X(\mathbb{Q}_{v})\); initially the formula (2.7) only makes sense on \(G(\mathbb{Q}_{v})\), but it extends to \(X(\mathbb{Q}_{v})\) by Definition 2.5. Let \(L(s,\operatorname{Pic}(X_{\overline{\mathbb{Q}}}))\) be the Artin \(L\)-function associated to the representation \(\operatorname{Pic}(X_{\overline{\mathbb{Q}}})\otimes\mathbb{C}\) of \(\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\). It is known that \(L(s,\operatorname{Pic}(X_{\overline{\mathbb{Q}}}))\) converges absolutely on \(\Re(s)>1\), is meromorphic on \(\mathbb{C}\), and has a pole of order \(\operatorname{rank}(\operatorname{Pic}(X))\geq 1\) at \(s=1\); let \(L^{*}(1,\operatorname{Pic}(X_{\overline{\mathbb{Q}}})):=\lim_{s\to 1}L(s, \operatorname{Pic}(X_{\overline{\mathbb{Q}}}))/(s-1)^{\operatorname{rank}( \operatorname{Pic}(X))}\). Define on \(X(\mathbf{A}_{\mathbb{Q}})\) the measure
\[d\tau=d\tau_{X,H}:=L^{*}(1,\operatorname{Pic}(X_{\overline{\mathbb{Q}}}))\,d \tau_{\infty}\prod_{p}d\tau_{p}/L_{p}(1,\operatorname{Pic}(X_{\overline{ \mathbb{Q}}})).\]
Let \(\tau(X,H):=\int_{X(\mathbf{A}_{\mathbb{Q}})}d\tau\), which (by the Weil conjectures) factors as an absolutely convergent Euler product; see e.g. [13, Corollary 2.4 and Theorem 2.5]. Peyre's constant is
\[\alpha_{\operatorname{Pey}}(X)\tau(X,H)/(\operatorname{rank}(\operatorname{ Pic}(X))-1)!>0. \tag{2.8}\]
(By (2.2), since \(G(\mathbb{Q})\) is dense in \(X(\mathbf{A}_{\mathbb{Q}})\), our \(d\tau\) depends only on \(X\), \(H_{K_{X}^{-1}}\), not on \(\omega\), \(H_{\operatorname{div}(\omega),v}\). This would also follow from (1.2), by an equidistribution result of Peyre.)
### Spectral expansion
For \(\boldsymbol{s}=\sum_{j\in J}s_{j}D_{j}\in\operatorname{Pic}^{G}(X)\otimes \mathbb{C}\) and \(g\in G(\mathbf{A}_{\mathbb{Q}})\), let
\[H_{v}(\boldsymbol{s},g):=\prod_{j\in J}H_{D_{j},v}(g)^{s_{j}},\quad H( \boldsymbol{s},g):=\prod_{v}H_{v}(\boldsymbol{s},g),\quad\mathsf{Z}( \boldsymbol{s},g):=\sum_{\gamma\in G(\mathbb{Q})}H(\boldsymbol{s},\gamma g)^ {-1}.\]
For each prime \(p\), choose a maximal open subgroup \(\mathbf{K}_{p}\) of \(G(\mathbb{Z}_{p})\) such that \(H_{D_{j},p}\) is right \(\mathbf{K}_{p}\)-invariant for all \(j\in J\); then \(\mathbf{K}_{p}=G(\mathbb{Z}_{p})\) for all but finitely many \(p\). Let \(\mathbf{K}:=\prod_{p}\mathbf{K}_{p}\). Following [14, §5], we will decompose \(\mathsf{Z}(\boldsymbol{s},g)\) using the automorphic machinery of [14, §3]. Since we work more generally than [14, §5], it seems appropriate to provide some details; but the key formula (2.20) is not new (only our subsequent analysis of it is).
Let \(dg\) be the measure on \(G(\mathbf{A}_{\mathbb{Q}})\) given by \(|\omega|\) at \(v=\infty\), and \(|\omega|/(1-p^{-1})\) at \(v=p\). Then \(\int_{G(\mathbb{Z}_{p})}dg=1\); and for \(p\) sufficiently large, Definition 2.5 implies \(\int_{G(\mathbb{Z}_{p})}dg/H_{p}(\boldsymbol{s},g)=1\).
For the rest of §2, let \(A\geq 9\) be a large constant. If \(\Re(\boldsymbol{s})\in[A,2A]^{J}\), then
\[|H(\boldsymbol{s},g)|\gg H(\operatorname{div}_{0}(a)+\operatorname{div}_{ \infty}(a)+\operatorname{div}_{\infty}(b),g)^{A^{1/2}}\gg\prod_{v}(1+|a|_{v}^{ -1}+|a|_{v}+|b|_{v})^{A^{1/2}}, \tag{2.9}\]
by Definition 2.5; cf. [14, proof of Lemma 5.2]. For unitary characters \(\chi\) of \(\mathbb{Q}^{\times}\backslash\mathbf{A}_{\mathbb{Q}}^{\times}\), let
\[H^{*}(\boldsymbol{s},\chi):=\int_{G(\mathbf{A}_{\mathbb{Q}})}H(\boldsymbol{s},g)^{-1}\overline{\chi}(g)\,dg,\quad H^{*}_{v}(\boldsymbol{s},\chi):=\int_{G( \mathbb{Q}_{v})}H_{v}(\boldsymbol{s},g)^{-1}\overline{\chi}_{v}(g)\,dg.\]
If \(\Re(\boldsymbol{s})\in[A,2A]^{J}\), then by (2.9), we have \(H^{*}_{v}(\boldsymbol{s},\chi)\ll 1\) for all \(v\), and \(H^{*}_{p}(\boldsymbol{s},\chi)=1+O(p^{-2})\) for all \(p\) large enough in terms of the conductor of \(\chi\); so \(H^{*}(\boldsymbol{s},\chi)=\prod_{v}H^{*}_{v}(\boldsymbol{s},\chi)\ll_{\chi}1\).
For the rest of §2, assume \(X\) is split. (Splitness simplifies many formulas.)
**Lemma 2.11**.: _Let \(\delta>0\) be small. Then each of the following implies the next:_
1. \(D\) _has strict normal crossings (see §1.1)._
2. _Let_ \(\Re(\boldsymbol{s}+\operatorname{div}(\omega))\in[-\delta,\delta^{-1}]^{J}\)_. Then for each_ \(v\)_, the integral_ \(H^{*}_{v}(\boldsymbol{s},1)\) _converges absolutely uniformly over_ \(\boldsymbol{s}\)_. Moreover,_ \(H^{*}_{p}(\boldsymbol{s},1)/\prod_{j\in J}\zeta_{p}(s_{j}-\mathsf{d}_{j}+1)=1+O (p^{-1-\delta})\)_._
3. \(\lim_{\boldsymbol{s}\to-\operatorname{div}(\omega)}H^{*}(\boldsymbol{s},1) \prod_{j\in J}(s_{j}-\mathsf{d}_{j})=\tau(X,H)\)_._
Proof.: (1)\(\Rightarrow\)(2): See [13, §4.3.2 (or Lemma 4.1)] for the convergence of individual integrals, and [13, §4.3.3 with \(\mathscr{B}=\emptyset\)] (based on a formula of Denef) for the asymptotic.
(2)\(\Rightarrow\)(3): Write \(H^{*}=\prod_{v}H^{*}_{v}\); switch lim, \(\prod\) using (2). Then express \(\tau(X,H)\) in terms of \(H^{*}_{v}\) via (2.7). Cf. [13, proof of Proposition 6.2] or [13, proof of Proposition 4.10].
For the rest of §2, assume \(D\) has strict normal crossings. (This condition makes some technical calculations more uniform.) Decomposing \(\mathsf{Z}(\boldsymbol{s},g)\) requires not just \(H^{*}(\boldsymbol{s},\chi)\) but other integral transforms as well, which we now define alongside other relevant notions.
First, let \(S=S(X,\mathscr{X},H)\subseteq\operatorname{Spec}\mathbb{Z}\) be a finite set such that the following hold:
1. If \(p\notin S\) and \(j\in J\), then \(H_{D_{j},p}\) satisfies Definition 2.5 with \(\phi_{p}=1\).
2. If \(p\notin S\) and \(j\in J\), then \(H_{D_{j},p}\) is \(G(\mathbb{Z}_{p})\)-invariant (on the right), i.e. \(\mathbf{K}_{p}=G(\mathbb{Z}_{p})\).
3. If \(I\subseteq J\), then \(\bigcap_{i\in I}\mathscr{D}_{i}\) is smooth over \(\mathbb{Z}_{S}\), the ring of \(S\)-integers. Here \(\mathscr{D}_{i}\) denotes the closure of \(D_{i}\) in \(\mathscr{X}\), and for \(I=\emptyset\) we let \(\bigcap_{i\in I}\mathscr{D}_{i}:=\mathscr{X}\).
(We can arrange for (3) to hold since \(X\) is smooth and \(D\) has strict normal crossings.)
For \(p\in S\), let \(r_{p}=r_{p}(\mathbf{K}):=\min\left\{r\in\mathbb{Z}_{\geq 1}:(1+p^{r}\mathbb{Z}_{ p})\times p^{r}\mathbb{Z}_{p}\subseteq\mathbf{K}_{p}\right\}\). Let \(N=N(\mathbf{K}):=\prod_{p\in S}p^{r_{p}}\), and let \(\mathbf{M}\) be the set of characters \(\mathbb{R}^{\times}/(\mathbb{R}^{\times})^{2}\times\prod_{p\in S}\mathbb{Z}_{ p}^{\times}/(1+p^{r_{p}}\mathbb{Z}_{p})\to\mathbb{C}^{\times}\). For \((m,\lambda,t,g)\in\mathbb{Z}_{\geq 1}\times\mathbf{M}\times\mathbb{R}\times G( \mathbf{A}_{\mathbb{Q}})\), let \(\theta_{m,\lambda,t}(g):=\sum_{\alpha\in\mathbb{Q}^{\times}}\psi(\alpha b) \mathbf{v}_{m,\lambda}(\alpha a)|\alpha a_{\infty}|_{\infty}^{it}\), where
\[\mathbf{v}_{m,\lambda}(a):=\mathbf{1}_{Na/m\in\prod_{p}\mathbb{Z}_{p}^{\times }}\cdot\lambda_{\infty}(a_{\infty})\cdot\prod_{p\in S}\lambda_{p}(a_{p}/p^{v_ {p}(a_{p})}). \tag{2.10}\]
By additive reciprocity (\(\psi(\mathbb{Q})=1\)), the function \(\theta_{m,\lambda,t}\) is left \(G(\mathbb{Q})\)-invariant. Furthermore, \(|\theta_{m,\lambda,t}(g)|\leq 2\) for all \(g\) (due to \(\mathbf{v}_{m,\lambda}(\alpha a)\)), and
\[\theta_{m,\lambda,t}(1_{G})=|m/N|_{\infty}^{it}(1+\lambda(-1))\prod_{p\in S} \lambda_{p}((m/N)/p^{v_{p}(m/N)})=\frac{(1+\lambda(-1))\lambda_{S}(m/N)}{\prod _{p}|m/N|_{p}^{it}}, \tag{2.11}\]
where \(\lambda_{S}(a):=\prod_{p}\prod_{q\in S\setminus\{p\}}\lambda_{q}(p^{v_{p}(a_ {p})})\) for \(a\in\mathbf{A}_{\mathbb{Q}}^{\times}\). Let
\[H^{*}(\boldsymbol{s},m,\lambda,t):=\int_{G(\mathbf{A}_{\mathbb{Q}})}H( \boldsymbol{s},g)^{-1}\overline{\theta}_{m,\lambda,t}(g)\,dg. \tag{2.12}\]
For \(\alpha\in\mathbb{Q}^{\times}\), define \(H_{p}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)\) to be (letting \(\operatorname{sgn}_{p}(u):=u/p^{v_{p}(u)}\) for \(u\in\mathbb{Q}_{p}^{\times}\))
\[\int_{G(\mathbb{Q}_{p}):\,N\alpha a\in\mathbb{Z}_{p}}H_{p}(\boldsymbol{s},g)^{ -1}e(-\alpha b\bmod\mathbb{Z}_{p})\overline{\lambda}_{p}(\operatorname{sgn}_{ p}(\alpha a))\lambda_{S}(\alpha a)|a|_{p}^{-it}\,dg, \tag{2.13}\]
where \(\lambda_{p}:=1\) if \(p\notin S\). Let \(H_{\infty}^{\vee}(\boldsymbol{s},\lambda,t,\alpha):=\int_{G(\mathbb{R})}H_{ \infty}(\boldsymbol{s},g)^{-1}e(\alpha b)\lambda_{\infty}(\alpha a)|a|_{ \infty}^{-it}\,dg\).
Let \(H_{\infty}^{\prime}(\boldsymbol{s},\lambda,t,\alpha):=|\alpha|_{\infty}^{-it}H _{\infty}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)\). For \(m\in\mathbb{Z}_{\geq 1}\), define \(H_{p}^{\prime}(\boldsymbol{s},\lambda,m,\alpha)\) to be
\[\int_{G(\mathbb{Q}_{p}):\,N\alpha a\in m\mathbb{Z}_{p}^{\times}}H_{p}( \boldsymbol{s},g)^{-1}e(-\alpha b\bmod\mathbb{Z}_{p})\overline{\lambda}_{p}( \operatorname{sgn}_{p}(\alpha a))\,dg. \tag{2.14}\]
Before proceeding, we need some foundational bounds. Let \(\|\boldsymbol{s}\|:=\max_{j}|s_{j}|\).
**Proposition 2.12** (Cf. [12, Lemma 5.2]).: _Let \(\Re(\boldsymbol{s})\in[A,2A]^{J}\) and \(g\in G(\mathbf{A}_{\mathbb{Q}})\). Then \(\mathsf{Z}(\boldsymbol{s},g)\) converges absolutely to a bounded function of \((\boldsymbol{s},g)\), holomorphic in \(\boldsymbol{s}\) and smooth in \(g\). Also, \(H(\boldsymbol{s},g)^{-1}\in L^{1}(G(\mathbf{A}_{\mathbb{Q}}))\) and \(\mathsf{Z}(\boldsymbol{s},g)\in L^{q}(G(\mathbb{Q})\backslash G(\mathbf{A}_{ \mathbb{Q}}))^{\mathbf{K}}\) for all \(\boldsymbol{s}\), for all \(q\in[1,\infty]\)._
Proof.: Except for smoothness in \(g\), this follows from (2.9) as in [12, proof of Lemma 5.2]. To establish smoothness in \(g\), use in addition that \(T(H_{D_{j},\infty}^{-1})\ll_{T}H_{D_{j},\infty}^{-1}\) for every composition \(T\) of operators in \(\{a\,\frac{\partial}{\partial a},a\,\frac{\partial}{\partial b}\}\) (by Lemma 5.1(4) or [12, proof of Lemma 5.9]); to handle arbitrarily long compositions, use left invariance (i.e. \(T(f(\gamma g))=(Tf)(\gamma g)\) for \(\gamma\in G(\mathbb{R})\)).
**Lemma 2.13**.: _Suppose \(\Re(\boldsymbol{s})\in[A,2A]^{J}\) and \(\alpha,t\in\mathbb{R}\). Then the following hold:_
1. \(\int_{G(\mathbb{R}):\,\operatorname{sgn}(a)=\varsigma}H_{\infty}(\boldsymbol{s},g)^{-1}e(\alpha b)|a|^{-it}\,dg\ll(1+\|\boldsymbol{s}\|^{6})/\big((1+\alpha^{4})(1+t^{2})\big)\) _for_ \(\varsigma\in\{\pm 1\}\)_._
2. \(\int_{\mathbb{R}^{\times}}\big\lvert\int_{\mathbb{R}}H_{\infty}(\boldsymbol{s},g)^{-1}e(\alpha b)\,db\big\rvert\,da\ll(1+\|\boldsymbol{s}\|^{4})/(1+\alpha^{4})\)_._
Proof.: For (1)-(2), integrate by parts over \(b\) four times if \(|\alpha|\geq 1\). For (1), further integrate by parts over \(\log|a|\) twice if \(|t|\geq 1\). To bound derivatives, use \(T(H_{D_{j},\infty}^{-1})\ll_{T}H_{D_{j},\infty}^{-1}\) from the proof of Proposition 2.12; to bound \(H^{-1}\) itself, use (2.9) and the largeness of \(A\).
**Lemma 2.14**.: _Suppose \(\Re(\mathbf{s})\in[A,2A]^{J}\) and \((m,\alpha)\in\mathbb{Z}_{\geq 1}\times\mathbb{Q}^{\times}\). Then the following hold:_
1. \(H^{\prime}_{p}(\mathbf{s},\lambda,m,\alpha)=\mathbf{1}_{v_{p}(m)=v_{p}(\alpha)}+O((p +|m/\alpha|_{p}+|\alpha/m|_{p})^{-4})\)_._
2. \(\int_{G(\mathbb{Q}_{p}):\,N\alpha a\in\mathbb{Z}_{p}}\lvert H_{p}(\mathbf{s},g) \rvert^{-1}dg=\mathbf{1}_{v_{p}(\alpha)\geq 0}+O((p+|\alpha|_{p})^{-4})\)_._
Proof.: Use (2.9) and the fact that \(H_{p}(\mathbf{s},G(\mathbb{Z}_{p}))=1\) for large \(p\).
**Lemma 2.15**.: _Suppose \(\Re(\mathbf{s})\in[A,2A]^{J}\) and \(t\in\mathbb{R}\). Then the following hold:_
1. \(\sum_{\alpha\in\mathbb{Q}^{\times}}\sum_{\lambda\in\mathbf{M}}\sum_{m\geq 1 }\lvert H^{\prime}_{\infty}(\mathbf{s},\lambda,t,\alpha)\rvert\prod_{p}\lvert H^{ \prime}_{p}(\mathbf{s},\lambda,m,\alpha)\rvert\ll(1+\lVert\mathbf{s}\rVert^{6})/(1+t ^{2})\)_._
2. \(\sum_{\alpha\in\mathbb{Q}^{\times}}(\int_{\mathbb{R}^{\times}}\lvert\int_{ \mathbb{R}}H_{\infty}(\mathbf{s},g)^{-1}e(\alpha b)\,db\rvert\,da)\prod_{p}(\int_ {G(\mathbb{Q}_{p}):\,N\alpha a\in\mathbb{Z}_{p}}\lvert H_{p}(\mathbf{s},g)\rvert^ {-1}\,dg)<\infty\)_._
Proof.: (1): Let \(\beta:=\alpha/m\). Now plug in Lemmas 2.13(1) and 2.14(1). It then suffices to prove \(\sum_{m\geq 1}\sum_{\beta=r/s\in\mathbb{Q}^{\times}}(1+m^{4}\beta^{4})^{-1}\cdot\lvert rs\rvert^{\epsilon-4}<\infty\), where we write \(\beta=r/s\) in lowest terms (with \(s\geq 1\)). But \((1+m^{4}\beta^{4})\cdot(rs)^{4}\geq(s^{4}+m^{4})\cdot r^{4}\), so each summand is \(\leq\lvert r\rvert^{\epsilon-4}s^{\epsilon}/(s^{4}+m^{4})\); summing over \(r\neq 0\) and then over \(m,s\geq 1\) (using \(\sum_{m\geq 1}(s^{4}+m^{4})^{-1}\ll s^{-3}\)) gives a convergent bound.
(2): Use Lemmas 2.13(2) and 2.14(2), and the bound \(\sum_{\alpha=r/s\in\mathbb{Q}^{\times}}(1+\alpha^{4})^{-1}\lvert s\rvert^{ \epsilon-4}<\infty\).
For the rest of §2, assume \(\Re(\boldsymbol{s})\in[A,2A]^{J}\). We will first project \(\mathsf{Z}(\boldsymbol{s},g)\) onto \(L^{2}(\mathbb{Q}^{\times}\backslash\mathbf{A}_{\mathbb{Q}}^{\times})\), and then analyze its orthogonal complement. Note that if \(f\in C(G(\mathbb{Q})\backslash G(\mathbf{A}_{\mathbb{Q}}))\), then \(f(1,b)\) is a well-defined function of \(b\in\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}\). Following [11, Lemma 3.3], let
\[\mathsf{Z}_{0}(\mathbf{s},g):=\int_{\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}} \mathsf{Z}(\mathbf{s},(1,b)g)\,db\quad\text{for }g\in G(\mathbb{Q})\backslash G( \mathbf{A}_{\mathbb{Q}}) \tag{2.15}\]
(well-defined since conjugation by \(G(\mathbb{Q})\) on the group \(\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}\subseteq G(\mathbb{Q})\backslash G (\mathbf{A}_{\mathbb{Q}})\) preserves \(db\)). Then \(\mathsf{Z}_{0}\in(C^{\infty}\cap L^{\infty})(G(\mathbf{A}_{\mathbb{Q}}))\), because \(\mathsf{Z}\in(C^{\infty}\cap L^{\infty})(G(\mathbf{A}_{\mathbb{Q}}))\) (by Proposition 2.12) and \(\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}\) is compact. Also, \(\mathsf{Z}_{0}\) is left \(\mathbf{A}_{\mathbb{Q}}\)-invariant, and thus descends to a (right) \(\det(\mathbf{K})\)-invariant function of \(\det(g)\in\mathbb{Q}^{\times}\backslash\mathbf{A}_{\mathbb{Q}}^{\times}\). Fourier analysis on \(C(\mathbb{Q}^{\times}\backslash\mathbf{A}_{\mathbb{Q}}^{\times})^{\det( \mathbf{K})}\) gives
\[\mathsf{Z}_{0}(\mathbf{s},g)=\int H^{*}(\mathbf{s},\chi)\chi(g)\,d\chi, \tag{2.16}\]
where \(\chi\) runs over characters of \(\mathbf{A}_{\mathbb{Q}}^{\times}\) lying in \(L^{\infty}(\mathbb{Q}^{\times}\backslash\mathbf{A}_{\mathbb{Q}}^{\times})^{\det( \mathbf{K})}\). (To justify (2.16), note \(\mathsf{Z}_{0}\in L^{1}\) by Proposition 2.12 and \(\int\lvert H^{*}(\mathbf{s},\chi)\rvert\,d\chi<\infty\) by Lemma 2.13(1) with \(\alpha=0\).)
**Proposition 2.16**.: _Fix \(I\subseteq J\) with \(I\cap J_{1}\neq\emptyset\) and \(I\cap J_{2}\neq\emptyset\). Fix \(\mathbf{\kappa}\in-\operatorname{div}(\omega)+\mathbb{R}_{>0}^{J\setminus I}\). Then \((s-1)^{\lvert I\rvert-1}\mathsf{Z}_{0}(s\mathbf{\kappa},1_{G})\) is holomorphic, with at most polynomial growth in vertical strips, on \(\Re(s)\geq 1-\delta\). Also, \(\lim_{s\to 1}\,(s-1)^{\lvert I\rvert-1}\mathsf{Z}_{0}(s\mathbf{\kappa},1_{G})= \mathcal{X}_{\Lambda_{I}(X)}(\mathbf{\kappa})\lim_{\mathbf{y}\to\mathbf{\kappa}}H^{*}(\mathbf{ y},1)\prod_{j\in I}(y_{j}-\kappa_{j})\)._
Proof.: Use (2.16), Lemma 2.11(1)\(\Rightarrow\)(2), and Lemma 2.10 with \(P=I\) and \(f=H^{*}(\mathbf{z}+\mathbf{\kappa},\chi_{0})\) for a finite set of \(\chi_{0}\). Cf. [11, proof of Theorem 2.1] or [10, proof of Theorem 48].
Proposition 2.16 and Lemma 2.11 give a satisfactory understanding of \(\mathsf{Z}_{0}\) for us. It remains to discuss \(\mathsf{Z}_{1}:=\mathsf{Z}-\mathsf{Z}_{0}\). We follow [11, proofs of Lemma 5.3 and Proposition 5.4]. Let
\[\mathsf{h}(\mathbf{s},a):=\int_{\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}} \mathsf{Z}_{1}(\mathbf{s},(a,b))\overline{\psi}(b)\,db\quad\text{for }a\in\mathbf{A}_{ \mathbb{Q}}^{\times}. \tag{2.17}\]
We have \(\mathsf{h}\in(C^{\infty}\cap L^{\infty})(\mathbf{A}_{\mathbb{Q}}^{\times})\), since \(\mathsf{Z}_{1}=\mathsf{Z}-\mathsf{Z}_{0}\in(C^{\infty}\cap L^{\infty})(G( \mathbf{A}_{\mathbb{Q}}))\) and \(\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}\) is compact. Also, \(\int_{\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}}\mathsf{Z}_{1}(\mathbf{s},(a,b))\,db=0\), by (2.15). We deduce from [11, proof of \(\Theta I=1\) in the proof of Lemma 3.4] (based on Fourier expansion of \(\mathsf{Z}_{1}\) in \(b\in\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}\)) that
\[\mathsf{Z}_{1}(\mathbf{s},g)=\sum_{\alpha\in\mathbb{Q}^{\times}}\psi(\alpha b) \mathsf{h}(\mathbf{s},\alpha a). \tag{2.18}\]
But since \(\mathsf{Z}_{0}\) is left \(\mathbf{A}_{\mathbb{Q}}\)-invariant (by (2.15)) and \(\int_{\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}}\psi(b)\,db=0\), (2.17) simplifies to
\[\mathsf{h}(\mathbf{s},a)=\int_{\mathbb{Q}\backslash\mathbf{A}_{\mathbb{Q}}} \mathsf{Z}(\mathbf{s},(a,b))\overline{\psi}(b)\,db. \tag{2.19}\]
Lemma 2.15(2) now implies \(\mathsf{h}\in L^{1}(\mathbf{A}_{\mathbb{Q}}^{\times})\), by the general analysis of \(|I(\mathsf{Z}_{1})|\) in the first paragraph of the proof of [17, Lemma 5.3]. So \(\mathsf{h}\in L^{q}(\mathbf{A}_{\mathbb{Q}}^{\times})\) for all \(q\in[1,\infty]\) (since \(\mathsf{h}\in L^{\infty}\)).
Since \(\mathsf{Z}\) is right \(\mathbf{K}\)-invariant, (2.19) implies \(\mathsf{h}\) is \(\mathbf{K}\)-invariant, i.e. \(\mathsf{h}(\boldsymbol{s},au)=\psi(av)\mathsf{h}(\boldsymbol{s},a)\) for all \((u,v)\in\mathbf{K}\). By [17, Lemma 3.5], then, \(\mathsf{h}=\sum_{\lambda\in\mathbf{M}}\sum_{m\geq 1}\mathbf{v}_{m,\lambda} \otimes\mathsf{h}_{m,\lambda}\), where (letting \(\mathbf{A}_{\mathbb{Q},f}^{\times}\) be the finite part of \(\mathbf{A}_{\mathbb{Q}}^{\times}\), and \(L^{2}(\mathbb{R}^{\times})^{\lambda_{\infty}}\) be the \(\lambda_{\infty}\)-eigenspace of \(L^{2}(\mathbb{R}^{\times})\))
\[\mathsf{h}_{m,\lambda}=\mathsf{h}_{m,\lambda}(\boldsymbol{s},a_{\infty}):= \tfrac{1}{2}\sum_{\varsigma\in\{\pm 1\}}\int_{\mathbf{A}_{\mathbb{Q},f}^{ \times}}\mathsf{h}(\boldsymbol{s},(a_{f},\varsigma a_{\infty}))\overline{ \mathbf{v}}_{m,\lambda}(a_{f},\varsigma a_{\infty})\,d^{\times}a_{f}\in L^{2} (\mathbb{R}^{\times})^{\lambda_{\infty}};\]
cf. [17, proof of Proposition 3.6]. In fact, \(\mathsf{h}\in L^{1}\cap L^{\infty}\) and (2.10) imply \(\mathsf{h}_{m,\lambda}\in L^{1}\cap L^{\infty}\).
For any \((m,\lambda,t)\in\mathbb{Z}_{\geq 1}\times\mathbf{M}\times\mathbb{R}\), a calculation using (2.19), \(\mathsf{Z}(\boldsymbol{s},(a,b))=\mathsf{Z}(\boldsymbol{s},(\alpha a,\alpha b))\) (for \(\alpha\in\mathbb{Q}^{\times}\)), (2.10), and \(\mathsf{Z}\in L^{1}\) shows that (cf. [17, proof of Proposition 3.6])
\[\int_{\mathbb{R}^{\times}}\mathsf{h}_{m,\lambda}(\boldsymbol{s},a)|a|_{ \infty}^{-it}\,d^{\times}a=\int_{\mathbf{A}_{\mathbb{Q}}^{\times}}\mathsf{h} (\boldsymbol{s},a)\overline{\mathbf{v}}_{m,\lambda}(a)|a|_{\infty}^{-it}\,d^{ \times}a=\int_{G(\mathbb{Q})\setminus G(\mathbf{A}_{\mathbb{Q}})}\mathsf{Z} (\boldsymbol{s},g)\overline{\theta}_{m,\lambda,t}(g)\,dg.\]
Yet by (2.12), (2.10), (2.14), and the manipulations in [17, second paragraph of proof of Lemma 5.3], we have (using Proposition 2.12 and Lemma 2.15(1) to justify manipulations)
\[\int_{G(\mathbb{Q})\setminus G(\mathbf{A}_{\mathbb{Q}})}\mathsf{Z}( \boldsymbol{s},g)\overline{\theta}_{m,\lambda,t}(g)\,dg=H^{*}(\boldsymbol{s},m,\lambda,t)=\sum_{\alpha\in\mathbb{Q}^{\times}}H_{\infty}^{\prime}( \boldsymbol{s},\lambda,t,\alpha)\prod_{p}H_{p}^{\prime}(\boldsymbol{s}, \lambda,m,\alpha).\]
Thus \(\sum_{\lambda\in\mathbf{M}}\sum_{m\geq 1}|H^{*}(\boldsymbol{s},m,\lambda,t)|<\infty\) by Lemma 2.15(1), and hence Fourier expansion of \(\mathsf{h}_{m,\lambda}\) on \(\mathbb{R}^{\times}\) gives \(\mathsf{h}(\boldsymbol{s},a)=\sum_{\lambda\in\mathbf{M}}\sum_{m\geq 1} \mathbf{v}_{m,\lambda}(a)(4\pi)^{-1}\int_{\mathbb{R}}H^{*}(\boldsymbol{s},m, \lambda,t)|a|_{\infty}^{it}\,dt\). Plugging this into (2.18) gives \(\mathsf{Z}_{1}(\boldsymbol{s},g)=\sum_{\lambda\in\mathbf{M}}\sum_{m\geq 1}(4\pi)^{-1} \int_{\mathbb{R}}H^{*}(\boldsymbol{s},m,\lambda,t)\theta_{m,\lambda,t}(g)\,dt\); the required use of Fubini is justified by Lemma 2.15(1). Finally, take \(g=1_{G}\) (using (2.11)), write
\[\sum_{m=p^{i}:\,j\geq 0}H_{p}^{\prime}(\boldsymbol{s},\lambda,m,\alpha)\lambda_{S }(m/N)|m/N|_{p}^{-it}=|\alpha|_{p}^{-it}H_{p}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)\]
(using (2.14) and (2.13)), and recall \(H_{\infty}^{\prime}=|\alpha|_{\infty}^{-it}H_{\infty}^{\vee}\), to get
\[\mathsf{Z}_{1}(\boldsymbol{s},1_{G})=\sum_{\lambda\in\mathbf{M}:\,\lambda(-1)= 1}\,\sum_{\alpha\in\mathbb{Q}^{\times}}(2\pi)^{-1}\int_{\mathbb{R}}\prod_{v}H_ {v}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)\,dt \tag{2.20}\]
(cf. [17, Proposition 5.4]); the required manipulations are justified by Lemma 2.15(1).
Both additive and multiplicative harmonics appear above; the sum over \(\alpha\in\mathbb{Q}^{\times}\) somehow reflects the non-abelian nature of \(G\). The multiplicative harmonics in (2.20) in fact reflect a symmetry in \(\boldsymbol{s}\). By (2.1) we have \(|a|_{v}=H_{v}(\operatorname{div}(a),g)^{-1}\), so by inspection of (2.13),
\[H_{v}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)=H_{v}^{\vee}(\boldsymbol{s}-it \operatorname{div}(a),\lambda,0,\alpha). \tag{2.21}\]
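Indeed, unwinding definitions, (2.1) gives

\[H_{v}(\boldsymbol{s}-it\operatorname{div}(a),g)^{-1}=H_{v}(\boldsymbol{s},g)^{-1}\,H_{v}(\operatorname{div}(a),g)^{it}=H_{v}(\boldsymbol{s},g)^{-1}\,|a|_{v}^{-it},\]

so the shift in \(\boldsymbol{s}\) absorbs exactly the factor \(|a|_{v}^{-it}\) appearing in (2.13) and in the definition of \(H_{\infty}^{\vee}\).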
Thus it will suffice to study \(H_{v}^{\vee}\) for \(t=0\) (without the factor \(|a|_{v}^{-it}\)).
## 3. New geometric and parametric observations
In this section, we make crucial observations for later. Until specified otherwise, we do not assume \(X\) is split or that \(D\) has strict normal crossings. (This generality will be useful in §6.) In particular, the \(D_{j}\) are irreducible but not necessarily smooth or geometrically irreducible.
**Proposition 3.1**.: _Let \(j\in J\) and \(c\in\mathbb{Q}\). Then \(\mathsf{d}_{j}\leq 1-\operatorname{ord}_{D_{j}}(b-c)\)._
Proof.: Let \(r=\operatorname{ord}_{D_{j}}(a)\) and \(s=\operatorname{ord}_{D_{j}}(b-c)\). Now work over \(\mathbb{C}\). Locally near a _general_ point \(x\in D_{j}\), choose coordinates \(y,z\ll 1\) with \(a=y^{r}f(y,z)\) and \(b-c=y^{s}g(y,z)\), where \(f\), \(g\) are analytic with \(f(0,0)g(0,0)\neq 0\). (We choose \(y\) so that \(y=0\) cuts out \(D_{j}\) near \(x\).) Thus \(da\,db=y^{r+s-1}\big[(rf+y\,\partial_{y}f)\,\partial_{z}g-(sg+y\,\partial_{y}g)\,\partial_{z}f\big]\,dy\,dz=O(y^{r+s-1})\,dy\,dz\). So \(\omega=db\,da/a=O(y^{s-1})\,dz\,dy\). So \(-\mathsf{d}_{j}=\operatorname{ord}_{D_{j}}(\omega)\geq s-1\).
Proof of Proposition 1.4.: By Proposition 2.3, \(\mathsf{d}_{j}\geq 1-\operatorname{ord}_{D_{j}}(a)\). Now use Proposition 3.1.
Recall \(J_{1}\), \(J_{2}\), \(J_{3}\) from §2.3. Inspired by Proposition 1.4 and [12, Theorem 5.1], let
\[J_{1}^{c}:=\{j\in J_{1}:\operatorname{ord}_{D_{j}}(b-c)=\operatorname{ord}_{D_{j }}(a)\},\quad J_{2}^{*}:=\{j\in J_{2}:\operatorname{ord}_{D_{j}}(b/a)=0\}.\]
Let \(I^{c}:=\{j\in J:\operatorname{ord}_{D_{j}}(b-c)>0\}\); then \(J_{1}^{c}\subseteq I^{c}\subseteq J_{1}\) (the latter by Proposition 1.4).
**Definition 3.2**.: For \(j\in J\), let \(\operatorname{C}(j)=\operatorname{arg\,max}_{c\in\mathbb{Q}}\operatorname{ ord}_{D_{j}}(b-c)\subseteq\mathbb{Q}\).
**Proposition 3.3**.: _If \(j\in I^{c}\) for some \(c\in\mathbb{Q}\), then \(\operatorname{C}(j)=\{c\}\); otherwise, \(\operatorname{C}(j)=\mathbb{Q}\)._
Proof.: Let \(f(d)=\operatorname{ord}_{D_{j}}(b-d)\). If \(j\in I^{c}\), then \(f(d)=0<f(c)\) for all \(d\neq c\). If \(j\notin\bigcup_{c\in\mathbb{Q}}I^{c}\) and \(f(0)\geq 0\), then \(f(c)=0\) for all \(c\in\mathbb{Q}\). If \(f(0)<0\), then \(f(c)=f(0)\) for all \(c\in\mathbb{Q}\).
Thus \(I^{d}\cap I^{e}=\emptyset\) if \(d\neq e\). Also, \(D_{j}\) is special if and only if \(j\in J_{2}^{*}\) or \(j\in\bigcup_{c\in\mathbb{Q}}J_{1}^{c}\).
**Proposition 3.4**.: _Let \(c\in\mathbb{Q}\). Then \(|J_{1}^{c}\cup J_{2}^{*}|\leq\operatorname{rank}(\operatorname{Pic}(X))\)._
Proof.: Since \(X\) admits no nonconstant morphism to \(\mathbb{A}^{1}\), we have \(\operatorname{div}_{\infty}((b-c)/a)\neq 0\) (by the algebraic Hartogs lemma). But \(\operatorname{ord}_{D_{j}}((b-c)/a)\geq 0\) for all \(j\in J_{1}^{c}\cup J_{2}^{*}\). Since \(\operatorname{div}_{\infty}((b-c)/a)\) is supported on \(D\), it follows that \(|J_{1}^{c}\cup J_{2}^{*}|\leq|J|-1=\operatorname{rank}(\operatorname{Pic}(X))\), by Proposition 2.2.
Precisely understanding the \(D_{j}\) seems tricky in general, but the following will suffice:
**Proposition 3.5**.: _Let \(j\in J\), \(c\in\operatorname{C}(j)\). Let \(r=\operatorname{ord}_{D_{j}}(a)\), \(s=\operatorname{ord}_{D_{j}}(b-c)\), \(F=(b-c)^{r}/a^{s}\). Then \(F\in\Gamma(U,\mathcal{O}_{X})^{\times}\) for some dense open \(U\subseteq X\) with \(U\cap D_{j}\neq\emptyset\). The rational map \(f\colon D_{j}\dashrightarrow\mathbb{P}^{1}_{\mathbb{Q}}\) extending \(F|_{U\cap D_{j}}\) is nonconstant._
Proof.: Since \(\operatorname{ord}_{D_{j}}(F)=0\), the first statement is clear. In particular, \(f^{-1}(\mathbb{G}_{m})\neq\emptyset\).
_Case 1: \(s=0\)._ If \(f\) were constant, then we would have \(b|_{D_{j}}=t\) for some constant \(t\in\mathbb{Q}\), and thus \(1\leq\operatorname{ord}_{D_{j}}(b-t)\leq\operatorname{ord}_{D_{j}}(b-c)=s\) (since \(c\in\operatorname{C}(j)\)), a contradiction.
_Case 2: \(r=s\neq 0\)._ The formula (1.1) is "homogeneous" in \(a\), \(b\), so it induces a transitive right \(G\)-action \(t\cdot(u,v):=(v+t)/u\) on \(t=b/a\in\mathbb{A}^{1}\), which extends to a \(G\)-action on \(t\in\mathbb{P}^{1}\) (with a fixed point at \(\infty\)). Moreover, for this \(G\)-action on \(\mathbb{P}^{1}\), the rational map \(b/a\) from \(X\) to \(\mathbb{P}^{1}\) is \(G\)-equivariant. Similarly, the rational map \((b-c)/a\) from \(X\) to \(\mathbb{P}^{1}\) is \(G\)-equivariant; one can check this using (1.1), or by factoring \((b-c)/a\) through the \(G\)-equivariant map \((a,b)\mapsto(1,-c)(a,b)=(a,b-c)\) on \(G\). Thus \(\frac{b-c}{a}|_{D_{j}}\) is a \(G\)-equivariant rational map \(D_{j}\dashrightarrow\mathbb{P}^{1}\). Since \(f^{-1}(\mathbb{G}_{m})\neq\emptyset\), and \(G\) acts transitively on \(\mathbb{A}^{1}\), it follows that \(f\) is nonconstant.
_Case 3: \(s\neq 0\) and \(r\neq s\)._ Then \(r>s\) by Proposition 1.4, so \(a/(b-c)\) vanishes on a dense open \(W\subseteq D_{j}\). But (1.1) yields, in \(\mathbb{Q}(X\times G)\), the identity (for \(x\in X\) and \((u,v)\in G\))
\[F(x\cdot(u,v))=(av+b-c)^{r}/(au)^{s}=(av/(b-c)+1)^{r}\cdot F(x)/u^{s}.\]
Since \(F(x)\), \(F(x\cdot(u,v))\), \((av/(b-c)+1)^{r}\) are regular on \(X\times G\) near a _general_ point \((x,g)\in D_{j}\times G\), restriction to \(D_{j}\) gives \(f(x\cdot(u,v))=f(x)/u^{s}\) in \(\mathbb{Q}(D_{j}\times G)\). Thus \(f\) is \(G\)-equivariant for a nontrivial multiplicative action of \(G\) on \(\mathbb{P}^{1}\) (fixing \(0\), \(\infty\), and acting transitively on \(\mathbb{G}_{m}(\overline{\mathbb{Q}})\)). Since \(f^{-1}(\mathbb{G}_{m})\) is nonempty, it follows that \(f\) is nonconstant.
Proposition 3.5 leads to new insight on local parameterizations of \(X\) near arbitrary \(D_{j}\).
**Lemma 3.6**.: _Let \(j\), \(c\), \(r\), \(s\), \(F\) be as in Proposition 3.5. Let \(U\subseteq X\) be the largest open set such that \(D_{j}\cap U\) is smooth and \(a,b-c\in\Gamma(U\setminus D_{j},\mathcal{O}_{X})^{\times}\). Let \(x\in(D_{j}\cap U)(\mathbb{R})\)._
_(1) Assume \(j\in J_{1}\cup J_{2}\). Then locally near \(x\), there are real-analytic coordinates \(y,z\ll 1\) with \(a=\epsilon y^{r}\) and \((b-c)/y^{s}=k+z\), where \(\epsilon,k\in\mathbb{R}^{\times}\)._
_(2) Assume \(j\in J_{3}\). Then locally near \(x\), there are real-analytic coordinates \(y,z\ll 1\) with \(b-c=\epsilon y^{s}\) and \(a=k+z\), where \(\epsilon,k\in\mathbb{R}^{\times}\)._
_(3) The functions \((Ty)/y\) and \(Tz\) are regular near \(x\), for any \(T\in\{(b-c)\,\frac{\partial}{\partial b},a\,\frac{\partial}{\partial a},a\, \frac{\partial}{\partial b}\}\)._
Proof.: (1): There exists \(t\in\mathbb{Q}(X)\) (defining \(D_{j}\) locally) such that \(a/t^{r},(b-c)/t^{s}\in\mathcal{O}_{X,x}^{\times}\). Let \(u\in\mathbb{Q}(X)\) be a regular local coordinate complementary to \(t\); then analytically we have \(a/t^{r}=k_{1}+f_{1}(t,u)\) and \((b-c)/t^{s}=k_{2}+f_{2}(t,u)\), where \(k_{1},k_{2}\in\mathbb{R}^{\times}\) and \(f_{1}(0,0)=f_{2}(0,0)=0\). Since \(r\neq 0\), we may change variables from \(t\) to \(y:=t(1+f_{1}(t,u)/k_{1})^{1/r}\). Then \(a=k_{1}y^{r}\), \((b-c)/y^{s}=k_{2}+f_{0}(y,u)\), and \(F(a,b)=(k_{2}+f_{0})^{r}/k_{1}^{s}=k_{3}+f_{3}(y,u)\), where \(k_{3}=k_{2}^{r}/k_{1}^{s}\).
Differentiating \(a=k_{1}y^{r}\) in \(b\) gives \(\partial y/\partial b=0\), since \(\partial a/\partial b=0\). Differentiating \(F=k_{3}+f_{3}\) in \(b\) gives \(rF=(b-c)\partial F/\partial b=((b-c)\partial u/\partial b)(\partial f_{3}/ \partial u)\), since \(\partial y/\partial b=0\). Yet the function \(\partial u/\partial b\) near \(x\) is regular away from \(D_{j}\) (because \(u\in\mathcal{O}_{X,x}\), and \(a\), \(b\) are regular _coordinates_ away from \(D_{j}\)). Therefore, \((b-c)\partial u/\partial b\) is either regular near \(x\), or polar along \(D_{j}\). Since \(rF\in\Gamma(U,\mathcal{O}_{X})^{\times}\), we conclude that the _regular_ function \(\partial f_{3}/\partial u\) near \(x\) is either invertible, or zero along \(D_{j}\). But by Proposition 3.5, the map \(F=k_{3}+f_{3}\) is nonconstant along \(D_{j}\), so the latter case is impossible; instead, \(\partial f_{3}/\partial u\) must be invertible. So by the analytic inverse function theorem, we may change variables from \(u\) to \(w:=f_{3}(y,u)\). Since \((1+f_{0}/k_{2})^{r}=1+f_{3}/k_{3}\), taking \(r\)th roots gives a final change of variables from \(w\) to \(z:=f_{0}\).
(2): This is similar. First arrange for \(b=\epsilon y^{s}\) and \(a=k+f(y,u)\); then analyze the equation \(a=(a\,\partial u/\partial a)(\partial f/\partial u)\) by the same method, using in particular the nonconstancy of \(a|_{D_{j}}\) (guaranteed by Proposition 3.5), to change variables from \(u\) to \(z:=f(y,u)\).
(3): Given (1)-(2), this is routine (but new when \(T=(b-c)\,\frac{\partial}{\partial b}\)). For \(T\in\{(b-c)\,\frac{\partial}{\partial b},a\,\frac{\partial}{\partial a}\}\), use the chain rule and the identities \(Ta/a\in\mathbb{Z}\), \(Tb/(b-c)\in\mathbb{Z}\). For \(T=a\,\frac{\partial}{\partial b}\), use \(a/(b-c)\in\Gamma(U,\mathcal{O}_{X})\) (given by Proposition 1.4) to reduce to the \(T=(b-c)\,\frac{\partial}{\partial b}\) case.
_Remark 3.7_.: The lemma generalizes to any local field \(K\) of characteristic \(0\), e.g. any \(\mathbb{Q}_{v}\). The map \((y,z)\mapsto(a,b)\) is a priori injective away from \(D\), forcing \(\gcd(r,s)=1\) when \(K=\mathbb{C}\) (if \(d\mid\gcd(r,s)\) with \(d>1\), then \((y,z)\mapsto(\zeta_{d}y,z)\) would preserve \((a,b)\)).
For the rest of §3, assume \(D\) has strict normal crossings. We can refine Lemma 3.6 over \(\mathbb{Q}_{p}\).
**Lemma 3.8**.: _Let \(j\), \(c\), \(r\), \(s\), \(U\) be as in Lemma 3.6. Let \(\mathscr{U}\) be the complement of the closure of \(X\setminus U\) in \(\mathscr{X}\). Let \(\rho\colon\mathscr{X}(\mathbb{Z}_{p})\to\mathscr{X}(\mathbb{F}_{p})\) be reduction modulo a large prime \(p\). Let \(x_{0}\in\mathscr{U}(\mathbb{F}_{p})\). Then \(\rho^{-1}(x_{0})\cong p\mathbb{Z}_{p}^{2}\) analytically, where coordinates \(y,z\in p\mathbb{Z}_{p}\) may be arranged so that_
_(1) if \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\) and \(j\notin J_{3}\), then \(a=w_{1}y^{r}\), \(b-c=y^{s}(w_{2}+z)\), where \(w_{1},w_{2}\in\mathbb{Z}_{p}^{\times}\);_
_(2) if \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\) and \(j\in J_{3}\), then \(b-c=w_{1}y^{s}\), \(a=w_{2}+z\), where \(w_{1},w_{2}\in\mathbb{Z}_{p}^{\times}\); and_
_(3) if \(x_{0}\in G(\mathbb{F}_{p})\), then \(a=w_{1}+y\), \(b-c=w_{2}+z\), where \(w_{1},w_{2}\in\mathbb{Z}_{p}^{\times}\)._
_The same holds if we replace \(\mathbb{F}_{p}\) with \(\mathbb{F}_{p^{k}}\), and \(\mathbb{Z}_{p}\) with the corresponding unramified extension._
Proof.: Uniformity over \(p\), \(x_{0}\) takes some care. We use the quasi-compactness of \(U\). Let \(V\subseteq\mathscr{U}\) be a small affine open neighborhood of a point of \(U\). Then there exist \(t,u\in\Gamma(V,\mathcal{O}_{V})\) such that for every prime \(p\) and point \(x\in V(\mathbb{Z}_{p}^{\rm ur})\), the maximal ideal of the local ring \(\mathcal{O}_{V\times\mathbb{Z}_{p}^{\rm ur},\rho(x)}\) is generated by \(p\), \(t-t(x)\), \(u-u(x)\).3 Moreover, since \(D_{j}\) is smooth, we may choose \(t\), \(u\) so that \({\rm div}(t|_{V})=\mathscr{D}_{j}|_{V}\) (provided \(V\) is small enough).
Footnote 3: Thus \(t-t(x)\), \(u-u(x)\) are regular local coordinates for \(V_{\mathbb{F}_{p}}\) at \(\rho(x)\in V(\mathbb{F}_{p})\). This condition, when cast in terms of differentials, is equivalent to the nonvanishing of some determinant modulo \(p\), and thus can be guaranteed by removing from \(V\) the zero locus of the determinant if necessary.
Now \(a/t^{r}\), \((b-c)/t^{s}\) are invertible regular functions over \(\mathbb{Q}\) (i.e. morphisms \(V_{\mathbb{Q}}\to\mathbb{G}_{m}\)); so are \(rF\), \((b-c)\partial u/\partial b\) if \(j\notin J_{3}\), and \(a\), \(a\,\partial u/\partial a\) if \(j\in J_{3}\). So they are invertible regular
over some \(\mathbb{Z}[1/M]\). By the inverse function theorem in \(\mathbb{Z}_{p}^{\rm ur}[[t-t(x),u-u(x)]]\), the method of Lemma 3.6 thus yields (1)-(2) for each \(x_{0}\in\mathscr{U}(\mathbb{F}_{p})\), provided \(p\geq A\) for some constant \(A=A(j,c)>0\). For (3), note that if \(x\in G(\mathbb{Z}_{p})\) with \(p\) large, then \(a(x),b(x)-c\in\mathbb{Z}_{p}^{\times}\), and \(a-a(x)\), \(b-b(x)\) are regular local coordinates for \(G_{\mathbb{F}_{p}}\) at \(\rho(x)\in G(\mathbb{F}_{p})\).
We end with some local constancy analysis useful for Lemmas 4.7 and 5.1. For sets \(U\subseteq X\), let \(\mathsf{C}(U)=\bigcap_{j\in J:\,D_{j}\cap U\neq\emptyset}\mathrm{C}(j)\subseteq \mathbb{Q}\). The following lemma holds by definition:
**Lemma 3.9**.: _Suppose \(U\subseteq X\) and \(c\in\mathsf{C}(U)\). If \(j\in J\) and \(D_{j}\cap U\neq\emptyset\), then \(c\in\mathrm{C}(j)\)._
**Proposition 3.10**.: _Let \(\mathcal{C}=\{\mathrm{open}\ U\subseteq X:\mathsf{C}(U)\neq\emptyset\}\). Then \(\bigcup_{U\in\mathcal{C}}U=X\)._
Proof.: If \(x\in X\), then \(\mathsf{C}(U)=\mathsf{C}(\{x\})\) for small \(U\ni x\). To conclude, it suffices to show that \(\mathsf{C}(\{x\})\neq\emptyset\). But by Proposition 3.3, \(\mathsf{C}(\{x\})\) is empty if and only if \(x\in D_{i}\cap D_{k}\) for some \(i\in I^{d}\) and \(k\in I^{e}\) with \(d\neq e\). This situation is impossible: \(D\) has strict normal crossings, and \(i\neq k\) (since \(\mathrm{C}(i)\neq\mathrm{C}(k)\)), so \(b\in\mathcal{O}_{X,x}\); so \(d=b(x)=e\), a contradiction.
It turns out that sets \(U\in\mathcal{C}\) possess extra analytic symmetries. To facilitate proofs, let \(\Gamma_{p}(M,R)\) be the set of \(R\)-valued analytic functions on a \(p\)-adic analytic manifold \(M\).
**Lemma 3.11**.: _Let \(U\in\mathcal{C}\) and \(c\in\mathsf{C}(U)\). Let \(\mathscr{U}\) be the complement of the closure of \(X\setminus U\) in \(\mathscr{X}\). Then for all sufficiently large primes \(p\), the following hold for all \(x_{1},x_{2}\in 1+p\mathbb{Z}_{p}\):_
_(1) The map \(\phi\colon(a,b)\mapsto(ax_{1},c+(b-c)x_{2})\) on \(G(\mathbb{Q}_{p})\) maps \(G(\mathbb{Q}_{p})\cap\mathscr{U}(\mathbb{Z}_{p})\) to itself._
_(2) If \(j\in J\), then \(H_{D_{j},p}(g)=H_{D_{j},p}(\phi g)\) for all \(g\in G(\mathbb{Q}_{p})\cap\mathscr{U}(\mathbb{Z}_{p})\)._
Proof.: Let \(B\) be the closure in \(\mathscr{X}\) of the set of pairwise intersections of irreducible components of \(D\cup\mathrm{div}_{0}(b-c)\). Importantly for Hartogs-style arguments, \(B_{\mathbb{Q}}\) has dimension \(0\).
Let \(p\) be large. Let \(V\subseteq\mathscr{U}_{\mathbb{Z}_{p}}\) be an affine open set such that for each \(j\in J\), there exist \(t_{j},\upsilon_{j}\in\Gamma(V,\mathcal{O}_{V})\) with \(\mathrm{div}(t_{j}|_{V})=\mathscr{D}_{j}|_{V}\) such that \(t_{j}\), \(\upsilon_{j}\) are regular coordinates for \(V_{\mathbb{F}_{p}}\) (as in the proof of Lemma 3.8; note that \(D_{j}\) is smooth). By calculation in terms of the analytic local coordinates of Lemma 3.8 (applicable by Lemma 3.9), we find that for all \(f\in\Gamma(V,\mathcal{O}_{V})\) and \(x\in(V\setminus B)(\mathbb{Z}_{p}^{\rm ur})\), the rational functions \((t_{j}-\phi^{*}t_{j})/pt_{j}\), \((f-\phi^{*}f)/p\) on \(V\) lie in
\[\mathbb{Q}_{p}(V)\cap\mathbb{Z}_{p}^{\rm ur}[[t_{j}-t_{j}(x),\upsilon_{j}- \upsilon_{j}(x)]]=\mathbb{Q}_{p}(V)\cap\mathcal{O}_{V\times\mathbb{Z}_{p}^{ \rm ur},\rho(x)}^{\wedge}=\mathcal{O}_{V,\rho(x)}, \tag{3.1}\]
where \(\mathcal{O}^{\wedge}\) denotes completion. (The first equality in (3.1) comes from the maximal ideal \((p,t_{j}-t_{j}(x),\upsilon_{j}-\upsilon_{j}(x))\) of \(\mathcal{O}_{V\times\mathbb{Z}_{p}^{\rm ur},\rho(x)}\); for the second equality, note that \(\mathbb{Q}_{p}(V)=\mathrm{Frac}(\mathcal{O}_{V,\rho(x)})\), and \(\mathcal{O}_{V,\rho(x)}\to\mathcal{O}_{V\times\mathbb{Z}_{p}^{\rm ur},\rho(x)}^ {\wedge}\) is a faithfully flat injection of regular local rings.)
Thus \((t_{j}-\phi^{*}t_{j})/pt_{j},(f-\phi^{*}f)/p\in\bigcap_{x\in(V\setminus B)_{ \mathbb{F}_{p}}}\mathcal{O}_{V,x}\), since \((V\setminus B)(\mathbb{Z}_{p}^{\rm ur})\) maps onto \((V\setminus B)(\overline{\mathbb{F}}_{p})\). Yet \(\dim(B_{\mathbb{F}_{p}})=0=\dim(V_{\mathbb{F}_{p}})-2\), so every prime divisor \(P\subseteq V\) that intersects \(B_{\mathbb{F}_{p}}\) must pass through some point of \((V\setminus B)_{\mathbb{F}_{p}}\). Upon writing each \(\mathcal{O}_{V,x}\) as the intersection of its localizations at height \(1\) primes, we conclude that \((t_{j}-\phi^{*}t_{j})/pt_{j},(f-\phi^{*}f)/p\in\bigcap_{x\in V_{\mathbb{F}_{p}}} \mathcal{O}_{V,x}\).
Crucially, each function in \(\bigcap_{x\in V_{\mathbb{F}_{p}}}\mathcal{O}_{V,x}\) induces on \(\mathbb{Z}_{p}\)-points a function in \(\Gamma_{p}(V(\mathbb{Z}_{p}),\mathbb{Z}_{p})\). By converting between the local analytic coordinates in (3.1) and the global algebraic coordinates on \(V\) given by \(\Gamma(V,\mathcal{O}_{V})\), it follows that for every \(x\in V(\mathbb{F}_{p})\), the analytic map
\[\phi|_{\rho^{-1}(x)\cap\phi^{-1}(V(\mathbb{Q}_{p}))}\colon\rho^{-1}(x)\cap\phi^{ -1}(V(\mathbb{Q}_{p}))\to G(\mathbb{Q}_{p})\cap V(\mathbb{Q}_{p})\]
(where \(\phi^{-1}(V(\mathbb{Q}_{p})):=\{g\in G(\mathbb{Q}_{p}):\phi g\in V(\mathbb{Q}_{p})\}\)) extends uniquely to an analytic map \(\rho^{-1}(x)\to\rho^{-1}(x)\), since \(\rho^{-1}(x)\cap\phi^{-1}(V(\mathbb{Q}_{p}))\) is dense in \(\rho^{-1}(x)\) (in the \(p\)-adic topology).4
Thus \(\phi\) extends (uniquely) to a map \(V(\mathbb{Z}_{p})\to V(\mathbb{Z}_{p})\). Covering \(\mathscr{U}_{\mathbb{Z}_{p}}\) by \(V\) yields (1). Also, \((\phi^{*}t_{j})/t_{j}\in 1+p\Gamma_{p}(V(\mathbb{Z}_{p}),\mathbb{Z}_{p})\) for \(V\) implies \(|t_{j}(\phi x)|_{p}=|t_{j}(x)|_{p}\) for all \(x\in V(\mathbb{Z}_{p})\), whence (2) holds by Definition 2.5. (It is unclear to us whether (1) could be upgraded to a morphism \(\mathscr{V}\to\mathscr{V}\) for some \(\mathbb{Z}_{p}\)-scheme \(\mathscr{V}\) with \(\mathscr{V}(\mathbb{Z}_{p})=\mathscr{U}(\mathbb{Z}_{p})\). The problem with the candidate \(\mathscr{V}=\mathscr{U}_{\mathbb{Z}_{p}}\) is that it could presumably have a prime divisor disjoint from \(\mathscr{U}(\mathbb{Z}_{p})\).)
Lemma 3.11 leaves out small primes \(p\), which we address by a \(p\)-adic analytic Hartogs-type lemma. Let \(\operatorname{ord}_{0}(f)\in\mathbb{Z}_{\geq 0}\cup\{\infty\}\) be the degree of the lowest-degree nonzero homogeneous term of a power series \(f\) (with \(\operatorname{ord}_{0}(0):=\infty\)).
**Lemma 3.12**.: _Fix a finite extension \(K/\mathbb{Q}_{p}\) and an integer \(n\geq 2\). Fix \(P,Q\in\mathcal{O}_{K}[[\boldsymbol{x}]]=\mathcal{O}_{K}[[x_{1},\ldots,x_{n}]]\). Suppose \(\gcd(P,Q)=1\) and \(\operatorname{ord}_{0}(Q)=\operatorname{ord}_{0}(Q\bmod\pi_{K})\geq 1\), where \(\pi_{K}\) is a uniformizer of \(\mathcal{O}_{K}\). Then for some finite extension \(L/K\), the set \(\{\boldsymbol{x}\in p\mathcal{O}_{L}^{n}:Q(\boldsymbol{x})=0,P(\boldsymbol{x}) \neq 0\}\) has a limit point lying in \((p\mathcal{O}_{L}^{2}\setminus 0)\times\{0\}^{n-2}\)._
Proof.: If \(Q=0\), then \(P\in\mathcal{O}_{K}[[\boldsymbol{x}]]^{\times}\), so \(P(\boldsymbol{x})\in\mathcal{O}_{K}^{\times}\) for all \(\boldsymbol{x}\in p\mathcal{O}_{K}^{n}\), and the result is obvious. Now suppose \(Q\neq 0\). Since \(Q\notin\mathcal{O}_{K}[[\boldsymbol{x}]]^{\times}\), we must have \(P\neq 0\). Also, \(Q\) is primitive (i.e. \(\pi_{K}\nmid Q\)), since \(\operatorname{ord}_{0}(Q\bmod\pi_{K})=\operatorname{ord}_{0}(Q)<\infty\). By removing factors of \(\pi_{K}\) in \(P\), we may assume \(P\) is primitive too. Let \(P_{0},Q_{0}\in\mathcal{O}_{K}[\boldsymbol{x}]\) be the lowest-degree _primitive_ homogeneous terms in \(P\), \(Q\), respectively. Then in particular, \(\deg(Q_{0})=\operatorname{ord}_{0}(Q\bmod\pi_{K})=\operatorname{ord}_{0}(Q)\).
After replacing \(K\) with a finite extension if necessary, there exists a \(\boldsymbol{k}\in\mathcal{O}_{K}^{n-1}\times\{0\}\) such that \(P_{0}(k_{1},\ldots,k_{n-1},1)Q_{0}(k_{1},\ldots,k_{n-1},1)\in\mathcal{O}_{K}^{\times}\). Via the \(\mathcal{O}_{K}\)-linear automorphism \(\boldsymbol{x}\mapsto\boldsymbol{x}+x_{n}\boldsymbol{k}\) of \(\mathcal{O}_{K}[[\boldsymbol{x}]]\), we may then assume \(P_{0}(0,\ldots,0,1)Q_{0}(0,\ldots,0,1)\in\mathcal{O}_{K}^{\times}\). Let \(R=\mathcal{O}_{K}[[x_{1},\ldots,x_{n-1}]]\); then \(P,Q\bmod(\pi_{K},x_{1},\ldots,x_{n-1})\in(\mathcal{O}_{K}/\pi_{K})[[x_{n}]]\) have orders \(\deg P_{0}\), \(\deg Q_{0}\), respectively. So by the Weierstrass preparation theorem in \(R[[x_{n}]]\), there exist unique _monic_ polynomials \(P_{1},Q_{1}\in R[x_{n}]\) of degrees \(\deg P_{0}\), \(\deg Q_{0}\), respectively, with \(P/P_{1},Q/Q_{1}\in\mathcal{O}_{K}[[\boldsymbol{x}]]^{\times}\).
Let \(P_{1,j},Q_{1,j}\in R\) be the \(x_{n}^{j}\) coefficients of \(P_{1}\), \(Q_{1}\), respectively. Reduction modulo \((\pi_{K},x_{1},\ldots,x_{n-1})\) forces \(\pi_{K}\mid P_{1,j}(\boldsymbol{0})\) if \(j<\deg P_{1}\), and \(\pi_{K}\mid Q_{1,j}(\boldsymbol{0})\) if \(j<\deg Q_{1}\). Therefore, any divisor of \(P_{1}Q_{1}\) in \(R[x_{n}]\) either lies in \(R^{\times}\) or the ideal \((\pi_{K},x_{1},\ldots,x_{n})\). From \(\gcd(P,Q)=1\) in \(\mathcal{O}_{K}[[\boldsymbol{x}]]\), we thus get \(\gcd(P_{1},Q_{1})=1\) in \(R[x_{n}]\). So by Gauss' lemma, \(\gcd(P_{1},Q_{1})=1\) in \(\operatorname{Frac}(R)[x_{n}]\), a Euclidean domain, whence \(AP_{1}+BQ_{1}=C\) for some \(A,B,C\in R\) with \(C\neq 0\).
Moreover, \(Q/Q_{1}\in\mathcal{O}_{K}[[\boldsymbol{x}]]^{\times}\) implies \(\operatorname{ord}_{0}(Q_{1})=\operatorname{ord}_{0}(Q)=\deg(Q_{0})=\deg(Q_{1})\) (if we define \(\operatorname{ord}_{0}(Q_{1})\) viewing \(Q_{1}\) as a power series in \(\boldsymbol{x}\)), whence \(\operatorname{ord}_{0}(Q_{1,j})\geq\deg(Q_{1})-j\geq 1\) (so \(Q_{1,j}(\boldsymbol{0})=0\)) for all \(j<\deg Q_{1}\). Let \(f_{j}:=Q_{1,j}|_{x_{3}=\cdots=x_{n-1}=0}\in\mathcal{O}_{K}[[x_{1},x_{2}]]\).
The idea now is to choose \(\boldsymbol{x}\) with \(Q_{1}=0\), \(C\neq 0\). Let \(L(d)\) be the compositum of all degree \(\leq d\) extensions of \(K\) in \(\overline{K}\); then \(L(d)/K\) is finite (by Krasner's lemma).
_Case 1: \(n\geq 3\), and \(f_{j}=0\) for all \(j<\deg Q_{1}\)._ For each large \(v\geq 1\), choose \(x_{1},x_{2}\in p+p^{v}\mathcal{O}_{K}^{\times}\) and \(x_{3},\ldots,x_{n-1}\in p^{v}\mathcal{O}_{K}^{\times}\) with \(C\neq 0\); then choose \(x_{n}\in\overline{K}\) with \(Q_{1}=0\). (Roughly, \(v_{p}(x_{3}),\ldots,v_{p}(x_{n-1})\) ensure \(Q_{1}(x_{1},\ldots,x_{n})\) resembles a monomial in \(x_{n}\) of degree \(\deg Q_{1}\).) Letting \(v\to\infty\) produces an infinite sequence of \(\boldsymbol{x}\in p\mathcal{O}_{L(\deg Q_{1})}^{n}\) tending to \((p,p,0,\ldots,0)\).
_Case 2: \(n\geq 3\), and \(f_{j}\neq 0\) for some \(j<\deg Q_{1}\)._ Take \(j<\deg Q_{1}\) minimal with \(f_{j}\neq 0\). Via a linear automorphism \((x_{1},x_{2})\mapsto(x_{1},x_{2}+kx_{1})\) with \(k\in\mathcal{O}_{K}\), assume \(f_{j}|_{x_{2}=0}\neq 0\). Choose \(c\in\mathbb{Z}_{\geq 0}\) such that \(f_{j}(p^{c}y,0)\in\mathcal{O}_{K}[[y]]\) is divisible by its leading term. For each large \(v,w\geq 1\), choose \(x_{2}\in p^{w}+p^{v}\mathcal{O}_{K}^{\times}\) and \(x_{3},\ldots,x_{n-1}\in p^{wv}\mathcal{O}_{K}^{\times}\) with \(C\in\mathcal{O}_{K}[[x_{1}]]\setminus 0\). Then \(C\) has only finitely many zeros \(x_{1}\in\overline{K}\) with \(v_{p}(x_{1})>0\), and for each such \(x_{1}\) the set \(\{x_{n}\in\overline{K}:Q_{1}=0\}\) is finite. Thus there exists \(x_{n}\in p^{v}\mathcal{O}_{K}^{\times}\) such that \(\{x_{1}\in\overline{K}:v_{p}(x_{1})>0,\ Q_{1}=C=0\}=\emptyset\). Next, if \(v\), \(w\) are large enough, then by Weierstrass preparation in \(\mathcal{O}_{K}[[y]]\) (applied to \(Q_{1}(p^{c}y,x_{2},\ldots
of degree \(\operatorname{ord}_{0}(f_{j}|_{x_{2}=0})\).) Taking \(v\to\infty\) produces \(\boldsymbol{x}\in p\mathcal{O}_{L(w)}^{n}\) with \(1\ll\max(|x_{1}|_{p},|x_{2}|_{p})\ll 1\) and \(x_{3},\ldots,x_{n}\to 0\). Now pass to a convergent subsequence (by compactness).
_Case 3: \(n=2\)._ Here \(\#\{x_{1}\in\overline{K}:v_{p}(x_{1})>0,\ C=0\}<\infty\). So there are infinitely many \(x_{1}\in p^{\deg Q_{1}}\mathcal{O}_{K}^{\times}\) with \(C\neq 0\). For each such \(x_{1}\), there exists \(x_{2}\in p\mathcal{O}_{L(\deg Q_{1})}\) with \(Q_{1}=0\). (It is important here that \(Q_{1,j}(0)=0\) for all \(j<\deg Q_{1}\).) By compactness, this suffices.
**Proposition 3.13**.: _Let \(f\in\operatorname{Frac}(\mathbb{Z}_{p}[[\xi_{1},\ldots,\xi_{n},y,z]])\), where \(n\in\mathbb{Z}_{\geq 0}\). Suppose that for every \(l\in\mathbb{Z}_{\geq 1}\) and finite extension \(K/\mathbb{Q}_{p}\), there exists \(m=m(l,K)\in\mathbb{Z}_{\geq 1}\) such that \(f\in\Gamma_{p}(\mathcal{A}_{m}^{n}\times(\mathcal{A}_{l}^{2}\setminus \mathcal{A}_{l+1}^{2}),\mathcal{O}_{K})\), where \(\mathcal{A}_{l}:=p^{l}\mathcal{O}_{K}\). Then \(f\in\bigcup_{l\geq 1}\Gamma_{p}((p^{l}\mathbb{Z}_{p})^{n+2},\mathbb{Z}_{p})\)._
Proof.: Write \(f=\kappa P/Q\) with \(P,Q\in\mathbb{Z}_{p}[[\xi_{1},\ldots,z]]\), \(Q\neq 0\), and \(\kappa\in\mathbb{Q}_{p}^{\times}\). By a suitable scaling \((\xi_{1},\ldots,z)\mapsto(p^{d}\xi_{1},\ldots,p^{d}z)\), we may assume \(\operatorname{ord}_{0}(Q)=\operatorname{ord}_{0}(Q\bmod p)\). After dividing out \(\gcd(P,Q)\), and applying Gauss' lemma to the leading homogeneous terms of \(Q\), \(\gcd(P,Q)\), \(Q/\gcd(P,Q)\), we may assume that \(\gcd(P,Q)=1\) and \(\operatorname{ord}_{0}(Q)=\operatorname{ord}_{0}(Q\bmod p)\).
_Case 1: \(Q(0,\ldots,0)\neq 0\)._ Then \(Q(0,\ldots,0)\in\mathbb{Z}_{p}^{\times}\), so \(f\in\mathbb{Z}_{p}[[\xi_{1},\ldots,z]]\otimes\mathbb{Q}_{p}\). Hence \(f\in f(0,\ldots,0)+\Gamma_{p}((p^{l}\mathbb{Z}_{p})^{n+2},\mathbb{Z}_{p})\) if \(l\) is large enough. Since \(f(0,\ldots,0,p^{l})\in\mathbb{Z}_{p}\) for all \(l\geq 1\), we conclude that \(f(0,\ldots,0)\in\mathbb{Z}_{p}\), and thus \(f\in\Gamma_{p}((p^{l}\mathbb{Z}_{p})^{n+2},\mathbb{Z}_{p})\) for sufficiently large \(l\).
_Case 2: \(Q(0,\ldots,0)=0\)._ Then by Lemma 3.12, there exists a finite extension \(K/\mathbb{Q}_{p}\) and integer \(l\geq 1\) such that \(f\notin\bigcup_{m\geq 1}\Gamma_{p}(\mathcal{A}_{m}^{n}\times(\mathcal{A}_{l}^{2} \setminus\mathcal{A}_{l+1}^{2}),\mathcal{O}_{K})\). This is a contradiction.
**Lemma 3.14**.: _Let \(U\in\mathcal{C}\) and \(c\in\mathsf{C}(U)\). Fix a prime \(p\) and a compact open set \(O_{p}\subseteq U(\mathbb{Q}_{p})\). Then there exists an integer \(m\geq 1\) such that the following hold for all \(x_{1},x_{2}\in 1+p^{m}\mathbb{Z}_{p}\):_
_(1) The map \(\phi\colon(a,b)\mapsto(ax_{1},c+(b-c)x_{2})\) on \(G(\mathbb{Q}_{p})\) maps \(G(\mathbb{Q}_{p})\cap O_{p}\) to itself._
_(2) If \(j\in J\), then \(H_{D_{j},p}(g)=H_{D_{j},p}(\phi g)\) for all \(g\in G(\mathbb{Q}_{p})\cap O_{p}\)._
Proof.: It suffices to prove this for small compact open sets \(O_{p}\) covering \(U(\mathbb{Q}_{p})\). Let \(V\subseteq U_{\mathbb{Q}_{p}}\) be an affine open set such that for each \(j\in J\) there exists \(t_{j}\in\Gamma(V,\mathcal{O}_{V})\) with \(\operatorname{div}(t_{j}|_{V})=D_{j}|_{V}\). Let \(f_{1},\ldots,f_{k}\in\Gamma(V,\mathcal{O}_{V})\) generate \(\Gamma(V,\mathcal{O}_{V})\) as a \(\mathbb{Q}_{p}\)-algebra. Fix a small \(O_{p}\cong p\mathbb{Z}_{p}^{2}\), and assume in particular \(O_{p}\subseteq V(\mathbb{Q}_{p})\). If \(O_{p}\) is sufficiently small, then for each finite extension \(K/\mathbb{Q}_{p}\), we may view \(p\mathcal{O}_{K}^{2}\) as an open neighborhood of \(O_{p}\) in \(V(K)\). Furthermore, for \(B\) as in the proof of Lemma 3.11, we may assume \(B(K)\setminus B(\mathbb{Q}_{p})\) is disjoint from \(p\mathcal{O}_{K}^{2}\) for all \(K\).5 We may also assume, by translating coordinates if necessary, that \(O_{p}\cap B(\mathbb{Q}_{p})\subseteq\{(0,0)\}\).
Footnote 5: This is inessential but convenient; it is possible since \(B(\overline{\mathbb{Q}}_{p})\) is finite and \(V(K)\) is Hausdorff.
Algebraically, view \(\phi\) now as a rational map \(\mathbb{A}^{2}\times X\dasharrow X\) defined by the formula \((x_{1},x_{2},a,b)\mapsto(ax_{1},c+(b-c)x_{2})\) on \((a,b)\in G\). A \(p\)-adic analog of Lemma 3.6 (see Remark 3.7) shows that for each \(K/\mathbb{Q}_{p}\) and \(l\in\mathbb{Z}_{\geq 1}\), there exists \(m\in\mathbb{Z}_{\geq 1}\) such that the rational functions \((t_{j}-\phi^{*}t_{j})/pt_{j}\), \((f_{i}-\phi^{*}f_{i})/p\) on \(\mathbb{A}^{2}\times V\) lie in \(\Gamma_{p}((1+p^{m}\mathcal{O}_{K})^{2}\times(p^{l}\mathcal{O}_{K}^{2}\setminus p^{l+1}\mathcal{O}_{K}^{2}),\mathcal{O}_{K})\). By Proposition 3.13, we get \((t_{j}-\phi^{*}t_{j})/pt_{j},(f_{i}-\phi^{*}f_{i})/p\in\Gamma_{p}((1+p^{l}\mathbb{Z}_{p})^{2}\times(p^{l}\mathbb{Z}_{p})^{2},\mathbb{Z}_{p})\) for some \(l\geq 1\). In particular, \(\phi\) extends uniquely to an analytic map \(\Phi\colon(1+p^{l}\mathbb{Z}_{p})^{2}\times O_{p}\to O_{p}\), since \(((1+p^{l}\mathbb{Z}_{p})^{2}\times O_{p})\cap\phi^{-1}(V(\mathbb{Q}_{p}))\) is dense in \((1+p^{l}\mathbb{Z}_{p})^{2}\times O_{p}\). In addition, via Definition 2.5, \(H_{D_{j},p}=\Phi^{*}H_{D_{j},p}\) on \((1+p^{l}\mathbb{Z}_{p})^{2}\times O_{p}\) if \(O_{p}\) is sufficiently small. So (1)-(2) hold with \(m:=l\).
## 4. Non-archimedean local calculations
Recall the definition \(\mathsf{u}_{j}:=\operatorname{ord}_{D_{j}}(a)\) (for \(j\in J\)) from SS2.3. Throughout SS4, assume \(X\) is split and \(D\) has strict normal crossings, let \(\delta\in(0,1)\) and \(\mathsf{A}_{0}\in(1,\infty)\), and let \(\boldsymbol{s}\) be such that
\[-\delta\leq\Re(s_{j}-\mathsf{d}_{j}-2\mathsf{u}_{j})\leq\mathsf{A}_{0}\qquad \text{for all $j\in J$.} \tag{4.1}\]
In particular, \(\Re(s_{j}-\mathsf{d}_{j}+1)>0\) for \(j\in J_{1}\cup J_{3}\), but not always for \(j\in J_{2}\); this asymmetry is acceptable, due to the integrality condition \(N\alpha a\in\mathbb{Z}_{p}\) in (2.13). Also, the translate \(2\mathsf{u}_{j}\) in (4.1) is included to ensure satisfactory bounds in Lemmas 4.3, 4.5, and 4.6.
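(To see the asymmetry concretely: (4.1) forces \(\Re(s_{j}-\mathsf{d}_{j}+1)\geq 1-\delta+2\mathsf{u}_{j}\), which is positive whenever \(\mathsf{u}_{j}\geq 0\), i.e. for \(j\in J_{1}\cup J_{3}\), but may well be negative when \(\mathsf{u}_{j}<0\), i.e. for \(j\in J_{2}\).)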
For any \(\delta>0\), the region (4.1) goes beyond the region \(\Omega^{\prime}_{\epsilon}\) considered in [17, Lemma 5.11]. Our eventual success thus relies on more precise calculations, revealing leading-order structure as in [23], which we carry out using a new source of cancellation in the local integral (2.13): the nonconstancy identified in Proposition 3.5, fed into the exponential sum bounds of [1].
Over (4.1), it turns out that the most important factors \(H_{p}^{\vee}(\boldsymbol{s},\lambda,t,\alpha)\) in (2.20) are those with \(p\) large and \(v_{p}(\alpha)\) small but nonzero. But we also need reasonable control on other factors. To get our bearings, we first prove a useful bound (applicable to (2.13) after taking absolute values) valid for arbitrary \(p\), \(\alpha\), which does not require any subtle cancellations:
**Lemma 4.1**.: _Let \((\lambda,\alpha)\in\mathbf{M}\times\mathbb{Q}^{\times}\). Then \(\int_{G(\mathbb{Q}_{p}):\,N\alpha a\in\mathbb{Z}_{p}}dg/|H_{p}(\boldsymbol{s},g)|\) is (for all \(\epsilon>0\))_
\[\ll_{p,\epsilon}|\alpha|_{p}^{-\epsilon}\mathbf{1}_{|\alpha|_{p}\leq 1}+| \alpha|_{p}^{\epsilon}\sum_{j\in J_{1}}\lvert\alpha|_{p}^{-\Re(s_{j}-\mathsf{d }_{j}+1)/\mathsf{u}_{j}}\mathbf{1}_{|\alpha|_{p}>1}+|\alpha|_{p}^{-\epsilon} \sum_{j\in J_{2}}\lvert\alpha|_{p}^{\Re(s_{j}-\mathsf{d}_{j}+1)/|\mathsf{u}_{j }|}\mathbf{1}_{|\alpha|_{p}<1}.\]
Proof.: Since \(|H_{p}(\boldsymbol{s},g)|=H_{p}(\Re(\boldsymbol{s}),g)\), we may assume \(\boldsymbol{s}\) is real. Cover \(X(\mathbb{Q}_{p})\) by finitely many small open sets \(U\cong\mathbb{Z}_{p}^{2}\), forming a cover \(\mathscr{C}_{p}\) of \(X(\mathbb{Q}_{p})\). Writing \(U\cap D\subseteq D_{k}\cup D_{l}\) for some \(k,l\in J\) depending on \(U\) (possible since \(D\) has strict normal crossings), we get
\[\int_{G(\mathbb{Q}_{p}):\,N\alpha a\in\mathbb{Z}_{p}}\frac{dg}{|H_{p}(\boldsymbol{s},g)|}\ll_{p}\sum_{U\in\mathscr{C}_{p}}\int_{\mathbb{Z}_{p}^{2}}|y|_{p}^{(s_{k}-\mathsf{d}_{k})\varsigma_{k}}|z|_{p}^{(s_{l}-\mathsf{d}_{l})\varsigma_{l}}\mathbf{1}_{N\alpha y^{\mathsf{u}_{k}\varsigma_{k}}z^{\mathsf{u}_{l}\varsigma_{l}}\in\mathbb{Z}_{p}}\,dy\,dz,\]
where \(\varsigma_{j}:=\mathbf{1}_{U\cap D_{j}\neq\emptyset}\) for \(j\in\{k,l\}\), and where \(y\) (resp. \(z\)) is a local parameter for \(D_{k}\) (resp. \(D_{l}\)) if \(\varsigma_{k}=1\) (resp. \(\varsigma_{l}=1\)). Given \(U\), \(k\), \(l\), call the integral on the right \(I(U,k,l)\). The contribution to \(I\) from \(|y|_{p}=Y\), \(|z|_{p}=Z\) is \((1-p^{-1})^{2}\cdot f\cdot\mathbf{1}_{(Y,Z)\in\mathfrak{S}_{\mathbb{Z}}}\), where \(f=Y^{(s_{k}-\mathsf{d}_{k})\varsigma_{k}+1}Z^{(s_{l}-\mathsf{d}_{l})\varsigma_{l}+1}\) and \(\mathfrak{S}_{R}=\{(Y,Z)\in(0,1]^{2}:|N\alpha|_{p}\,Y^{\mathsf{u}_{k}\varsigma_{k}}Z^{\mathsf{u}_{l}\varsigma_{l}}\leq 1,\ (\log_{p}Y,\log_{p}Z)\in R^{2}\}\).
Here \(\log f\) is linear (hence convex) in \((\log_{p}Y,\log_{p}Z)\in\mathbb{R}^{2}\). We claim the following:
1. If \(L\in\mathbb{R}_{>0}\), then \(\#\{(Y,Z)\in\mathfrak{S}_{\mathbb{Z}}:f=L\}\ll_{\epsilon}(1+|\alpha|_{p}^{-1}+L^{-1})^{\epsilon}\).
2. If \(\mathfrak{S}_{\mathbb{R}}\neq\emptyset\), then \(\arg\max_{\mathfrak{S}_{\mathbb{R}}}f\) contains \(\ell_{1}\cap\ell_{2}\) for some two intersecting plane curves \(\ell_{1},\ell_{2}\in\{Y=1,Z=1,|N\alpha|_{p}\,Y^{u_{k}\varsigma_{k}}Z^{u_{l} \varsigma_{l}}=1\}\subseteq\mathbb{R}_{>0}^{2}\) (with transverse intersection).
3. If \(\mathfrak{S}_{\mathbb{R}}\neq\emptyset\) and \(\max_{\mathfrak{S}_{\mathbb{R}}}f=M_{\star}\), then \(M_{\star}+M_{\star}^{-1}\ll(|\alpha|_{p}+|\alpha|_{p}^{-1})^{O(1)}\).
Order \(k\), \(l\) so that \(\mathsf{u}_{k}\varsigma_{k}\geq\mathsf{u}_{l}\varsigma_{l}\). We prove (1)-(3) together, by casework.
_Case 1: \(\mathsf{u}_{k}\varsigma_{k}<0\)._ Then \(\mathfrak{S}_{\mathbb{R}}=\emptyset\) unless \(|N\alpha|_{p}\leq 1\). Also, \(\log_{p}Y,\log_{p}Z\ll|v_{p}(N\alpha)|\) for \((Y,Z)\in\mathfrak{S}_{\mathbb{R}}\). So (3) is clear; and \(\#\mathfrak{S}_{\mathbb{Z}}\ll|v_{p}(N\alpha)|^{O(1)}\cdot\mathbf{1}_{|N\alpha|_ {p}\leq 1}\), giving (1). If \(\mathfrak{S}_{\mathbb{R}}\neq\emptyset\), then \(\log\mathfrak{S}_{\mathbb{R}}\subseteq\mathbb{R}^{2}\) is a nonempty compact polytope (of dimension \(\leq 2\)), so by convexity, \(\log f\) is maximized (not necessarily uniquely) at a vertex of \(\mathfrak{S}_{\mathbb{R}}\), giving (2).
_Case 2: \(\mathsf{u}_{k}\varsigma_{k}\geq 0\)._ Then \((s_{k}-\mathsf{d}_{k})\varsigma_{k}\geq-\delta\), since either \(\varsigma_{k}=0\) or \(k\in J_{1}\cup J_{3}\). Now suppose \(f=L\) for some \((Y,Z)\in\mathfrak{S}_{\mathbb{R}}\); then \(Y=L^{e_{1}}/Z^{e_{2}}\) and \(Y^{u_{k}\varsigma_{k}}Z^{u_{l}\varsigma_{l}}=L^{e_{3}}/Z^{e_{4}}\), where
\[e_{1}=((s_{k}-\mathsf{d}_{k})\varsigma_{k}+1)^{-1},\ e_{2}=((s_{l}-\mathsf{d}_{l} )\varsigma_{l}+1)e_{1},\ e_{3}=\mathsf{u}_{k}\varsigma_{k}e_{1},\ e_{4}= \mathsf{u}_{k}\varsigma_{k}e_{2}-\mathsf{u}_{l}\varsigma_{l}.\]
If \(\mathsf{u}_{l}\varsigma_{l}\geq 0\), then \((s_{l}-\mathsf{d}_{l})\varsigma_{l}\geq-\delta\), so the constraint \(Y\leq 1\) in \(\mathfrak{S}_{\mathbb{R}}\) becomes \(Z\geq L^{e_{1}/e_{2}}\), where \(0<\frac{e_{1}}{e_{2}}\leq\frac{1}{1-\delta}\). Meanwhile, if \(\mathsf{u}_{l}\varsigma_{l}<0\), then \(\varsigma_{l}=1\) and (using (4.1) for \(j\in\{k,l\}\))
\[\tfrac{e_{4}}{e_{1}}=\mathsf{u}_{k}\varsigma_{k}\tfrac{e_{2}}{e_{1}}+|\mathsf{u}_{ l}|e_{1}^{-1}\geq\mathsf{u}_{k}\varsigma_{k}(2\mathsf{u}_{l}+1-\delta)+|\mathsf{u}_{ l}|((2\mathsf{u}_{k}-\delta)\varsigma_{k}+1)\geq(\mathsf{u}_{k}\varsigma_{k}+| \mathsf{u}_{l}|)(1-\delta),\]
so the constraint \(Y^{u_{k}\varsigma_{k}}Z^{u_{l}\varsigma_{l}}\leq|N\alpha|_{p}^{-1}\) becomes \(Z\geq|N\alpha|_{p}^{1/e_{4}}L^{e_{3}/e_{4}}\), where \(0<\frac{1}{e_{4}}\ll\frac{1}{e_{1}}\ll\mathsf{A}_{0}\) and \(0\leq\frac{e_{3}}{e_{4}}\leq\frac{1}{1-\delta}\). Either way, \(Z\leq 1\) now gives (1). We also find that \(\{(Y,Z)\in\mathfrak{S}_{\mathbb{R}}:f\geq M\}\)
is compact for every \(M\in\mathbb{R}_{>0}\). If \(\mathfrak{S}_{\mathbb{R}}\neq\emptyset\), then \(f\) achieves a maximum \(M_{\star}\); but \(\arg\max_{\mathfrak{S}_{\mathbb{R}}}f\) contains a vertex \(x\) of \(\{(Y,Z)\in\mathfrak{S}_{\mathbb{R}}:f\geq M_{\star}/2\}\) (with \(f(x)=M_{\star}\neq M_{\star}/2\)), giving (2) and \(x\in\{(1,1),(|N\alpha|_{p}^{-1/u_{k}},1),(1,|N\alpha|_{p}^{-1/u_{l}})\}\), giving (3).
With (1)-(3) proven, we now get (by summing over level sets \(f=L\), à la Lebesgue)
\[I(U,k,l)=(1-p^{-1})^{2}\sum_{(Y,Z)\in\mathfrak{S}_{\mathbb{Z}}}f(Y,Z)\ll_{ \epsilon}(1+|\alpha|_{p}^{-1}+|\alpha|_{p})^{\epsilon}\cdot(M_{1}+M_{2}+M_{3}),\]
where \(M_{1}=f_{\star}(1,1)\), \(M_{2}=f_{\star}(|N\alpha|_{p}^{-1/u_{k}},1)\mathbf{1}_{u_{k}\neq 0}\), \(M_{3}=f_{\star}(1,|N\alpha|_{p}^{-1/u_{l}})\mathbf{1}_{u_{l}\neq 0}\), where \(f_{\star}(x):=f(x)\mathbf{1}_{x\in\mathfrak{S}_{\mathbb{R}}}\). Here \(M_{1}=\mathbf{1}_{|N\alpha|_{p}\leq 1}\). Also, \(M_{2}\), \(M_{3}\) equal
\[|N\alpha|_{p}^{-(s_{j}-\mathsf{d}_{j}+1)/u_{j}}(\mathbf{1}_{j\in J_{1}}\mathbf{1}_{|N\alpha|_{p}\geq 1}+\mathbf{1}_{j\in J_{2}}\mathbf{1}_{|N\alpha|_{p}\leq 1})\mathbf{1}_{u_{j}\neq 0}\]
for \(j=k\), \(j=l\), respectively. This suffices, since \(|\mathscr{C}_{p}|\ll_{p}1\) (and \(N,\boldsymbol{s}\ll 1\)) and
\[\mathbf{1}_{|N\alpha|_{p}\leq 1} \leq\mathbf{1}_{|\alpha|_{p}\leq 1}+\sum_{i\in J_{1}}\lvert N \alpha|_{p}^{-(s_{i}-\mathbf{d}_{i}+1)/u_{i}}\mathbf{1}_{|\alpha|_{p}>1},\] \[|\alpha|_{p}^{-(s_{j}-\mathbf{d}_{j}+1)/u_{j}}\mathbf{1}_{j\in J _{1}}\mathbf{1}_{|N\alpha|_{p}\geq 1} \leq\mathbf{1}_{|\alpha|_{p}=1}+\sum_{i\in J_{1}}|\alpha|_{p}^{-(s _{i}-\mathbf{d}_{i}+1)/u_{i}}\mathbf{1}_{|\alpha|_{p}>1},\] \[|\alpha|_{p}^{-(s_{j}-\mathbf{d}_{j}+1)/u_{i}}\mathbf{1}_{j\in J _{2}}\mathbf{1}_{|N\alpha|_{p}\leq 1} \ll\mathbf{1}_{|\alpha|_{p}\leq 1}+\sum_{i\in J_{1}}| \alpha|_{p}^{-(s_{i}-\mathbf{d}_{i}+1)/u_{i}}\mathbf{1}_{|\alpha|_{p}>1}+\sum_ {i\in J_{2}}\lvert\alpha|_{p}^{(s_{i}-\mathbf{d}_{i}+1)/|u_{i}|}\mathbf{1}_{| \alpha|_{p}<1}.\]
(These final displayed inequalities follow from (4.1) and the fact \(J_{1}\neq\emptyset\).)
We call the strategy above, using principles (1)-(3), "\(\mathbb{R}\)-vertex bounding". For subsequent calculations, we choose \(c(j)\in\mathrm{C}(j)\) and let \(\mathsf{v}_{j}=\mathrm{ord}_{D_{j}}(b-c(j))\) for \(j\in J\). Also, we make the following definition for convenience in SS5 (when we apply Lemma 4.3).
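For a toy instance of this strategy: if \(\varsigma_{k}=\varsigma_{l}=\mathsf{u}_{k}=\mathsf{u}_{l}=1\) and \(|N\alpha|_{p}=p^{k_{0}}\) with \(k_{0}\geq 2\), then \(\log_{p}\mathfrak{S}_{\mathbb{R}}\) is the wedge \(\{(y^{\prime},z^{\prime})\in\mathbb{R}_{\leq 0}^{2}:y^{\prime}+z^{\prime}\leq-k_{0}\}\), and a linear functional with positive coefficients (such as \(\log f\) when \(\Re(s_{j}-\mathsf{d}_{j}+1)>0\) for \(j\in\{k,l\}\)) attains its maximum at one of the two vertices \((0,-k_{0})\), \((-k_{0},0)\), i.e. on \(\{Y=1\}\cap\{|N\alpha|_{p}\,YZ=1\}\) or \(\{Z=1\}\cap\{|N\alpha|_{p}\,YZ=1\}\), exactly as in principle (2).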
**Definition 4.2**.: Let \(\overline{S}=\overline{S}(X,\mathscr{X},H)\) be a superset of \(S\) (defined after Lemma 2.11) such that \(\{q\in\mathbb{Q}:I^{q}\neq\emptyset\}\subseteq\mathbb{Z}_{\overline{S}}\) and \(\{q_{1}-q_{2}:q_{1},q_{2}\in\mathbb{Q},\;I^{q_{1}},I^{q_{2}}\neq\emptyset\} \subseteq\mathbb{Z}_{\overline{S}}^{\times}\).
**Lemma 4.3** (Denominator bias).: _Suppose \(v_{p}(\alpha)=-k<0\). Let_
\[\mathfrak{D}=\mathfrak{D}_{p}(\boldsymbol{s},\lambda,\alpha):=H_{p}^{\vee}( \boldsymbol{s},\lambda,0,\alpha)-\mathbf{1}_{p\notin\overline{S}}\sum_{c\in \mathbb{Q}}\sum_{j\in J_{1}^{c}}e(-c\alpha\bmod\mathbb{Z}_{p})p^{-k(s_{j}- \mathbf{d}_{j}+1)/u_{j}}\mathbf{1}_{u_{j}|k}.\]
_Then \(\mathfrak{D}\ll_{\epsilon}p^{(\epsilon-2)k}\sum_{j\in J_{1}}(p^{\delta-1}+p^{- 1/2}\mathbf{1}_{u_{j}|k}+\mathbf{1}_{k>u_{j}})p^{(\delta-1)k/u_{j}}\)._
Proof.: Here \(|\alpha|_{p}=p^{k}\), so by Lemma 4.1 (and the bound \(p^{-k\Re(s_{j}-\mathbf{d}_{j}+1)/u_{j}}\leq p^{-2k}p^{(\delta-1)k/u_{j}}\) for \(j\in J_{1}\), using \(\mathsf{u}_{j}>0\)), we may assume \(p\) is large. Following [17, proof of Lemma 5.6], let \(\rho\) be the reduction map \(X(\mathbb{Q}_{p})=\mathscr{X}(\mathbb{Z}_{p})\to\mathscr{X}(\mathbb{F}_{p})\); then \(H_{p}^{\vee}=\sum_{x_{0}\in\bigcup_{j\in J_{1}}\mathscr{D}_{j}(\mathbb{F}_{p})} \mathcal{I}_{p}(x_{0})\), where
\[\mathcal{I}_{p}(x_{0}):=\int_{\rho^{-1}(x_{0}):\,\alpha a\in\mathbb{Z}_{p}}H_{ p}(\boldsymbol{s},g)^{-1}e(-\alpha b\bmod\mathbb{Z}_{p})\lambda_{S}(\alpha a)\,dg.\]
Let \(\mathcal{I}_{p}^{0}(x_{0})\) be the contribution to \(\mathcal{I}_{p}(x_{0})\) from \(\alpha a\in\mathbb{Z}_{p}^{\times}\), and let \(\mathcal{I}_{p}^{1}(x_{0})\) be the contribution from \(\alpha a\in p\mathbb{Z}_{p}\); then \(\mathcal{I}_{p}(x_{0})=\mathcal{I}_{p}^{0}(x_{0})+\mathcal{I}_{p}^{1}(x_{0})\), and we have
\[\mathcal{I}_{p}^{0}(x_{0})=\int_{\rho^{-1}(x_{0}):\,\alpha a\in\mathbb{Z}_{p}^{ \times}}\frac{e(-\alpha b\bmod\mathbb{Z}_{p})\,dg}{H_{p}(\boldsymbol{s},g)}, \quad|\mathcal{I}_{p}^{1}(x_{0})|\leq\int_{\rho^{-1}(x_{0}):\,\alpha a\in p \mathbb{Z}_{p}}\frac{dg}{|H_{p}(\boldsymbol{s},g)|}.\]
For each \(j\in J\), fix a dense open \(U_{j}\subseteq X\) on which \(D_{j}\cap U_{j}\) is a nonempty _principal_ Weil divisor and \(a,b-c(j)\in\Gamma(U_{j}\setminus D_{j},\mathcal{O}_{X})^{\times}\). Choose \(t_{j}\in\mathbb{Q}(U_{j})\) with \(\mathrm{div}(t_{j}|_{U_{j}})=D_{j}\cap U_{j}\). Let \(\mathscr{Z}_{j}\) be the closure of \(X\setminus U_{j}\) in \(\mathscr{X}\). Then let \(\mathscr{D}_{j}^{*}:=\mathscr{D}_{j}\setminus\mathscr{Z}_{j}\). We claim the following:
1. If \(j\in J_{1}\) and \(c(j)=c\), then \[\sum_{x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})}\mathcal{I}_{p}(x_{0})=(e(-c \alpha\bmod\mathbb{Z}_{p})\mathbf{1}_{u_{j}|k}\mathbf{1}_{j\in J_{1}^{c}}+O(p^{ -1/2}\mathbf{1}_{u_{j}|k})+O(p^{-1}))\cdot p^{-(s_{j}-\mathsf{d}_{j}+1)k/u_{j}}.\]
2. If \(x_{0}\in\bigcup_{j\in J_{1}}\mathscr{D}_{j}(\mathbb{F}_{p})\), then \[\mathcal{I}_{p}(x_{0})\ll_{\epsilon}p^{(\epsilon-2)k}\sum_{j\in J_{1}:\,x_{0} \in\mathscr{D}_{j}(\mathbb{F}_{p})}(p^{\delta-1}+\mathbf{1}_{k>u_{j}})p^{( \delta-1)k/u_{j}}.\]
These two claims would imply the lemma, since \(|H^{\vee}-\sum_{j\in J_{1}}\sum_{x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})} \mathcal{I}_{p}(x_{0})|\) is
\[\leq\sum_{j\in J_{1}}\sum_{x_{0}\in(\mathscr{D}_{j}\setminus\mathscr{D}_{j}^{* })(\mathbb{F}_{p})}|\mathcal{I}_{p}(x_{0})|\ll\max_{x_{0}\in\bigcup_{j\in J_{ 1}}\mathscr{D}_{j}(\mathbb{F}_{p})}|\mathcal{I}_{p}(x_{0})|.\]
We prove (1) first. Suppose \(j\in J_{1}\) and \(c(j)=c\), and let \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\). Then \(\rho^{-1}(x_{0})\) has analytic coordinates \(y,z\in p\mathbb{Z}_{p}\) with \(y/t_{j}\in 1+\mathbb{Z}_{p}[[y,z]]\) and \(a=w_{1}y^{u_{j}}\), \(b-c=y^{\mathsf{v}_{j}}(w_{2}+z)\) (with \(w_{1},w_{2}\in\mathbb{Z}_{p}^{\times}\)); this follows from Lemma 3.8. Upon writing \(dg\) in terms of \(\tau_{p}\) (using (2.7)) as in [17, proof of Lemma 5.6], we get
\[\mathcal{I}_{p}^{0}(x_{0}) =\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)=k/u_{j}}|y|_{p}^{s_{j}-\mathsf{d}_{j}}e(-c\alpha-y^{\mathsf{v}_{j}}(w_{2}+z)\alpha\bmod\mathbb{Z}_{p})\,\frac{dy\,dz}{1-p^{-1}}\] \[=e(-c\alpha\bmod\mathbb{Z}_{p})\int_{y\in p\mathbb{Z}_{p}:\,v_{p}(y)=k/u_{j}}|y|_{p}^{s_{j}-\mathsf{d}_{j}+1}e(-y^{\mathsf{v}_{j}}w_{2}\alpha\bmod\mathbb{Z}_{p})\frac{\mathbf{1}_{y^{\mathsf{v}_{j}}p\alpha\in\mathbb{Z}_{p}}}{p}\,\frac{dy/|y|_{p}}{1-p^{-1}}.\]
Thus \(p\mathcal{I}_{p}^{0}(x_{0})=p^{-(s_{j}-\mathsf{d}_{j}+1)k/u_{j}}e(-c\alpha \bmod\mathbb{Z}_{p})\mathbf{1}_{u_{j}|k}\) if \(\mathsf{v}_{j}\geq\mathsf{u}_{j}\); and if \(\mathsf{v}_{j}<\mathsf{u}_{j}\), then summing over \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\) first (nontrivially, for \(y\) held constant), \(y\) second (trivially), gives
\[\sum_{x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})}p\mathcal{I}_{p}^{0}(x_{0}) \ll p^{-\Re(s_{j}-\mathsf{d}_{j}+1)k/u_{j}}\mathbf{1}_{u_{j}|k}\cdot p^{1/2} \mathbf{1}_{\mathsf{v}_{j}k/u_{j}=k-1}\]
(by the general bound [1, Theorem 6] on exponential sums over curves; this applies because \(w_{2}\equiv(b-c)/t_{j}^{\mathsf{v}_{j}}\bmod p\), and \((b-c)/t_{j}^{\mathsf{v}_{j}}\) is nonconstant on \(D_{j}\) by Proposition 3.5).6 Furthermore, if \(\mathsf{v}_{j}\geq\mathsf{u}_{j}\), then (by Proposition 1.4) \(\mathsf{v}_{j}=\mathsf{u}_{j}\) and \(j\in J_{1}^{c}\). On the other hand, for \(\mathcal{I}_{p}^{1}(x_{0})\), summing a geometric series over \(v_{p}(y)\in\mathbb{Z}\) yields
Footnote 6: If \(\mathsf{v}_{j}=0\) (and \(\mathsf{u}_{j}=1\), \(k=1\)), then we can _only_ sum nontrivially over \(x_{0}\), not \(y\).
\[\mathcal{I}_{p}^{1}(x_{0})\ll\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)\geq(k+1)/u _{j}}|y|_{p}^{\Re(s_{j}-\mathsf{d}_{j})}\,dy\,dz\leq\frac{p^{-\Re(s_{j}-\mathsf{ d}_{j}+1)(k+1)/u_{j}}}{1-p^{-\Re(s_{j}-\mathsf{d}_{j}+1)}}\cdot\frac{1}{p}\ll \frac{p^{-\Re(s_{j}-\mathsf{d}_{j}+1)k/u_{j}}}{p^{3}},\]
since \(\Re(s_{j}-\mathsf{d}_{j}+1)\geq 2\mathsf{u}_{j}\). Finally, summing \(\mathcal{I}_{p}^{0}(x_{0})\), \(\mathcal{I}_{p}^{1}(x_{0})\) over \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\) using Lang-Weil delivers (1), since \(D_{j}\) is geometrically irreducible.
We now prove claim (2). If \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\setminus\bigcup_{i\in J\setminus\{j\}} \mathscr{D}_{i}(\mathbb{F}_{p})\) for some \(j\in J_{1}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)\geq k/u_{j}}|y|_ {p}^{\Re(s_{j}-\mathsf{d}_{j})}\,dy\,dz\ll p^{-\Re(s_{j}-\mathsf{d}_{j}+1)k/u_{j} }p^{-1}\leq p^{-(2u_{j}-\delta+1)k/u_{j}}p^{-1},\]
which suffices. Similarly, if \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some \(j\in J_{1}\) and \(i\in J_{3}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)\geq k/u_{j}}|y|_ {p}^{\Re(s_{j}-\mathsf{d}_{j})}|z|_{p}^{\Re(s_{i}-\mathsf{d}_{i})}\,dy\,dz\ll p^{- \Re(s_{j}-\mathsf{d}_{j}+1)k/u_{j}}p^{-\Re(s_{i}-\mathsf{d}_{i}+1)},\]
which suffices since \(\Re(s_{j}-\mathsf{d}_{j}+1)\geq 2\mathsf{u}_{j}-\delta+1\) and \(\Re(s_{i}-\mathsf{d}_{i}+1)\geq 1-\delta\).
The remaining cases are messy to do explicitly, so we use the "\(\mathbb{R}\)-vertex bounding" strategy of Lemma 4.1. If \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some distinct \(j,i\in J_{1}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{\begin{subarray}{c}y,z\in p\mathbb{Z}_{p}:\\ y^{\mathsf{u}_{j}}z^{\mathsf{u}_{i}}\in p^{k}\mathbb{Z}_{p}\end{subarray}}|y|_{p}^{\Re(s_{j}-\mathsf{d}_{j})}|z|_{p}^{\Re(s_{i}-\mathsf{d}_{i})}\,dy\,dz\ll_{\epsilon}p^{k\epsilon}\max_{\begin{subarray}{c}Y,Z\leq p^{-1}:\\ Y^{\mathsf{u}_{j}}Z^{\mathsf{u}_{i}}\leq p^{-k}\end{subarray}}Y^{\Re(s_{j}-\mathsf{d}_{j}+1)}Z^{\Re(s_{i}-\mathsf{d}_{i}+1)},\]
which is \(\leq p^{k\epsilon}p^{-\Re(s_{j}-\mathsf{d}_{j}+1)\max(1,(k-u_{i})/u_{j})}p^{- \Re(s_{i}-\mathsf{d}_{i}+1)}\) (assuming \(\arg\max(\cdots)\cap\{Z=p^{-1}\}\neq\emptyset\); otherwise, switch \(j\), \(i\)), which is in turn (using \(\Re(s_{j}-\mathsf{d}_{j}+1)\geq(1-\delta)+2\mathsf{u}_{j}\))
\[\leq p^{k\epsilon}p^{-(1-\delta)\max(1,(k-u_{i})/u_{j})-(2u_{j})(k-u_{i})/u_{j }}p^{-(2u_{i}-\delta+1)}=p^{k\epsilon}p^{-(1-\delta)\max(2,(k+u_{j}-u_{i})/u_ {j})}p^{-2k}; \tag{4.2}\]
this suffices, since \(\max(2,\frac{k+u_{j}-u_{i}}{u_{j}})\geq\mathbf{1}_{k\leq\max(u_{j},u_{i})}+ \frac{k}{\max(u_{j},u_{i})}\) (by Proposition 4.4 below). If \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some \(j\in J_{1}\) and \(i\in J_{2}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{\begin{subarray}{c}y,z\in p\mathbb{Z}_{p}:\\ y^{\mathsf{u}_{j}}z^{\mathsf{u}_{i}}\in p^{k}\mathbb{Z}_{p}\end{subarray}}|y|_{p}^{\Re(s_{j}-\mathsf{d}_{j})}|z|_{p}^{\Re(s_{i}-\mathsf{d}_{i})}\,dy\,dz\ll_{\epsilon}p^{k\epsilon}\max_{\begin{subarray}{c}Y,Z\leq p^{-1}:\\ Y^{\mathsf{u}_{j}}\leq p^{-k}Z^{|\mathsf{u}_{i}|}\end{subarray}}Y^{\Re(s_{j}-\mathsf{d}_{j}+1)}Z^{\Re(s_{i}-\mathsf{d}_{i}+1)};\]
but (since the conditions \(Z\leq p^{-1}\) and \(Y^{u_{j}}\leq p^{-k}Z^{|u_{i}|}\) together imply \(Y<1\), say)
\[\max_{\begin{subarray}{c}Y,Z\leq p^{-1:}\\ Y^{u_{j}}\leq p^{-k}Z^{|u_{i}|}\end{subarray}}(\cdots)\leq\max_{\begin{subarray} {c}Y\leq 1,\ Z\leq p^{-1:}\\ Y^{u_{j}}\leq p^{-k}Z^{|u_{i}|}\end{subarray}}(\cdots)=(\cdots)|_{\begin{subarray} {c}Z=p^{-1},\\ Y^{u_{j}}=p^{-k}Z^{|u_{i}|}\end{subarray}}=p^{-\Re(s_{j}-\mathsf{d}_{j}+1)(k+|u_ {i}|)/u_{j}}p^{-\Re(s_{i}-\mathsf{d}_{i}+1)},\]
which is \(\leq p^{-(2u_{j}-\delta+1)(k+|u_{i}|)/u_{j}}p^{-(2u_{i}-\delta+1)}=p^{-2k}p^{( \delta-1)(k+|u_{i}|)/u_{j}}p^{\delta-1}\leq p^{-2k}p^{(\delta-1)k/u_{j}}p^{ \delta-1}\).
**Proposition 4.4**.: _Let \(k,l,m\in\mathbb{R}_{>0}\). Then \(\max(1,\frac{k+m-l}{m})\geq\min(\frac{k}{l},\frac{k}{m})\)._
Proof.: If \(l\leq m\), then \(\frac{k+m-l}{m}\geq\frac{k}{m}\). If \(k\leq l\), then \(1\geq\frac{k}{l}\). If \(k>l>m\), then \(\frac{k+m-l}{m}>\frac{k}{l}\).
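(A quick numerical check: for \((k,l,m)=(5,2,3)\), one gets \(\max(1,\tfrac{5+3-2}{3})=2\geq\tfrac{5}{3}=\min(\tfrac{5}{2},\tfrac{5}{3})\).)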
**Lemma 4.5** (Numerator bias).: _Suppose \(v_{p}(\alpha)=k>0\). Let_
\[\mathfrak{N}=\mathfrak{N}_{p}(\boldsymbol{s},\lambda,\alpha):=H_{p}^{\vee}( \boldsymbol{s},\lambda,0,\alpha)-\sum_{j\in J_{2}^{*}}p^{-k(s_{j}-\mathsf{d}_{j }+1)/|u_{j}|}\mathbf{1}_{u_{j}|k}.\]
_Then \(\mathfrak{N}\ll_{\epsilon}p^{(2+\epsilon)k}\sum_{j\in J_{2}}(p^{\delta-1}+p^{- 1/2}\mathbf{1}_{u_{j}|k}+\mathbf{1}_{k>|u_{j}|})p^{(\delta-1)k/|u_{j}|}\)._
Proof.: Here \(|\alpha|_{p}=p^{-k}\), so by Lemma 4.1 and the bound \(1+p^{-k\Re(s_{j}-\mathsf{d}_{j}+1)/|u_{j}|}\leq 2p^{2k}p^{(\delta-1)k/|u_{j}|}\) (and the fact \(J_{2}\neq\emptyset\)), we may assume \(p\) is large. Define \(\mathcal{I}_{p}(x_{0})\), \(\mathscr{D}_{j}^{*}\) as before. Then \(H_{p}^{\vee}=\sum_{x_{0}\in\mathscr{X}(\mathbb{F}_{p})}\mathcal{I}_{p}(x_{0})\). As noted in [17, proof of Lemma 5.8], we have
\[\sum_{x_{0}\notin\bigcup_{j\in J_{2}}\mathscr{D}_{j}(\mathbb{F}_{p})}|\mathcal{ I}_{p}(x_{0})|\leq\sum_{x_{0}\notin\bigcup_{j\in J_{2}}\mathscr{D}_{j}( \mathbb{F}_{p})}\int_{\rho^{-1}(x_{0})}|H_{p}(\boldsymbol{s},g)|^{-1}\,dg\ll 1;\]
this is because \(\Re(s_{j}-\mathsf{d}_{j}+1)\geq 1-\delta>0\) for \(j\in J_{1}\cup J_{3}\). This \(O(1)\) bound is satisfactory, since \(J_{2}\neq\emptyset\) and \(1\leq p^{2k}p^{\delta-1}p^{(\delta-1)k/|u_{j}|}\). Thus it remains to study \(\mathcal{I}_{p}(x_{0})\) for \(x_{0}\in\bigcup_{j\in J_{2}}\mathscr{D}_{j}(\mathbb{F}_{p})\). We claim the following, of which the lemma is a direct consequence:
1. If \(j\in J_{2}\) and \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\), then \[p\mathcal{I}_{p}(x_{0})=p^{-(s_{j}-\mathsf{d}_{j}+1)k/|u_{j}|}\mathbf{1}_{u_{j} |k}\mathbf{1}_{j\in J_{2}^{*}}+p^{(2|u_{j}|+\delta-1)k/|u_{j}|}O(p^{-1/2} \mathbf{1}_{u_{j}|k}+p^{-1}).\]
2. If \(x_{0}\in\bigcup_{j\in J_{2}}\mathscr{D}_{j}(\mathbb{F}_{p})\), then \[\mathcal{I}_{p}(x_{0})\ll_{\epsilon}p^{(2+\epsilon)k}\sum_{j\in J_{2}:\,x_{0}\in \mathscr{D}_{j}(\mathbb{F}_{p})}(p^{\delta-1}+\mathbf{1}_{k>|u_{j}|})p^{( \delta-1)k/|u_{j}|}.\]
For (1), say \(j\in J_{2}\), \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\). Then \(\rho^{-1}(x_{0})\) has analytic coordinates \(y,z\in p\mathbb{Z}_{p}\) with \(a=w_{1}y^{u_{j}}\), \(b-c(j)=y^{\mathsf{v}_{j}}(w_{2}+z)\) (with \(w_{1},w_{2}\in\mathbb{Z}_{p}^{\times}\)); see Lemma 3.8. So (using \(c(j)\alpha\in\mathbb{Z}_{p}\))
\[\mathcal{I}_{p}(x_{0})=\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)\leq k/|\mathsf{ u}_{j}|}|y|_{p}^{s_{j}-\mathsf{d}_{j}}e(-y^{\mathsf{v}_{j}}(w_{2}+z)\alpha \bmod\mathbb{Z}_{p})\lambda_{S}(\alpha y^{u_{j}})\,\frac{dy\,dz}{1-p^{-1}}.\]
For any given \(y\), the integral over \(z\) vanishes unless \(y^{\mathsf{v}_{j}}p\alpha\in\mathbb{Z}_{p}\), i.e. \(v_{p}(y)\leq(k+1)/|\mathsf{v}_{j}|\); here \(\mathsf{v}_{j}\leq\mathsf{u}_{j}<0\) by Proposition 1.4, since \(j\in J_{2}\). Integrating over the ranges \(v_{p}(y)=(k+1)/|\mathsf{v}_{j}|\) (i.e. \(y^{\mathsf{v}_{j}}\alpha\in p^{-1}\mathbb{Z}_{p}^{\times}\)) and \(v_{p}(y)\leq k/|\mathsf{v}_{j}|\) (i.e. \(y^{\mathsf{v}_{j}}\alpha\in\mathbb{Z}_{p}\)) separately, we then get
\[p\mathcal{I}_{p}(x_{0}) =p\int_{y,z\in p\mathbb{Z}_{p}:\,(k+1)/|\mathsf{v}_{j}|=v_{p}(y)\leq k/|\mathsf{u}_{j}|}(\cdots)+p\int_{y,z\in p\mathbb{Z}_{p}:\,v_{p}(y)\leq k/|\mathsf{v}_{j}|}(\cdots)\] \[=p^{-\Re(s_{j}-\mathsf{d}_{j}+1)(k+1)/|\mathsf{v}_{j}|}O(p^{-1/2})\mathbf{1}_{\mathbb{Z}\ni(k+1)/|\mathsf{v}_{j}|\leq k/|\mathsf{u}_{j}|}+\sum_{1\leq r\leq k/|\mathsf{v}_{j}|}p^{-(s_{j}-\mathsf{d}_{j}+1)r}\lambda_{S}(p^{k-r|\mathsf{u}_{j}|});\]
the \(O(p^{-1/2})\) factor comes from cancellation over \(y\), occurring since \(\mathsf{v}_{j}\neq 0\) and \(w_{2}\in\mathbb{Z}_{p}^{\times}\). Since \(-\Re(s_{j}-\mathsf{d}_{j}+1)\leq 2|\mathsf{u}_{j}|+\delta-1\), and any integer \(<k/|\mathsf{u}_{j}|\) is \(\leq(k-1)/|\mathsf{u}_{j}|\), we get
\[p\mathcal{I}_{p}(x_{0})=p^{-(s_{j}-\mathsf{d}_{j}+1)k/|\mathsf{u}_{j}|} \mathbf{1}_{\mathsf{u}_{j}|k}\mathbf{1}_{\mathsf{v}_{j}=\mathsf{u}_{j}}+p^{(2| \mathsf{u}_{j}|+\delta-1)k/|\mathsf{u}_{j}|}O(p^{-1/2}\mathbf{1}_{\mathsf{u}_{ j}|k}+p^{-(2|\mathsf{u}_{j}|+\delta-1)/|\mathsf{u}_{j}|}).\]
This implies (1), since \(2|\mathsf{u}_{j}|+\delta-1\geq|\mathsf{u}_{j}|\).
We now turn to (2). If \(j\in J_{2}\) and \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\setminus\bigcup_{i\in J\setminus\{j \}}\mathscr{D}_{i}(\mathbb{F}_{p})\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{p\mathbb{Z}_{p}^{2}:\,v_{p}(y)\leq k/|\mathsf{u} _{j}|}|y|_{p}^{\Re(s_{j}-\mathsf{d}_{j})}\,dy\,dz\leq\int_{p\mathbb{Z}_{p}^{2}: \,v_{p}(y)\leq k/|\mathsf{u}_{j}|}|y|_{p}^{2\mathsf{u}_{j}-\delta}\,dy\,dz\ll \frac{p^{-(2u_{j}-\delta+1)k/|\mathsf{u}_{j}|}}{p},\]
which suffices. Similarly, if \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some \(j\in J_{2}\) and \(i\in J_{3}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{p\mathbb{Z}_{p}^{2}:\,v_{p}(y)\leq k/|\mathsf{u} _{j}|}|y|_{p}^{2\mathsf{u}_{j}-\delta}|z|_{p}^{2\mathsf{u}_{i}-\delta}\,dy\,dz \ll p^{-(2\mathsf{u}_{j}-\delta+1)k/|\mathsf{u}_{j}|}p^{-(1-\delta)}\quad( \text{since }\mathsf{u}_{i}=0).\]
We treat the remaining cases by "\(\mathbb{R}\)-vertex bounding" as before. If \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some distinct \(j,i\in J_{2}\), then \(\mathcal{I}_{p}(x_{0})=0\) unless \(k\geq|\mathsf{u}_{i}|+|\mathsf{u}_{j}|\), in which case
\[\mathcal{I}_{p}(x_{0})\ll\int_{\begin{subarray}{c}y,z\in p\mathbb{Z}_{p}:\\ p^{k}y^{\mathsf{u}_{j}}z^{\mathsf{u}_{i}}\in\mathbb{Z}_{p}\end{subarray}}|y|_{p}^{2\mathsf{u}_{j}-\delta}|z|_{p}^{2\mathsf{u}_{i}-\delta}\,dy\,dz\ll_{\epsilon}p^{k\epsilon}\max_{\begin{subarray}{c}Y,Z\leq p^{-1}:\\ Y^{|\mathsf{u}_{j}|}Z^{|\mathsf{u}_{i}|}\geq p^{-k}\end{subarray}}Y^{2\mathsf{u}_{j}-\delta+1}Z^{2\mathsf{u}_{i}-\delta+1},\]
where \(\arg\max(\cdots)\subseteq\{Y^{|\mathsf{u}_{j}|}Z^{|\mathsf{u}_{i}|}=p^{-k}\}\) (since \(Y^{2\mathsf{u}_{j}-\delta+1}Z^{2\mathsf{u}_{i}-\delta+1}\) is decreasing in \(Y\), \(Z\)) and thus, after switching \(j\), \(i\) if necessary so that \(\arg\max(\cdots)\cap\{Z=p^{-1}\}\neq\emptyset\), we have
\[p^{k\epsilon}\max(\cdots)=p^{k\epsilon}p^{(2\mathsf{u}_{j}-\delta+1)(|\mathsf{u}_{i}|-k)/|\mathsf{u}_{j}|}p^{-(2\mathsf{u}_{i}-\delta+1)}=p^{k\epsilon}p^{2k}p^{(\delta-1)(k+|\mathsf{u}_{j}|-|\mathsf{u}_{i}|)/|\mathsf{u}_{j}|}\]
(cf. (4.2)); here \(\frac{k+|\mathsf{u}_{j}|-|\mathsf{u}_{i}|}{|\mathsf{u}_{j}|}=\max(2,\frac{k+| \mathsf{u}_{j}|-|\mathsf{u}_{i}|}{|\mathsf{u}_{j}|})\geq\mathbf{1}_{k\leq\max(| \mathsf{u}_{j}|,|\mathsf{u}_{i}|)}+\frac{k}{\max(|\mathsf{u}_{j}|,|\mathsf{u}_{i}|)}\) by Proposition 4.4. If \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some \(j\in J_{2}\) and \(i\in J_{1}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{\begin{subarray}{c}y,z\in p\mathbb{Z}_{p}:\\ p^{k}y^{u_{j}}z^{u_{i}}\in\mathbb{Z}_{p}\end{subarray}}|y|_{p}^{2\mathsf{u}_{j}- \delta}|z|_{p}^{2\mathsf{u}_{i}-\delta}\,dy\,dz\ll_{\epsilon}p^{k\epsilon}\max_{ \begin{subarray}{c}Y,Z\leq p^{-1}:\\ Y^{|\mathsf{u}_{j}|}\geq p^{-k}Z^{\mathsf{u}_{i}}\end{subarray}}Y^{2\mathsf{u }_{j}-\delta+1}Z^{2\mathsf{u}_{i}-\delta+1},\]
where (since \(Y\mapsto Y^{2\mathsf{u}_{j}-\delta+1}\) is decreasing, and \(\{Z\leq p^{-1}\}\cap\{Y^{|\mathsf{u}_{j}|}=p^{-k}Z^{\mathsf{u}_{i}}\}\subseteq\{Y<1\}\))
\[\max_{\begin{subarray}{c}Y,Z\leq p^{-1}:\\ Y^{|\mathsf{u}_{j}|}\geq p^{-k}Z^{\mathsf{u}_{i}}\end{subarray}}(\cdots)\leq\max_{\begin{subarray}{c}Y\leq 1,\ Z\leq p^{-1}:\\ Y^{|\mathsf{u}_{j}|}\geq p^{-k}Z^{\mathsf{u}_{i}}\end{subarray}}(\cdots)=(\cdots)|_{\begin{subarray}{c}Z=p^{-1},\\ Y^{|\mathsf{u}_{j}|}=p^{-k}Z^{\mathsf{u}_{i}}\end{subarray}},\]
which is \(=p^{-(2u_{j}-\delta+1)(k+u_{i})/|u_{j}|}p^{-(2u_{i}-\delta+1)}=p^{2k}p^{(\delta-1 )(k+u_{i})/|u_{j}|}p^{\delta-1}\leq p^{2k}p^{(\delta-1)k/|u_{j}|}p^{\delta-1}\).
We now handle the "generic" case of \(p\) coprime to the numerator and denominator of \(\alpha\).
**Lemma 4.6**.: _Say \(v_{p}(\alpha)=0\). Let \(\mathfrak{G}=\mathfrak{G}_{p}(\boldsymbol{s},\lambda,\alpha):=H_{p}^{\vee}( \boldsymbol{s},\lambda,0,\alpha)-1\). Then \(\mathfrak{G}\ll p^{2(\delta-1)}\)._
Proof.: For small \(p\), use Lemma 4.1 (which gives the satisfactory bound \(\mathfrak{G}\ll_{p}1\)). For large \(p\), we roughly follow [17, proof of Lemma 5.10]. Define \(\mathcal{I}_{p}(x_{0})\), \(\mathscr{D}_{j}^{*}\) as before. Since \(\mathcal{I}_{p}(x_{0})=p^{-2}/(1-p^{-1})=1/|G(\mathbb{F}_{p})|\) for all \(x_{0}\notin\bigcup_{j\in J}\mathscr{D}_{j}(\mathbb{F}_{p})\), we have
\[H_{p}^{\vee}(\boldsymbol{s},\lambda,0,\alpha)-1=\sum_{x_{0}\in\bigcup_{j\in J }\mathscr{D}_{j}(\mathbb{F}_{p})}\mathcal{I}_{p}(x_{0})=\sum_{x_{0}\in\bigcup _{j\in J_{1}\cup J_{2}}\mathscr{D}_{j}(\mathbb{F}_{p})}\mathcal{I}_{p}(x_{0}),\]
since \(\mathcal{I}_{p}(x_{0})=0\) for all \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\setminus\bigcup_{i\in J_{1}}\mathscr{ D}_{i}(\mathbb{F}_{p})\) if \(j\in J_{2}\). We claim the following, which readily imply the lemma (by summing over \(\mathscr{D}_{j}^{*}\) first, and then \(\mathscr{D}_{j}\setminus\mathscr{D}_{j}^{*}\) separately):
1. If \(j\in J_{1}\cup J_{3}\) and \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\), then \(\mathcal{I}_{p}(x_{0})\ll p^{\delta-3}\).
2. If \(j\in J_{1}\cup J_{3}\) and \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\), then \(\mathcal{I}_{p}(x_{0})\ll p^{2\delta-2}\).
For (2), we integrate absolutely. If \(x_{0}\in\mathscr{D}_{j}(\mathbb{F}_{p})\setminus\bigcup_{i\in J\setminus\{j \}}\mathscr{D}_{i}(\mathbb{F}_{p})\) for some \(j\in J_{1}\cup J_{3}\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{p\mathbb{Z}_{p}^{2}}\!\!|y|_{p}^{\Re(s_{j}- \mathsf{d}_{j})}\,dy\,dz\leq\int_{p\mathbb{Z}_{p}^{2}}\!\!|y|_{p}^{2u_{j}- \delta}\,dy\,dz\ll p^{-(2u_{j}-\delta+1)}p^{-1}=p^{\delta-2-2u_{j}}, \tag{4.3}\]
which is \(\leq p^{\delta-2}\) since \(\mathsf{u}_{j}\geq 0\). If \(x_{0}\in(\mathscr{D}_{j}\cap\mathscr{D}_{i})(\mathbb{F}_{p})\) for some distinct \(j,i\in J\), then
\[\mathcal{I}_{p}(x_{0})\ll\int_{p\mathbb{Z}_{p}^{2}:y^{u_{j}}z^{u_{i}}\in \mathbb{Z}_{p}}\!\!|y|_{p}^{\Re(s_{j}-\mathsf{d}_{j})}|z|_{p}^{\Re(s_{i}- \mathsf{d}_{i})}\,dy\,dz\leq\int_{p\mathbb{Z}_{p}^{2}:y^{u_{j}}z^{u_{i}}\in \mathbb{Z}_{p}}\!\!|y|_{p}^{2u_{j}-\delta}|z|_{p}^{2u_{i}-\delta}\,dy\,dz,\]
which (since \(|y^{u_{j}}z^{u_{i}}|_{p}\leq 1\)) is \(\leq\int_{p\mathbb{Z}_{p}^{2}}\!\!|y|_{p}^{-\delta}|z|_{p}^{-\delta}\,dy\,dz=p^ {2\delta-2}\). Thus (2) holds.
Yet (1) requires cancellation in some cases. Suppose \(x_{0}\in\mathscr{D}_{j}^{*}(\mathbb{F}_{p})\). Claim (1) is clear by (4.3) if \(j\in J_{1}\), because then \(\mathsf{u}_{j}\geq 1\). On the other hand, if \(j\in J_{3}\), then Lemma 3.8 gives
\[\mathcal{I}_{p}(x_{0}) =\int_{p\mathbb{Z}_{p}^{2}}\!\!|y|_{p}^{s_{j}-\mathsf{d}_{j}}e(- w_{1}y^{v_{j}}\alpha\bmod\mathbb{Z}_{p})\,\frac{dy\,dz}{1-p^{-1}}\] \[=\int_{p\mathbb{Z}_{p}^{2}}\!\!|y|_{p}^{s_{j}-\mathsf{d}_{j}+1}e( -w_{1}y^{v_{j}}\alpha\bmod\mathbb{Z}_{p})\,\frac{dy/|y|_{p}}{p-1}=\frac{p^{-( s_{j}-\mathsf{d}_{j}+1)}}{p-1}\cdot\frac{-\mathbf{1}_{|\mathsf{v}_{j}|=1}}{p}\ll p ^{-\Re(s_{j}-\mathsf{d}_{j}+3)}\]
(since \(c(j)\alpha\in\mathbb{Z}_{p}\) and \(\mathsf{v}_{j}<0\)), which suffices for (1) since \(\Re(s_{j}-\mathsf{d}_{j})\geq-\delta\).
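(The bound \(\mathfrak{G}\ll p^{2(\delta-1)}\) just proven, with exponent \(2(\delta-1)<-1\) for \(\delta<1/2\), is what makes the sum over square-free \(r\) in (5.5) below converge.)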
Aside from the _estimates_ above, it is essential for us to have a local _constancy_ result:
**Lemma 4.7**.: _There exists \(M\in\mathbb{Z}_{\geq 1}\) such that if \(\alpha^{\prime}/\alpha\equiv 1\bmod p^{\max(1,-v_{p}(\alpha))}M\mathbb{Z}_{p}\), then \(\mathfrak{F}_{p}(\boldsymbol{s},\lambda,\alpha)=\mathfrak{F}_{p}(\boldsymbol{s}, \lambda,\alpha^{\prime})\), where \(\mathfrak{F}=\mathfrak{D}\cdot\mathbf{1}_{v_{p}(\alpha)<0}+\mathfrak{N}\cdot \mathbf{1}_{v_{p}(\alpha)>0}+\mathfrak{G}\cdot\mathbf{1}_{v_{p}(\alpha)=0}\)._
Proof.: Let \(M\geq 1\) be a highly divisible integer. By Lemmas 3.11 and 3.14 and Proposition 3.10, we may partition \(X(\mathbb{Q}_{p})\) into compact open sets \(O_{c}\) (indexed by a finite set of \(c\in\mathbb{Q}\)) such that (1) \(O_{c}\) is invariant under an action of \(x_{1},x_{2}\in 1+pM\mathbb{Z}_{p}\) that is defined on \(G(\mathbb{Q}_{p})\) by scaling \(a\), \(b-c\) by \(x_{1}\), \(x_{2}\), respectively; and (2) \(H_{p}(\boldsymbol{s},g)\) is invariant under this group action. So by (2.13), we may decompose \(H_{p}^{\vee}(\boldsymbol{s},\lambda,0,\alpha)\) (for any \(\alpha\in\mathbb{Q}_{p}^{\times}\)) as a sum of the form \(\sum_{c}e(c\alpha\bmod\mathbb{Z}_{p})f_{c,p}(\boldsymbol{s},\lambda,\alpha)\), where \(f_{c,p}\) is invariant under scaling of \(\alpha\in\mathbb{Q}_{p}^{\times}\) by \(1+pM\mathbb{Z}_{p}\). Since \(e(c\alpha\bmod\mathbb{Z}_{p})\) depends only on \(c\alpha\bmod\mathbb{Z}_{p}\), the desired lemma follows.
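(Concretely, the subtracted main terms in \(\mathfrak{D}\) and \(\mathfrak{N}\) are themselves invariant under such scalings: they depend on \(\alpha\) only through \(k=|v_{p}(\alpha)|\) and, for \(\mathfrak{D}\), the factors \(e(-c\alpha\bmod\mathbb{Z}_{p})\); and if \(v_{p}(\alpha)=-k\) and \(\alpha^{\prime}/\alpha\equiv 1\bmod p^{k}M\mathbb{Z}_{p}\), then \(c\alpha^{\prime}-c\alpha\in c\alpha p^{k}M\mathbb{Z}_{p}\subseteq\mathbb{Z}_{p}\) once \(M\) clears the denominators of the finitely many relevant \(c\).)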
(Previously, a similar but more explicit non-archimedean symmetry argument has played an essential role in the circle method over function fields; see [1, Lemma 3.6].)
## 5. Archimedean endgame
In this section, assume \(X\) is split and \(D\) has strict normal crossings. For convenience, let \(\mathbf{u}:=\operatorname{div}(a)=\sum_{j\in J}\mathsf{u}_{j}D_{j}\in\mathbb{R}^{J}\) and \(\mathbf{d}:=-\operatorname{div}(\omega)=\sum_{j\in J}\mathsf{d}_{j}D_{j}\in\mathbb{R}^{J}\).
Let \(C^{\infty}(X(\mathbb{R}))\) be the set of smooth \(f\colon X(\mathbb{R})\to\mathbb{C}\) on the (possibly non-orientable) closed manifold \(X(\mathbb{R})\). The _left-invariant_ differential operators \(a\,\frac{\partial}{\partial a}\) and \(a\,\frac{\partial}{\partial b}\) play an important role in [17, proof of Lemma 5.9], but for us the (non-invariant) operators \((b-c)\,\frac{\partial}{\partial b}\) help too.
**Lemma 5.1**.: _Let \(p,q,r\in\mathbb{Z}_{\geq 0}\) and \(f\in C^{\infty}(X(\mathbb{R}))\). Suppose \(f\) is supported on a compact set \(K\subseteq U(\mathbb{R})\) for some \(U\in\mathcal{C}\) (see Proposition 3.10), and choose \(c\in\mathsf{C}(U)\)._
_(1) The function \(((b-c)\,\frac{\partial}{\partial b})^{p}(a\,\frac{\partial}{\partial a})^{q}( a\,\frac{\partial}{\partial b})^{r}f|_{G(\mathbb{R})}\) extends to a function in \(C^{\infty}(X(\mathbb{R}))\)._
_(2) Let \(j\in J\). The function \(H_{D_{j},\infty}(a,b)\cdot((b-c)\,\frac{\partial}{\partial b})^{p}(a\,\frac{ \partial}{\partial a})^{q}(a\,\frac{\partial}{\partial b})^{r}[f(a,b)H_{D_{j},\infty}(a,b)^{-1}]\) on \(G(\mathbb{R})\) extends to a function in \(C^{\infty}(X(\mathbb{R}))\)._
Proof.: (1): Let \(x\in K\). By the chain rule, it suffices to prove that the derivatives of local coordinates \(t,u\in\mathbb{Q}(X)\) near \(x\) are regular. By algebraic Hartogs, it suffices to prove regularity away from a codimension 2 locus (as in [10, proof of Proposition 2.2]). Via analytic coordinates, regularity then follows from Lemma 3.6 (applicable by Lemma 3.9).
(2): By an induction with (1), we may assume \(p+q+r=1\). By the product rule, together with (1) and Definition 2.5, it suffices to show that the derivative of \(t\in\mathbb{Q}(X)\), where \(t=0\) is a local equation for \(D_{j}\), is divisible by \(t\). It suffices to do this away from a codimension 2 locus. Divisibility then follows from Lemma 3.6, much in the same way as before.
Via Proposition 3.10, cover \(X(\mathbb{R})\) by finitely many open sets \(\Omega\in\{U(\mathbb{R}):U\in\mathcal{C}\}\). Take a smooth partition of unity \(1=\sum_{\Omega}w_{\Omega}\) of \(X(\mathbb{R})\) subordinate to \(\{\Omega\}\). For each \(\Omega\), choose a constant \(c=c(\Omega)\in\mathsf{C}(U)\), for any \(U\in\mathcal{C}\) with \(U(\mathbb{R})=\Omega\). Let
\[\mathcal{I}_{\Omega}=\mathcal{I}_{\Omega}(\boldsymbol{s},\lambda,\alpha):= \int_{G(\mathbb{R})}w_{\Omega}(g)H_{\infty}(\boldsymbol{s},g)^{-1}e(\alpha b) \lambda_{\infty}(\alpha a)\,dg,\quad\mathcal{J}_{\Omega}:=\mathcal{I}_{ \Omega}/e(\alpha c).\]
Additionally, we will need to study \(\alpha\ll 1\) and \(|\alpha|\gg 1\) separately; so take a smooth partition of unity \(1=w_{0}+w_{\infty}\) of \(\mathbb{R}\) subordinate to the open sets \((-2,2)\) and \(\mathbb{R}\setminus[-1,1]\), and let
\[\mathcal{I}_{\Omega,0}:=w_{0}(\alpha)\mathcal{I}_{\Omega},\quad\mathcal{I}_{ \Omega,\infty}:=w_{\infty}(\alpha)\mathcal{I}_{\Omega},\quad\mathcal{J}_{ \Omega,0}:=w_{0}(\alpha)\mathcal{J}_{\Omega},\quad\mathcal{J}_{\Omega,\infty}: =w_{\infty}(\alpha)\mathcal{J}_{\Omega}.\]
Then \(H_{\infty}^{\vee}(\boldsymbol{s},\lambda,0,\alpha)=\sum_{\Omega}\mathcal{I}_{\Omega}=\sum_{\Omega}e(\alpha c(\Omega))(\mathcal{J}_{\Omega,0}+\mathcal{J}_{\Omega,\infty}).\)
**Lemma 5.2**.: _For all \(A,B\in\mathbb{Z}_{\geq 0}\), \(\alpha\in\mathbb{R}^{\times}\), \(t\in\mathbb{R}\), and \(\boldsymbol{s}\) in the region (4.1), we have_
\[(\alpha\,\frac{\partial}{\partial\alpha})^{B}\mathcal{J}_{\Omega}(\boldsymbol{s} -it\mathbf{u},\lambda,\alpha)\ll_{A,B}(1+\|\boldsymbol{s}\|^{2+A+2B})/ \alpha^{2}(1+|t|^{A}),\quad\text{if $\delta$ is sufficiently small.}\]
_The same estimate holds if we replace \(\mathcal{J}_{\Omega}\) with \(\mathcal{J}_{\Omega,0}\) or \(\mathcal{J}_{\Omega,\infty}\). (Here \(\|\boldsymbol{s}\|:=\max_{j}|s_{j}|\).)_
Proof.: (The case \(B=0\) is essentially [17, Lemma 5.9]. But differentiability and uniform control for \(B\geq 1\) are not obvious; cf. the "junior arc" difficulties in [10]. Nonetheless, "nicer than expected" derivatives have appeared in other settings as well; see e.g. the integral derivative estimates in [11], in the circle method for _homogeneous_ equations.)
By (2.1) (cf. (2.21)), \(\mathcal{I}_{\Omega}(\boldsymbol{s}-it\mathbf{u},\lambda,\alpha)=\int_{G( \mathbb{R})}w_{\Omega}(g)H_{\infty}(\boldsymbol{s},g)^{-1}e(\alpha b)\lambda_{ \infty}(\alpha a)|a|_{\infty}^{-it}\,dg\). Integrating by parts twice in \(b\), and then \(A+B\) times in \(\log|a|\), gives (by Lemma 5.1)
\[\mathcal{I}_{\Omega}(\boldsymbol{s}-it\mathbf{u},\lambda,\alpha)=\alpha^{-2}(2+ it)^{-A-B}\int_{G(\mathbb{R})}w_{\Omega,1}(\boldsymbol{s},g)H_{\infty}( \boldsymbol{s},g)^{-1}e(\alpha b)\lambda_{\infty}(\alpha a)|a|_{\infty}^{-2-it }\,dg\]
where \(w_{\Omega,1}\) is a degree \(2+A+B\) polynomial function of \(\mathbf{s}\) and finitely many functions \(f\in C^{\infty}(X(\mathbb{R}))\); cf. [17, proof of Lemma 5.9]. Letting \((a^{\prime},b^{\prime})=(\alpha a,\alpha(b-c))\), we get
\[\mathcal{J}_{\Omega}(\mathbf{s}-it\mathbf{u},\lambda,\alpha)=\frac{|\alpha|_{ \infty}^{it}}{(2+it)^{A+B}}\int_{G(\mathbb{R})}\frac{w_{\Omega,1}(\mathbf{s},(a^{ \prime}/\alpha,c+b^{\prime}/\alpha))}{H_{\infty}(\mathbf{s},(a^{\prime}/\alpha,c+b ^{\prime}/\alpha))}\frac{e(b^{\prime})\lambda_{\infty}(a^{\prime})}{|a^{\prime }|_{\infty}^{2+it}}\,\frac{db^{\prime}}{|\alpha|_{\infty}}\,\frac{da^{\prime} }{a^{\prime}}.\]
Applying \(\alpha\,\frac{\partial}{\partial\alpha}\) repeatedly (\(B\) times) using Lemma 5.1, and then changing variables back from \((a^{\prime},b^{\prime})\) to \((a,b)\) (and applying Lemma 2.11(1)\(\Rightarrow\)(2)), we get the desired bound.
We now break (2.20) into pieces, based on the structure of Lemmas 4.3, 4.5, 4.6, 4.7, and 5.2. For convenience, let \(\mathfrak{m}\colon\{1,\ldots,k\}\to J_{2}^{*}\) and \(\mathfrak{n}\colon\{1,\ldots,l\}\to\bigcup_{q\in\mathbb{Q}}J_{1}^{q}\) be bijective functions (where \(k,l\in\mathbb{Z}_{\geq 0}\)), let \(c_{j}:=q\) if \(\mathfrak{n}(j)\in J_{1}^{q}\), and for any \(i\in\{1,\ldots,k\}\) or \(j\in\{1,\ldots,l\}\) let
\[\beta_{i}:=s_{\mathfrak{m}(i)}-\mathsf{d}_{\mathfrak{m}(i)}+1,\quad\gamma_{j}:=s_{\mathfrak{n}(j)}-\mathsf{d}_{\mathfrak{n}(j)}+1,\quad\mathfrak{u}_{i}:=|\mathsf{u}_{\mathfrak{m}(i)}|,\quad\mathfrak{v}_{j}:=\mathsf{u}_{\mathfrak{n}(j)},\]
and let \(\mathfrak{F}_{n}:=\prod_{p\mid n}\mathfrak{F}_{p}\) for all \(\mathfrak{F}\in\{\mathfrak{N},\mathfrak{D},\mathfrak{G}\}\). Then for \(\Re(\mathbf{s})\) large, the contribution to (2.20) from \(\alpha>0\) (the case \(\alpha<0\) being completely analogous) equals (in terms of \(\mathscr{S}\) from (2.3))
\[\sum_{f,\lambda}\sum_{\begin{subarray}{c}r,m_{0},\ldots,m_{k},n_{0},\ldots,n_{l}\geq 1:\\ \text{pairwise coprime},\\ 1/n_{1},\ldots,1/n_{l}\in\mathbb{Z}_{\overline{S}},\\ r\text{ square-free}\end{subarray}}\mathscr{S}\Bigg{(}\frac{(\mathfrak{N}_{m_{0}}\mathfrak{D}_{n_{0}}\mathfrak{G}_{r}f)(\boldsymbol{s},\lambda,\alpha)e(c\alpha)}{m_{1}^{\beta_{1}}\cdots m_{k}^{\beta_{k}}}\prod_{1\leq j\leq l}\frac{e(-c_{j}\alpha\bmod\mathbb{Z}_{n_{j}})}{n_{j}^{\gamma_{j}}}\Bigg{)}, \tag{5.1}\]
where \(f\) ranges over the set \(\bigcup_{\Omega}\{\mathcal{J}_{\Omega,0},\mathcal{J}_{\Omega,\infty}\}\), where \(\lambda\in\mathbf{M}\) with \(\lambda(-1)=1\), and where
\[\alpha=m_{0}m_{1}^{\mathfrak{u}_{1}}\cdots m_{k}^{\mathfrak{u}_{k}}/n_{0}n_{1}^{\mathfrak{v}_{1}}\cdots n_{l}^{\mathfrak{v}_{l}}.\]
(Cf. (1.5).) Let \(\mathsf{P}(\alpha)=m_{1}\cdots m_{k}n_{1}\cdots n_{l}\), for convenience.
Before proceeding, we decompose (5.1) into three pieces. By reordering if necessary, assume
\[\{c_{j}:1\leq j\leq l\}=\{c_{1},\ldots,c_{z}\},\]
where \(c_{1},\ldots,c_{z}\) are pairwise distinct and \(z\leq l\). For \(u\in\{1,\ldots,z\}\), let
\[n[u]=\prod_{1\leq j\leq l:\,c_{j}=c_{u}}n_{j}^{\mathfrak{v}_{j}}.\]
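(For instance, if \(l=3\) and \(c_{1}\neq c_{2}=c_{3}\), then \(z=2\), \(n[1]=n_{1}^{\mathfrak{v}_{1}}\), and \(n[2]=n_{2}^{\mathfrak{v}_{2}}n_{3}^{\mathfrak{v}_{3}}\).)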
Let \(\xi>0\) be small. Let \(\Xi=\xi^{1/2}\). Let \(\mathcal{P}_{3}=1-\mathcal{P}_{1}-\mathcal{P}_{2}\), where
\[\mathcal{P}_{1} =w_{0}\Bigg{(}\frac{rm_{0}n_{0}\alpha}{\mathsf{P}(\alpha)^{\xi}}\Bigg{)}w_{0}\Bigg{(}\frac{rm_{0}n_{0}\alpha^{-1}}{\mathsf{P}(\alpha)^{\xi}}\Bigg{)}\sum_{U\subseteq\{1,\ldots,z\}:\,|U|\leq 1}\,\prod_{u\in U}w_{\infty}\Bigg{(}\frac{n[u]}{\mathsf{P}(\alpha)^{\Xi}}\Bigg{)}\prod_{u\notin U}w_{0}\Bigg{(}\frac{n[u]}{\mathsf{P}(\alpha)^{\Xi}}\Bigg{)},\] \[\mathcal{P}_{2} =w_{0}\Bigg{(}\frac{rm_{0}n_{0}\alpha}{\mathsf{P}(\alpha)^{\xi}}\Bigg{)}w_{0}\Bigg{(}\frac{rm_{0}n_{0}\alpha^{-1}}{\mathsf{P}(\alpha)^{\xi}}\Bigg{)}\sum_{U\subseteq\{1,\ldots,z\}:\,|U|\geq 2}\,\prod_{u\in U}w_{\infty}\Bigg{(}\frac{n[u]}{\mathsf{P}(\alpha)^{\Xi}}\Bigg{)}\prod_{u\notin U}w_{0}\Bigg{(}\frac{n[u]}{\mathsf{P}(\alpha)^{\Xi}}\Bigg{)}.\]
Using smooth, as opposed to sharp, cutoffs helps in \(\mathcal{P}_{1}\) (see the proof of Lemma 5.10).
**Definition 5.3**.: The _piece of (5.1) defined by \(\mathcal{P}\)_ is the version of (5.1) where we replace the unweighted sum \(\sum_{r,m_{0},\ldots}\cdots\) with the weighted sum \(\sum_{r,m_{0},\ldots}\mathcal{P}(r,m_{0},\ldots,n_{l})\cdots\).
Since \(\operatorname{Supp}w_{0}\subseteq(-2,2)\) and \(\operatorname{Supp}w_{\infty}\subseteq\mathbb{R}\setminus[-1,1]\), the function \(\mathcal{P}_{1}\) is supported on
\[rm_{0}n_{0}\max(\alpha,\alpha^{-1})<2\mathsf{P}(\alpha)^{\xi},\quad\#\{u\in\{1, \ldots,z\}:n[u]\geq 2\mathsf{P}(\alpha)^{\Xi}\}\leq 1 \tag{5.2}\]
(i.e. every point in the support of \(\mathcal{P}_{1}\) satisfies (5.2)), and \(\mathcal{P}_{2}\) is supported on
\[rm_{0}n_{0}\max(\alpha,\alpha^{-1})<2\mathsf{P}(\alpha)^{\xi},\quad\#\{u\in\{1, \ldots,z\}:n[u]>\mathsf{P}(\alpha)^{\Xi}\}\geq 2. \tag{5.3}\]
Also, since \(w_{0}|_{[-1,1]}=1\) and \((w_{0}+w_{\infty})(\frac{n[u]}{\mathsf{P}(\alpha)^{\Xi}})=1\), we see that if \(rm_{0}n_{0}\max(\alpha,\alpha^{-1})\leq\mathsf{P}(\alpha)^{\xi}\), then \(\mathcal{P}_{1}+\mathcal{P}_{2}=1\). Therefore, \(\mathcal{P}_{3}\) is supported on
\[rm_{0}n_{0}\max(\alpha,\alpha^{-1})>\mathsf{P}(\alpha)^{\xi}. \tag{5.4}\]
Let \(\mathcal{H}_{\star}(\delta):=\{h(\boldsymbol{s}):h(\boldsymbol{z}+\mathbf{d}+ 2\mathbf{u})\in\mathcal{H}_{J}(-\delta,\infty)\}\) (where \(\mathcal{H}_{J}(-\delta,\infty)\) is as in SS2.3).
**Lemma 5.4**.: _Let \(\xi>0\) be small. Then the piece of (5.1) defined by \(\mathcal{P}_{3}\) extends to an element of \(\mathcal{H}_{\star}(\delta)\), provided \(\delta\) is sufficiently small (in terms of \(\xi\))._
Proof.: Assume \(\delta\leq\xi\). We would like to apply Lemma 2.10(4), but will need to treat small and large \(\alpha\) separately, using two different \(\mathbb{R}\operatorname{div}(a)\)-translates of the variable \(\boldsymbol{s}\). Let
\[F_{-1}(\boldsymbol{s})=\sum_{f,\lambda}\sum_{r,m_{0},\ldots}\mathbf{1}_{ \alpha<1}\cdot\mathcal{P}_{3}\cdot\frac{(\mathfrak{N}_{m_{0}}\mathfrak{D}_{n_ {0}}\mathfrak{G}_{r}f)(\boldsymbol{s},\lambda,\alpha)e(c\alpha)}{m_{1}^{\beta _{1}}\cdots m_{k}^{\beta_{k}}}\prod_{1\leq j\leq l}\frac{e(-c_{j}\alpha \bmod\mathbb{Z}_{n_{j}})}{n_{j}^{\gamma_{j}}},\]
and let \(F_{1}(\boldsymbol{s})\) be the corresponding sum with \(\mathbf{1}_{\alpha\geq 1}\) in place of \(\mathbf{1}_{\alpha<1}\). In \(F_{\varsigma}\) (where \(\varsigma\in\{\pm 1\}\)), the weight \(\mathcal{P}_{3}(r,m_{0},n_{0},\ldots)\) implies (via (5.4)) that \(rm_{0}n_{0}\alpha^{\varsigma}>\mathsf{P}(\alpha)^{\xi}\). Therefore, upon taking absolute values in \(F_{\varsigma}\), and plugging in Lemma 5.2 and the inequality \(1\leq(rm_{0}n_{0}\alpha^{\varsigma})^{\xi}/\mathsf{P}(\alpha)^{\xi^{2}}\), we get that \(F_{\varsigma}(\boldsymbol{s}-\varsigma\xi\mathbf{u}-it\mathbf{u})\) is \(\ll_{\xi}(1+\|\boldsymbol{s}\|^{4})/(1+t^{2})\) times
\[\sum_{r,m_{0},\ldots}\frac{|\mathfrak{G}_{r}\mathfrak{N}_{m_{0}}\mathfrak{D}_{ n_{0}}(\boldsymbol{s}-\varsigma\xi\mathbf{u}-it\mathbf{u},\lambda,\alpha)|\,(n_{0}/m_{ 0})^{2}(\alpha n_{0}/m_{0})^{-\varsigma\xi}}{(m_{1}\cdots m_{k}n_{1}\cdots n_{ l})^{1-\delta}}\cdot\frac{(rm_{0}n_{0}\alpha^{\varsigma})^{\xi}}{(m_{1}\cdots m _{k}n_{1}\cdots n_{l})^{\xi^{2}}},\]
for \(\boldsymbol{s}\) in (4.1) and \(t\in\mathbb{R}\). (We have arranged for the powers of \(\alpha^{\varsigma}\) to cancel out.)
When all summation variables but \(n_{0}\) are fixed, Lemma 4.3 implies
\[\sum_{n_{0}\geq 1}n_{0}^{2-\varsigma\xi}n_{0}^{\xi}|\mathfrak{D}_{n_{0}}| \leq\prod_{p}\left(1+O_{\xi}(p^{O(\xi)})\sum_{j\in J_{1}}\left(p^{-(1+1/u_{j} )}+p^{-1/2}p^{-1}+p^{-(u_{j}+1)/u_{j}}\right)\right)\ll_{\xi}1,\]
provided \(\boldsymbol{s}\) lies in (4.1) and \(\xi\) is sufficiently small. If we then similarly use Lemma 4.5 to sum over \(m_{0}\), and Lemma 4.6 to sum over \(r\), we get (suppressing coprimality restrictions)
\[\sum_{r,m_{0},n_{0}\geq 1}(n_{0}/m_{0})^{2-\varsigma\xi}(rm_{0}n_{0})^{\xi}| \mathfrak{G}_{r}\mathfrak{N}_{m_{0}}\mathfrak{D}_{n_{0}}(\boldsymbol{s}- \varsigma\xi\mathbf{u}-it\mathbf{u},\lambda,\alpha)|\ll_{\xi}1. \tag{5.5}\]
So if \(\delta<\xi^{2}\), the sum \(F_{\varsigma}(\boldsymbol{s}-\varsigma\xi\mathbf{u}-it\mathbf{u})\) (and its analog with absolute values everywhere) is
\[\ll_{\xi}\frac{1+\|\boldsymbol{s}\|^{O(1)}}{1+t^{2}}\sum_{m_{1},\ldots,m_{k},n _{1},\ldots,n_{l}\geq 1}\frac{(m_{1}\cdots m_{k}n_{1}\cdots n_{l})^{\delta-1}}{(m_{1} \cdots m_{k}n_{1}\cdots n_{l})^{\xi^{2}}}\ll_{\delta,\xi}\frac{1+\|\boldsymbol {s}\|^{O(1)}}{1+t^{2}}.\]
Applying Lemma 2.10(4) to \(\mathscr{S}(F_{\varsigma})\) now gives Lemma 5.4, since the piece of (5.1) defined by \(\mathcal{P}_{3}\) is precisely \(\sum_{\varsigma\in\{\pm 1\}}\mathscr{S}(F_{\varsigma})(\boldsymbol{s})=\sum_{ \varsigma\in\{\pm 1\}}\mathscr{S}(F_{\varsigma})(\boldsymbol{s}-\varsigma\xi \mathbf{u})\) (when \(\Re(\boldsymbol{s})\) is large).
For \(\mathcal{P}_{2}\), we use a Weyl-type inequality for monomials in several variables. There are many ways to establish such inequalities (see e.g. [13, 14, 15] and references therein). However, handling technical issues such as lopsidedness and coprimality requires care.
**Proposition 5.5**.: _Let \(M_{1},\ldots,M_{k},y,q\geq 1\) be integers with \(\gcd(y,q)=1\). Let \(P_{1},\ldots,P_{k}\) be arithmetic progressions contained in \([M_{1},2M_{1}),\ldots,[M_{k},2M_{k})\), respectively, all with modulus \(\leq R\). Let \(K(I)=\sum_{i\in I}(\mathfrak{u}_{i}-1)\). Then for any nonempty set \(I\subseteq\{1,\ldots,k\}\), we have_
\[\sum_{(m_{1},\ldots,m_{k})\in P_{1}\times\cdots\times P_{k}}e(ym_{1}^{\mathfrak{u}_{1}}\cdots m_{k}^{\mathfrak{u}_{k}}/q)\ll_{\epsilon}\frac{R^{O(1)}(M_{1}\cdots M_{k})^{1+\epsilon}}{\min(q\prod_{i\notin I}M_{i}^{-\mathfrak{u}_{i}},\min_{i\in I}(M_{i}),q^{-1}\prod_{i\in I}M_{i}^{\mathfrak{u}_{i}})^{1/2^{K(I)}}}.\]
Proof.: By fixing \((m_{i})_{i\notin I}\) (and noting that \((\prod_{i\notin I}m_{i}^{u_{i}})/q\) has denominator \(\gg q\prod_{i\notin I}M_{i}^{-u_{i}}\)), we may reduce to the case where \(M_{i}=1\) and \(P_{i}=\{1\}\) for all \(i\notin I\). We then use Weyl differencing \(K=K(I)\) times (to "linearize" the monomial \(\prod_{i\in I}m_{i}^{u_{i}}\)) to get
\[\left|\mathbb{E}_{\boldsymbol{m}\in\prod_{i\in I}P_{i}}\,e(y(\prod_{i\in I}m_{i}^{u_{i}})/q)\right|^{2^{K}}\ll\mathbb{E}_{\boldsymbol{h}\in\prod_{i\in I}(P_{i}-P_{i})^{u_{i}-1}}\left|\mathbb{E}_{\boldsymbol{m}\in\prod_{i\in I}P_{i}}\,e(yM(\boldsymbol{h},\boldsymbol{m})/q)\prod_{i\in I}\boldsymbol{1}_{m_{i}\in Q_{i}(\boldsymbol{h})}\right|,\]
where \(\mathbb{E}_{v\in A}\) denotes an average over \(v\in A\), where \(Q_{i}(\boldsymbol{h})\) is a real interval defined in terms of \(P_{i}\), \(\boldsymbol{h}\), and where \(M(\boldsymbol{h},\boldsymbol{m}):=\prod_{i\in I}(u_{i}!\,m_{i}\prod_{1\leq j\leq u_{i}-1}h_{i,j})\). Fix an \(i_{\star}\in I\), sum over \(m_{i_{\star}}\in P_{i_{\star}}\cap Q_{i_{\star}}(\boldsymbol{h})\), and use the divisor bound as in [10, proof of Lemma 2.2], to get
\[\left|\mathbb{E}_{\boldsymbol{m}\in\prod_{i\in I}P_{i}}\,e(y(\prod_{i\in I}m_ {i}^{u_{i}})/q)\right|^{2^{K}}\ll_{\epsilon}\sum_{i\in I}\frac{1}{|P_{i}|}+ \frac{(M^{\prime})^{\epsilon}}{\prod_{i\in I}|P_{i}|^{u_{i}}}\sum_{1\leq h\leq M ^{\prime}}\min(|P_{i_{\star}}|,\|yhr/q\|_{\mathbb{R}/\mathbb{Z}}^{-1})\]
for some \(r\in\mathbb{Z}\cap[1,R\prod_{i\in I}\mathfrak{u}_{i}!]\), where \(M^{\prime}:=M_{i_{\star}}^{-1}\prod_{i\in I}(2M_{i})^{u_{i}}\) and \(\|\vartheta\|_{\mathbb{R}/\mathbb{Z}}:=\min_{n\in\mathbb{Z}}|\vartheta-n|\). By [20, Lemma 2.2], the sum over \(h\) is \(\ll_{\epsilon}M^{\prime}|P_{i_{\star}}|(r/q+1/|P_{i_{\star}}|+q/M^{\prime}|P_{ i_{\star}}|)(M^{\prime}q)^{\epsilon}\). Multiplying by \(\prod_{i\in I}|P_{i}|^{2^{K}}\), we now find that \(\left|\sum_{\boldsymbol{m}\in P_{1}\times\cdots\times P_{k}}e(ym_{1}^{u_{1}} \cdots m_{k}^{u_{k}}/q)\right|^{2^{K}}\) is
\[\ll_{\epsilon}\sum_{i\in I}\frac{(M_{1}\cdots M_{k})^{2^{K}}}{M_{i}}+(M^{\prime}q)^{\epsilon}(M^{\prime}M_{i_{\star}}R/q+M^{\prime}+q)\prod_{i\in I}M_{i}^{2^{K}-u_{i}},\quad\text{since }2^{K}\geq u_{i}.\]
This implies the result up to \(q^{\epsilon}\), which suffices since the result is trivial if \(\prod_{i\in I}M_{i}^{u_{i}}\leq q\).
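For orientation, consider the simplest nontrivial instance of the differencing step above: \(I=\{1\}\) and \(u_{1}=2\), so \(K=1\). Writing \(P=P_{1}\) and dropping the unimodular factor \(e(yh^{2}/q)\) before taking absolute values, the differencing inequality specializes to

\[\Big|\mathbb{E}_{m\in P}\,e(ym^{2}/q)\Big|^{2}\ll\mathbb{E}_{h\in P-P}\,\Big|\mathbb{E}_{m\in P}\,e(2yhm/q)\,\mathbf{1}_{m\in P-h}\Big|,\]

i.e. \(M(h,m)=2!\,hm\) and \(Q(h)=P-h\).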
**Corollary 5.6**.: _In the setting of Proposition 5.5, we have (for some constant \(\eta>0\))_
\[\sum_{(m_{1},\ldots,m_{k})\in P_{1}\times\cdots\times P_{k}}e(ym_{1}^{u_{1}} \cdots m_{k}^{u_{k}}/q)\ll_{\epsilon}\frac{R^{O(1)}(M_{1}\cdots M_{k})^{1+ \epsilon}}{\min(q,M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}/q)^{\eta}}.\]
Proof.: If \(\prod_{1\leq i\leq k}M_{i}^{u_{i}}\leq q\), the desired bound is trivial. So suppose \(\prod_{1\leq i\leq k}M_{i}^{u_{i}}\geq q\). Let \(\mathfrak{Y}=\min(q,M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}/q)\geq 1\); then \(M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}\geq\mathfrak{Y}^{2}\). Let \(I=\{1\leq i\leq k:M_{i}\geq\mathfrak{Y}^{\varrho}\}\) for some small \(\varrho>0\). Then \(I\neq\emptyset\), and \(\min_{i\in I}(M_{i})\geq\mathfrak{Y}^{\varrho}\). Also, \(\min(q\prod_{i\notin I}M_{i}^{-u_{i}},q^{-1}\prod_{i\in I}M_{i}^{u_{i}})\geq \mathfrak{Y}/\prod_{i\notin I}M_{i}^{u_{i}}\). Consequently, by Proposition 5.5, the exponent \(\eta=\varrho/2^{K(I)}\) is admissible.
**Corollary 5.7**.: _In the setting of Proposition 5.5, we have (for some constant \(\eta>0\))_
\[\sum_{\begin{subarray}{c}(m_{1},\ldots,m_{k})\in P_{1}\times\cdots\times P_{k} :\\ \gcd(m_{i},m_{j})=\gcd(m_{j},Q)=1\end{subarray}}e(ym_{1}^{u_{1}}\cdots m_{k}^{u_ {k}}/q)\ll_{\epsilon}\frac{Q^{\epsilon}R^{O(1)}(M_{1}\cdots M_{k})^{1+ \epsilon}}{\min(q,M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}/q)^{\eta}}\quad\text{for all $Q\in \mathbb{Z}_{\geq 1}$.}\]
Proof.: We may assume \(\prod_{1\leq i\leq k}M_{i}^{u_{i}}\geq q\). If \(h_{ij},g_{j}\in\mathbb{Z}\cap[1,B]\), then Corollary 5.6 implies
\[\sum_{(m_{1},\ldots,m_{k})\in P_{1}\times\cdots\times P_{k}}\prod_{1\leq i<j\leq k}\boldsymbol{1}_{h_{ij}\mid m_{i},m_{j}}\prod_{1\leq j\leq k}\boldsymbol{1}_{g_{j}\mid m_{j},Q}\;e(ym_{1}^{u_{1}}\cdots m_{k}^{u_{k}}/q)\ll_{\epsilon}\frac{(BR)^{O(1)}(M_{1}\cdots M_{k})^{1+\epsilon}}{\min(q,M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}/q)^{\eta}}.\]
Multiplying by \(\prod_{1\leq i<j\leq k}\mu(h_{ij})\) and \(\prod_{1\leq j\leq k}\mu(g_{j})\), and summing over \(h_{ij},g_{j}\geq 1\), we get
\[\sum_{\begin{subarray}{c}(m_{1},\ldots,m_{k})\in P_{1}\times\cdots\times P_{k}: \\ \gcd(m_{i},m_{j})=\gcd(m_{j},Q)=1\end{subarray}}e(ym_{1}^{u_{1}}\cdots m_{k}^{u_ {k}}/q)\ll_{\epsilon}\frac{(BR)^{O(1)}(M_{1}\cdots M_{k})^{1+\epsilon}}{\min(q,M_{1 }^{u_{1}}\cdots M_{k}^{u_{k}}/q)^{\eta}}+\sum_{\begin{subarray}{c}m_{1},\ldots,m_ {k}\geq 1:\\ M_{j}\leq m_{j}<2M_{j}\end{subarray}}T_{B}(\boldsymbol{m}),\]
where \(T_{B}:=(\sum_{1\leq i<j\leq k}\sum_{h>B}\boldsymbol{1}_{h|m_{i},m_{j}}+\sum_{1\leq j\leq k}\sum_{g>B:\,g|Q}\boldsymbol{1}_{g|m_{j}})\cdot(m_{1}\cdots m_{k})^{\epsilon}\). Here \(\sum T_{B}\ll_{\epsilon}(M_{1}\cdots M_{k})^{1+\epsilon}(1+Q^{\epsilon})/B\). Now take \(B\) to be a small power of \(\min(q,M_{1}^{u_{1}}\cdots M_{k}^{u_{k}}/q)\) to conclude.
Using reciprocity \((\psi({\mathbb{Q}})=1)\) and partial summation over \((m_{i})_{1\leq i\leq k}\) (involving up to \(k\) derivatives of \(f\in\bigcup_{\Omega}\{{\mathcal{J}}_{\Omega,0},{\mathcal{J}}_{\Omega,\infty}\}\) with respect to \(\alpha\)), we can now handle \({\mathcal{P}}_{2}\).
**Lemma 5.8**.: _Let \(\xi>0\) be small. Then the piece of (5.1) defined by \({\mathcal{P}}_{2}\) extends to an element of \({\mathcal{H}}_{\star}(\delta)\), provided \(\delta\) is sufficiently small (in terms of \(\xi\))._
Proof.: If \(kl=0\), then \(m_{0}n_{0}\max(\alpha,\alpha^{-1})\geq{\mathsf{P}}(\alpha)\), so the piece of (5.1) satisfying (5.3) is a finite sum (if \(\xi\) is sufficiently small), and thus lies in \({\mathcal{H}}_{\star}(\delta)\). So we may assume \(k,l\geq 1\).
If \(z\leq 1\) then (5.3) is impossible, so assume \(z\geq 2\). Fix \(\Omega\), \(f\), \(\lambda\). By a partition of unity on \([n[1]:\cdots:n[z]]\in{\mathbb{R}}^{z}_{>0}/{\mathbb{R}}_{>0}\), we may assume \(n[1]\gg n[2]\gg\cdots\gg n[z]\). Let \(C=c-c_{2}\) and \(C_{j}=c_{j}-c_{2}\). For any \(r,m_{0},\dots\) satisfying (5.3), we have \(n[1]\gg n[2]\gg{\mathsf{P}}(\alpha)^{\Xi}\).
On the range \(n[1]\gg n[2]\gg\cdots\gg n[z]\), the piece of (5.1) we are interested in is (for some weight \(\nu=\nu(n[1],\dots,n[z])\in C^{\infty}({\mathbb{R}}^{z}_{>0}/{\mathbb{R}}_{>0})\) supported on \(n[1]\gg n[2]\gg\cdots\gg n[z]\))
\[\sum_{r,m_{0},\dots}\nu\cdot{\mathcal{P}}_{2}\cdot\mathscr{S}\Bigg{(}\frac{({ \mathfrak{N}}_{m_{0}}{\mathfrak{D}}_{n_{0}}{\mathfrak{G}}_{r}f)({\boldsymbol{ s}},\lambda,\alpha)e(C\alpha)e(c_{2}\alpha\bmod{\mathbb{Z}}_{k_{2}n_{0}})}{m_{1}^ {\beta_{1}}\cdots m_{k}^{\beta_{k}}}\prod_{1\leq j\leq l}\frac{e(-C_{j}\alpha \bmod{\mathbb{Z}}_{n_{j}})}{n_{j}^{\gamma_{j}}}\Bigg{)};\]
here we have rewritten the exponentials in (5.1) using \(\psi(c_{2}\alpha)=1\) (reciprocity) in the form \(e(c_{2}\alpha)=e(c_{2}\alpha\bmod{\mathbb{Z}}_{k_{2}n_{0}})\prod_{1\leq j\leq l }e(c_{2}\alpha\bmod{\mathbb{Z}}_{n_{j}})\), where \(k_{2}\geq 1\) is the denominator of \(c_{2}\). (Note that \(k_{2}n_{0},n_{1},\dots,n_{l}\) are pairwise coprime, by Definition 4.2.)
Let \({\boldsymbol{s}}\) lie in (4.1). The strategy is now to fix \(r\), \(m_{0}\), \(n_{0}\), and to obtain _nontrivial cancellation_ over the variables \((m_{i})_{1\leq i\leq k}\), \(q_{1}:=n[1]n[3]\cdots n[z]\), \(q_{2}:=n[2]\). We have \(\alpha=m_{0}m_{1}^{u_{1}}\cdots m_{k}^{u_{k}}/n_{0}q_{1}q_{2}\); write \(n=q_{1}q_{2}\). Since \(C_{2}=0\) and \(C_{1}C_{3}\cdots C_{z}\neq 0\), the "fractions"
\[e(c_{2}m_{0}/n_{0}n\bmod{\mathbb{Z}}_{k_{2}n_{0}})\prod_{1\leq j\leq l}e(-C_ {j}m_{0}/n_{0}n\bmod{\mathbb{Z}}_{n_{j}})\in e({\mathbb{Q}})\]
have "denominators" \(\asymp n_{0}n[1]n[3]\cdots n[z]=n_{0}q_{1}\). Therefore, by Lemmas 4.7 and 5.2, and Corollary 5.7 with \(Q=rm_{0}n_{0}q_{1}q_{2}\), the penultimate display is7 at most \(1+\|{\boldsymbol{s}}\|^{O(k)}\) times
Footnote 7: and, more precisely, may be rewritten as a sum of holomorphic functions whose sum of absolute values is \(\sum_{m\in{\mathbb{Z}}:\gcd(m,Q)=1}w(m/M)({\boldsymbol{1}}_{m\equiv t\bmod q }-{\mathbb{E}}_{n\in{\mathbb{Z}}_{q}:\gcd(n,Q)=1}[{\boldsymbol{1}}_{n\equiv t \bmod q}])\ll_{\epsilon}(QM)^{\epsilon}AYq\).
(5.6) \[\sum_{r,m_{0},\ldots:\,(5.3)}\frac{(m_{0}n_{0}r)^{k}|\mathfrak{N}_{m_{0}}\mathfrak{D}_{n_{0}}\mathfrak{G}_{r}|\,(n_{0}/m_{0})^{2}(1+|\alpha|)^{k}}{(m_{1}\cdots m_{k}n_{1}\cdots n_{l})^{1-\delta}}\cdot\frac{(m_{0}n_{0}r)^{O(1)}}{\mathsf{P}(\alpha)^{\eta^{\prime}}},\]

where \(\eta^{\prime}=\eta^{\prime}_{X,\xi}>0\) comes from the saving in Corollary 5.7, using that the denominators above are \(\asymp n_{0}q_{1}\geq n[1]\gg\mathsf{P}(\alpha)^{\Xi}\). Summing over \(r\), \(m_{0}\), \(n_{0}\) as for (5.5), and over the remaining variables as in the proof of Lemma 5.4, the factor \(\mathsf{P}(\alpha)^{-\eta^{\prime}}\) shows that this piece of (5.1) lies in \(\mathcal{H}_{\star}(\delta)\) once \(\delta\) is sufficiently small, proving Lemma 5.8.

For \(\mathcal{P}_{1}\) we will also need the following equidistribution estimate (cf. footnote 7).

**Proposition 5.9**.: _Let \(\epsilon>0\). Let \(M,A,Y\geq 1\), let \(q,Q\in\mathbb{Z}_{\geq 1}\) with \(\gcd(q,Q)=1\), and let \(t\in\mathbb{Z}\). Let \(w\in C_{c}^{\infty}(\mathbb{R}_{>0})\) be supported on \([1,2]\), with \(w^{(j)}\ll_{j}A^{j}Y\) for all \(j\geq 0\). Then_

\[\sum_{m\in\mathbb{Z}:\,\gcd(m,Q)=1}w(m/M)\big(\mathbf{1}_{m\equiv t\bmod q}-\mathbb{E}_{n\in\mathbb{Z}_{q}:\,\gcd(n,Q)=1}[\mathbf{1}_{n\equiv t\bmod q}]\big)\ll_{\epsilon}(QM)^{\epsilon}AYq.\]
Proof.: By replacing \(w\) with \(w/Y\), assume \(Y=1\). If \(g\in\mathbb{Z}\cap[1,M^{1-\epsilon}/Aq]\), then Poisson summation over \(m\in\mathbb{Z}\) in residue classes modulo \(gq\) gives (since \(M/gqA\geq M^{\epsilon}\))
\[\sum_{m\in\mathbb{Z}}w(m/M)(\mathbf{1}_{g|m}\mathbf{1}_{m\equiv t\bmod q}- \mathbb{E}_{n\in\mathbb{Z}_{gq}}[\mathbf{1}_{g|n}\mathbf{1}_{n\equiv t\bmod q }])\ll_{\epsilon,B}\sum_{1\leq r\leq gq}M^{-1-B}\leq M^{-B}.\]
Multiplying by \(\mu(g)\), and summing over \(g\mid Q\), we get (cf. the sieving for Corollary 5.7)
\[\sum_{m\in\mathbb{Z}}w(m/M)(\mathbf{1}_{\gcd(m,Q)=1}\mathbf{1}_{m\equiv t\bmod q }-\mathbb{E}_{n\in\mathbb{Z}_{Qq}}[\mathbf{1}_{\gcd(n,Q)=1}\mathbf{1}_{n\equiv t \bmod q}])\ll_{\epsilon,B}M^{-B}+\sum_{m\ll M}T(m),\]
where \(T(m):=\sum_{g>M^{1-\epsilon}/Aq:\,g|Q}(\mathbf{1}_{g|m}+1/g)\). Here \(\sum_{m\ll M}T(m)\ll_{\epsilon}Q^{\epsilon}(1+M/(M^{1-\epsilon}/Aq))\).
On the other hand, the \(q=1\) case of the previous paragraph implies
\[\sum_{m\in\mathbb{Z}}w(m/M)(\mathbf{1}_{\gcd(m,Q)=1}-\mathbb{E}_{n\in \mathbb{Z}_{Q}}[\mathbf{1}_{\gcd(n,Q)=1}])\ll_{\epsilon}Q^{\epsilon}(1+M^{ \epsilon}A)\leq 2(QM)^{\epsilon}A.\]
Multiplying by \(\mathbb{E}_{n\in\mathbb{Z}_{q}:\,\gcd(n,Q)=1}[\mathbf{1}_{n\equiv t\bmod q}]\), then subtracting the penultimate display, gives the result, since \(\mathbb{E}_{n\in\mathbb{Z}_{Qq}}[\mathbf{1}_{\gcd(n,Q)=1}\mathbf{1}_{n\equiv t \bmod q}]=\mathbb{E}_{n\in\mathbb{Z}_{Q}}[\mathbf{1}_{\gcd(n,Q)=1}]\cdot \mathbb{E}_{n\in\mathbb{Z}_{q}:\,\gcd(n,Q)=1}[\mathbf{1}_{n\equiv t\bmod q}]\).
**Lemma 5.10**.: _Let \(\xi>0\) be small. Let \(\alpha_{u}:=m_{1}^{u_{1}}\cdots m_{k}^{u_{k}}/\prod_{j\geq 1:\,c_{j}=c_{u}}n_{j}^ {\mathfrak{p}_{j}}\). If \(\delta\) is sufficiently small in terms of \(\xi\), then the piece of (5.1) defined by \(\mathcal{P}_{1}\) equals (for \(\Re(\boldsymbol{s})\) large)_
\[h_{0}(\boldsymbol{s})+\sum_{1\leq u\leq z}\sum_{m_{1},\ldots,m_{k}\geq 1}\sum_{n_{ j}\geq 1:\,j\geq 1,\,c_{j}=c_{u}}\,\frac{h_{u}(\boldsymbol{s},m_{1},\ldots,m_{k},(n_{ j})_{j\geq 1:\,c_{j}=c_{u}})}{\alpha_{u}^{2}(\alpha_{u}^{\xi}+\alpha_{u}^{-\xi} )\cdot m_{1}^{\beta_{1}}\cdots m_{k}^{\beta_{k}}\prod_{j\geq 1:\,c_{j}=c_{u}}n_{j}^ {\gamma_{j}}}\]
_for some functions \(h_{0},\ldots,h_{z}\) such that uniformly over \(m_{1},\ldots,m_{k},n_{1},\ldots,n_{l}\geq 1\), we have \(h_{0},h_{u}\in\mathcal{H}_{\star}(\delta)\) and \(h_{0},h_{u}\ll_{K}(1+\|\boldsymbol{s}\|)^{O(1)}\) in vertical strips \(\Re(\boldsymbol{s})\in K\) (for compact \(K\))._
Proof.: As in the proof of Lemma 5.8, we may assume \(k,l\geq 1\). Fix \(\Omega\), \(f\), \(\lambda\). Assume \(n[1]\ll n[2]\ll\cdots\ll n[z]\) and \(m_{1}\ll m_{2}\ll\cdots\ll m_{k}\), by introducing a suitable factor \(\nu=\nu(n[1],\ldots,n[z],m_{1},\ldots,m_{k})\in C^{\infty}(\mathbb{R}_{>0}^{z+k}/\mathbb{R}_{>0})\). Let \(C=c-c_{z}\) and \(C_{j}=c_{j}-c_{z}\). For any \(r,m_{0},\ldots\) satisfying (5.2), we have \(n[u]\ll\mathsf{P}(\alpha)^{\Xi}\) for all \(u<z\).
Let \(k_{z}\in\mathbb{Z}_{\geq 1}\) be the denominator of \(c_{z}\). The piece of (5.1) of interest is \(\mathscr{S}(F)(\boldsymbol{s})\), where
\[F(\boldsymbol{s})=\sum_{r,m_{0},\ldots}\nu\cdot\mathcal{P}_{1}\cdot\frac{( \mathfrak{N}_{m_{0}}\mathfrak{D}_{n_{0}}\mathfrak{G}_{r}f)(\boldsymbol{s}, \lambda,\alpha)e(C\alpha)e(c_{z}\alpha\bmod\mathbb{Z}_{k_{z}n_{0}})}{m_{1}^{ \beta_{1}}\cdots m_{k}^{\beta_{k}}}\prod_{1\leq j\leq l}\frac{e(-C_{j}\alpha \bmod\mathbb{Z}_{n_{j}})}{n_{j}^{\gamma_{j}}}.\]
Write \(\boldsymbol{m}^{\prime}=(m_{0},\ldots,m_{k-1})\) and \(\boldsymbol{n}=(n_{0},\ldots,n_{l})\). Let
\[f_{1}(\boldsymbol{s},\lambda,r,\boldsymbol{m}^{\prime},\boldsymbol{n}):= \mathbb{E}[e(c_{z}\alpha\bmod\mathbb{Z}_{k_{z}n_{0}})\mathfrak{N}_{m_{0}} \mathfrak{D}_{n_{0}}\mathfrak{G}_{r}],\;f_{2,u}(r,\boldsymbol{m}^{\prime}, \boldsymbol{n}):=\mathbb{E}[e(-C_{u}\alpha\bmod\mathbb{Z}_{n[u]})],\]
where for a locally constant function \(\ast\) of \(\alpha\) on \(\mathbb{Q}_{q}\) we let \(\mathbb{E}[\ast]\) denote the average of \(\ast\) over the set \(\{m_{k}\in\mathbb{Z}_{q}:\gcd(m_{k},rm_{0}\cdots m_{k-1}n_{0}\cdots n_{l})=1\}\). Let
\[F_{0}(\boldsymbol{s})=\sum_{r,m_{0},\ldots}\nu\cdot\mathcal{P}_{1}\cdot\frac{f( \boldsymbol{s},\lambda,\alpha)e(C\alpha)f_{1}(\boldsymbol{s},\lambda,r, \boldsymbol{m}^{\prime},\boldsymbol{n})}{m_{1}^{\beta_{1}}\cdots m_{k}^{\beta_ {k}}}\frac{\prod_{u<z}f_{2,u}(r,\boldsymbol{m}^{\prime},\boldsymbol{n})}{n_{1}^ {\gamma_{1}}\cdots n_{l}^{\gamma_{l}}}.\]
Let \(\boldsymbol{z}:=\boldsymbol{s}-\mathbf{d}-2\mathbf{u}\), and assume \(\boldsymbol{s}\) lies in (4.1) with \(\delta=\xi/2\).
The weights \(\nu\), \(\mathcal{P}_{1}\) in \(F(\boldsymbol{s})\) force \(m_{k}\geq(\mathsf{P}(\alpha)\alpha n_{0}/m_{0})^{1/L}\gg\mathsf{P}(\alpha)^{(1-3 \xi)/L}\) for some constant \(L=L_{X}\geq 1\). Introducing in \(F(\boldsymbol{s})\) a smooth dyadic partition of unity on the variable \(m_{k}\), and then using Proposition 5.9 in residue classes \(\mathcal{R}\) modulo \(O(rm_{0}k_{z}n_{0}\prod_{u<z}n[u])\) on which \(e(c_{z}\alpha\bmod\mathbb{Z}_{k_{z}n_{0}})\), \(\mathfrak{N}_{m_{0}}\), \(\mathfrak{G}_{r}\), \(e(-C_{u}\alpha\bmod\mathbb{Z}_{n[u]})\) are constant (by Lemma 4.7), we get
\[(F-F_{0})(\boldsymbol{s})=(F-F_{0})(\boldsymbol{z}+\mathbf{d}+2\mathbf{u}) \in\mathcal{H}_{\dagger,J}(-\delta,\infty), \tag{5.7}\]
since \((m_{k}\,\frac{\partial}{\partial m_{k}})^{B}(\nu\cdot\mathcal{P}_{1}\cdot f( \boldsymbol{s}-it\mathbf{u},\lambda,\alpha)e(C\alpha)/m_{k}^{\beta_{k}+it \mathbf{u}_{k}})\ll_{B}(1+\|\boldsymbol{s}\|)^{O(1+B)}\mathsf{P}(\alpha)^{\xi B }/\alpha^{2}(1+t^{2})\) (by Lemma 5.2), and the modulus of \(\mathcal{R}\) is \(\ll\mathsf{P}(\alpha)^{3\xi+2\Xi}\).8 Therefore, \(\mathscr{S}(F-F_{0})(\boldsymbol{s})\in\mathcal{H}_{\star}(\delta)\) by Lemma 2.10(4). It remains to analyze \(\mathscr{S}(F_{0})\).
Footnote 8: Cancellation over \(m_{k}\) (a dominant variable) in \(F-F_{0}\) introduces a factor of say \(\ll\mathsf{P}(\alpha)^{-1/2L}\), which easily leads to the desired convergence bound (5.7) (in the same way as (5.6) suffices for Lemma 5.8).
Crucially, \(f_{2,u}\ll_{\epsilon}n[u]^{\epsilon-1/2}\) for each \(u<z\) (since \(C_{u}\neq 0\)). On the other hand, we will satisfactorily bound \(f_{1}\) by applying Lemmas 4.3, 4.5, and 4.6 pointwise.
Let \(\varsigma=-1\) if \(f=\mathcal{J}_{\Omega,0}\), and \(\varsigma=1\) if \(f=\mathcal{J}_{\Omega,\infty}\). For \(\Re(\boldsymbol{s})\) large, we have \(\mathscr{S}(F_{0})(\boldsymbol{s})=\mathscr{S}(F_{0})(\boldsymbol{s}-\varsigma\xi\mathbf{u})\) (cf. the proof of Lemma 5.4). But for \(\boldsymbol{s}\) in (4.1), the Dirichlet coefficient of \(m_{1}^{\beta_{1}}\cdots m_{k}^{\beta_{k}}\prod_{j\geq 1:\,c_{j}=c_{z}}n_{j}^{\gamma_{j}}\) in \(F_{0}(\boldsymbol{s}-\varsigma\xi\mathbf{u}-it\mathbf{u})\) is of the form
\[\sum_{r,m_{0},n_{0}\geq 1}\sum_{n_{j}\geq 1:\,j\geq 1,\,c_{j}\neq c_{z}}\frac{O(1+\|\boldsymbol{s}\|^{O(1)})\mathbf{1}_{\alpha^{\varsigma}\gg 1}}{\alpha^{2}(1+t^{2})}\frac{f_{1}(\boldsymbol{s}-\varsigma\xi\mathbf{u}-it\mathbf{u},\lambda,r,\boldsymbol{m}^{\prime},\boldsymbol{n})}{(\alpha n_{0}/m_{0})^{\varsigma\xi}}\frac{O(\prod_{u<z}n[u]^{\xi-1/2})}{\prod_{j\geq 1:\,c_{j}\neq c_{z}}n_{j}^{\gamma_{j}}}.\]
Since \(\alpha^{\varsigma}\gg 1\) implies \(rm_{0}n_{0}\prod_{j\geq 1:\,c_{j}\neq c_{z}}n_{j}^{\mathfrak{v}_{j}}\geq \max(1,(\alpha/\alpha_{z})^{\varsigma})\gg 1+\alpha_{z}^{-\varsigma}\), this display is (summing over \(r\), \(m_{0}\), \(n_{0}\) as in the proof of (5.5), but with \((rm_{0}n_{0})^{\Xi}\) in place of \((rm_{0}n_{0})^{\xi}\))
\[\ll\frac{1+\|\boldsymbol{s}\|^{O(1)}}{\alpha_{z}^{2}(1+t^{2})\alpha_{z}^{ \varsigma\xi}}\cdot\frac{1}{(1+\alpha_{z}^{-\varsigma})^{\Xi/A}}\ll\frac{1+\| \boldsymbol{s}\|^{O(1)}}{\alpha_{z}^{2}(1+t^{2})\alpha_{z}^{\varsigma\xi}} \cdot\frac{1}{1+\alpha_{z}^{-2\varsigma\xi}}=\frac{1+\|\boldsymbol{s}\|^{O(1 )}}{\alpha_{z}^{2}(1+t^{2})}\cdot\frac{1}{\alpha_{z}^{\xi}+\alpha_{z}^{-\xi}}\]
for some constant \(A=A_{X}\geq 1\), provided \(\xi\) is sufficiently small in terms of \(X\).
Integrating over \(t\in\mathbb{R}\) (and summing over \(f\), \(\lambda\), \(\nu\)) gives the lemma.
## 6. Final reductions
In this section, assume \(X\) is strictly split but not that \(D\) has strict normal crossings. We proceed using the blowup strategy of [1, §6] (see also [12, paragraph after Theorem 5.1]). Let \(\pi\colon\widetilde{X}\to X\) be an equivariant morphism of the form specified in Definition 2.1. Then in particular, \(\widetilde{X}\) is split and its boundary \(\widetilde{D}:=\widetilde{X}\setminus G\) has strict normal crossings.
Index the irreducible components of \(\widetilde{D}\) by \(\widetilde{J}\supseteq J\) so that \(\widetilde{D}_{j}\) is the strict transform of \(D_{j}\) for \(j\in J\), and \(\widetilde{D}_{j}\) is an exceptional divisor (\(\cong\mathbb{P}^{1}\), since \(\widetilde{X}\) is split) for \(j\in\widetilde{J}\setminus J\). Then
\[\operatorname{Pic}(\widetilde{X})=\mathbb{Z}^{\widetilde{J}\setminus J}\oplus \pi^{*}\operatorname{Pic}(X),\quad\operatorname{Pic}^{G}(\widetilde{X})= \mathbb{Z}^{\widetilde{J}}=\mathbb{Z}^{\widetilde{J}\setminus J}\oplus\pi^{*} \operatorname{Pic}^{G}(X),\quad\operatorname{div}(a|_{\widetilde{X}})=\pi^{*} \operatorname{div}(a).\]
Thus we may choose \(\widetilde{H}\) (satisfying Proposition 2.7 for \(\widetilde{X}\)) so that \(\widetilde{H}(\pi^{*}\boldsymbol{s},g)=H(\boldsymbol{s},g)\).
For numerical comparison of \(X\), \(\widetilde{X}\), the following standard lemma is essential:
**Lemma 6.1**.: _In \(\operatorname{Pic}^{G}(\widetilde{X})\), we have \(-\pi^{*}\operatorname{div}(\omega)\in-\operatorname{div}(\pi^{*}\omega)+ \mathbb{Z}_{\geq 1}^{\widetilde{J}\setminus J}\)._
We can now study \(\mathsf{Z}(sK_{X}^{-1},1_{G})=\widetilde{\mathsf{Z}}(s\pi^{*}K_{X}^{-1},1_{G})\). Let \(w\in C_{c}^{\infty}(\mathbb{R})\). Then the Mellin transform \(w^{\vee}(s):=\int_{0}^{\infty}w(x)x^{s-1}\,dx\) is holomorphic on \(\Re(s)>0\), with rapid decay (i.e. \(w^{\vee}(s)\ll_{L}(1+|s|)^{-L}\) for all \(L\geq 0\)) in vertical strips. Let \(\sigma,B\geq 2\) be large. By Mellin inversion,
\[\sum_{x\in G(\mathbb{Q})}w\bigg{(}\frac{H(K_{X}^{-1},x)}{B}\bigg{)}=\frac{1}{2 \pi}\int_{\Re(s)=\sigma}w^{\vee}(s)B^{s}\widetilde{\mathsf{Z}}(s\pi^{*}K_{X}^{-1},1_{G})\,dt,\quad\text{where }t=\Im(s). \tag{6.1}\]
Contour shifting, via Proposition 2.16 (for \(\widetilde{X}\), with \(\boldsymbol{\kappa}=\pi^{*}\mathbf{d}=-\pi^{*}\operatorname{div}(\omega)\)), gives
\[\frac{1}{2\pi}\int_{\Re(s)=\sigma}w^{\vee}(s)B^{s}\widetilde{\mathsf{Z}}_{0}(s \pi^{*}\mathbf{d},1_{G})\,dt=\left(\tilde{c}_{X,H}w^{\vee}(1)+O\bigg{(}\frac{1}{ \log B}\bigg{)}\right)\frac{B(\log B)^{|J|-2}}{(|J|-2)!}, \tag{6.2}\]
where \(\tilde{c}_{X,H}=\mathcal{X}_{\Lambda_{J}(\widetilde{X})}(\mathbf{d})\lim_{ \boldsymbol{s}\to\mathbf{d}}H^{\ast}(\boldsymbol{s},1)\prod_{j\in J}(s_{j}- \mathbf{d}_{j})\). By (2.4), \(\mathcal{X}_{\Lambda_{J}(\widetilde{X})}=\mathcal{X}_{\Lambda_{J}(X)}\). Also, Lemma 2.11(1)\(\Rightarrow\)(2) for \(\widetilde{H}_{v}^{\ast}(\pi^{\ast}\boldsymbol{s},1)\) on \(\widetilde{X}\), followed by Lemma 2.11(2)\(\Rightarrow\)(3) for \(X\), gives \(\lim_{\boldsymbol{s}\to\mathbf{d}}H^{\ast}(\boldsymbol{s},1)\prod_{j\in J}(s_{ j}-\mathbf{d}_{j})=\tau(X,H)\). Therefore, \(\tilde{c}_{X,H}\) equals Peyre's constant (2.8). Meanwhile, \(|J|-2=\operatorname{rank}(\operatorname{Pic}(X))-1\) by Proposition 2.2.
Still, \(\widetilde{\mathsf{Z}}_{1}\) remains. Let \(\boldsymbol{s}:=s(\mathbf{d}+\mathbf{u})+\mathbf{u}=\mathbf{d}+2\mathbf{u}+(s-1)(\mathbf{d}+\mathbf{u})\); note that \(\mathbf{d}+\mathbf{u}\in\mathbb{Z}_{\geq 1}^{J}\) by Proposition 2.3. Define \(\mathfrak{m}\), \(\mathfrak{n}\), \(k\), \(l\), \(c_{j}\), \(z\), \(\alpha_{u}\) in terms of \(X\) as in §5, even if \(D\) does not have strict normal crossings. Let \(v_{i}:=\mathbf{d}_{\mathfrak{m}(i)}+\mathbf{u}_{\mathfrak{m}(i)}\geq 1\) for \(i\in\{1,\ldots,k\}\), and \(\nu_{j}:=\mathbf{d}_{\mathfrak{n}(j)}+\mathbf{u}_{\mathfrak{n}(j)}\geq 1\) for \(j\in\{1,\ldots,l\}\). Let \(\Pi_{u}:=m_{1}^{v_{1}}\cdots m_{k}^{v_{k}}\prod_{j\geq 1:\,c_{j}=c_{u}}n_{j}^{\nu_{j}}\) and \(\varrho_{u}:=B/\Pi_{u}\) for \(u\in\{1,\ldots,z\}\). By Lemmas 5.4, 5.8, and 5.10 for \(\widetilde{X}\), we have (since \(1=\mathcal{P}_{1}+\mathcal{P}_{2}+\mathcal{P}_{3}\))
\[\int_{\Re(s)=\sigma}w^{\vee}(s)B^{s-1}\widetilde{\mathsf{Z}}_{1}(\pi^{\ast}\boldsymbol{s},1_{G})\,dt\ll B^{-\delta}+\sum_{1\leq u\leq z}\,\sum_{m_{1},\ldots,m_{k}\geq 1}\,\sum_{n_{j}\geq 1:\,j\geq 1,\,c_{j}=c_{u}}\frac{(\alpha_{u}^{\delta}+\alpha_{u}^{-\delta})^{-1}(\varrho_{u}^{\delta}+\varrho_{u}^{-\delta})^{-1}}{m_{1}\cdots m_{k}\prod_{j\geq 1:\,c_{j}=c_{u}}n_{j}}\]
for some \(\delta=\delta_{X}>0\), where the factor of \((\varrho_{u}^{\delta}+\varrho_{u}^{-\delta})^{-1}\) comes from the bound
\[\int_{\Re(s)=\sigma}w^{\vee}(s)\varrho_{u}^{s-1}h_{u}(\boldsymbol{s},m_{1}, \ldots,m_{k},(n_{j})_{j\geq 1:\,c_{j}=c_{u}})\,dt\ll(\varrho_{u}^{\delta}+ \varrho_{u}^{-\delta})^{-1}\]
proven by shifting \(\Re(s)=\sigma\) to \(\Re(s)=1\pm\delta\).
Suppose \(u\in\{1,\ldots,z\}\). Let \(c_{u}=q\); then \(|\{j\geq 1:c_{j}=c_{u}\}|=|J_{1}^{q}|\). Let \(A,P\in\mathbb{R}_{>0}\). The exponents \(v_{i}\), \(\nu_{j}\) in \(\Pi_{u}\) are positive, and the exponent vectors of \(\alpha_{u}\), \(\Pi_{u}\) are linearly independent over \(\mathbb{R}\), so the number of tuples of \(M_{i},N_{j}\in\{2^{e}:e\in\mathbb{Z}_{\geq 0}\}\) admitting \(m_{i}\in[M_{i},2M_{i})\), \(n_{j}\in[N_{j},2N_{j})\) with \(\alpha_{u}\in[A,2A)\), \(\varrho_{u}\in[P,2P)\) is \(\ll(\log(2+B/P))^{k+|J_{1}^{q}|-2}\). Thus
\[\sum_{m_{1},\ldots,m_{k}\geq 1}\,\sum_{n_{j}\geq 1:\,j\geq 1,\,c_{j}=c_{u}}\frac{( \alpha_{u}^{\delta}+\alpha_{u}^{-\delta})^{-1}(\varrho_{u}^{\delta}+\varrho_{u }^{-\delta})^{-1}}{m_{1}\cdots m_{k}\prod_{j\geq 1:\,c_{j}=c_{u}}n_{j}}\ll\sum_{A,P\in\{2^{e}:e \in\mathbb{Z}\}}\frac{(\log(2+B/P))^{k+|J_{1}^{q}|-2}}{(A^{\delta}+A^{-\delta}) (P^{\delta}+P^{-\delta})},\]
which is \(\ll(\log B)^{k+|J_{1}^{q}|-2}\). But \(k=|J_{2}^{\ast}|\), so \(k+|J_{1}^{q}|\leq\operatorname{rank}(\operatorname{Pic}(X))\) by Proposition 3.4. Summing over \(1\leq u\leq z\), and using the \(\mathbb{C}\pi^{\ast}\mathbf{u}\)-invariance of \(\widetilde{\mathsf{Z}}_{1}(\boldsymbol{z},1_{G})\) for \(\Re(\boldsymbol{z})\) large (which follows from (2.20), (2.21) for \(\widetilde{X}\)), we get (1.2), in view of (6.1), (6.2).
The final sentence of Theorem 1.1 follows from the fact that any continuous function on a compact interval can be approximated from above and below by functions in \(C_{c}^{\infty}(\mathbb{R})\).
It would be interesting to know whether ergodic methods (of e.g. [1]) could give another proof of Theorem 1.1, at least up to \(o(1)\) if not quantitatively.
## 7. Examples of smooth equivariant compactifications over the rationals
Classification of \(X\) is open. Some examples can be found in [12, §1]; see also [10], which is however written in terms of left actions rather than right actions (which are equivalent via the map \(g\mapsto g^{-1}\)). One can obtain further examples by \(G\)-equivariant blowup.
We focus on constructions where interesting behavior with special divisors may occur.
**Example 7.1**.: Embed \(G\) in \(X:=\mathbb{P}^{1}\times\mathbb{P}^{1}\) by \((a,b)\mapsto([1:a],[a:b])\). The left \(G\)-action on \(G\) does not extend to \(X\). The right \(G\)-action does: \(([x_{0}:x_{1}],[t_{0}:t_{1}])(u,v):=([x_{0}:x_{1}u],[t_{0}u:t_{0}v+t_{1}])\). Here \(a=x_{1}/x_{0}\), \(b=x_{1}t_{1}/x_{0}t_{0}\), \(|J|=3\), \(|J_{1}^{0}|=1\), \(|J_{2}^{\ast}|=1\), \(|J_{3}|=1\).
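Concretely, restricting the displayed action to the open orbit recovers the right group law \((a,b)(u,v)=(au,av+b)\) in these coordinates:

\[([1:a],[a:b])\cdot(u,v)=([1:au],[au:av+b]),\qquad\text{so }a^{\prime}=au,\quad b^{\prime}=\frac{x_{1}^{\prime}t_{1}^{\prime}}{x_{0}^{\prime}t_{0}^{\prime}}=\frac{au\,(av+b)}{au}=av+b.\]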
The following illuminates why \(\mathbb{Q}\)-translates of \(b\), not just \(b\) itself, matter in our work:
**Example 7.2**.: Given \(t\in\mathbb{Q}\) and a compactification \(i\colon G\to X\), the _left_ translate \(i_{t}\colon G\to X\) given by \((^{\prime}a,^{\prime}b)\mapsto i((1,t)(^{\prime}a,^{\prime}b))\) satisfies \({}^{\prime}a=a\), \({}^{\prime}b+t=b\), \({}^{\prime}J=J\), \({}^{\prime}J_{1}^{c}=J_{1}^{c+t}\).
We end with two sorts of _orbit closures_ (cf. [11, first two paragraphs of §1]): one via explicit projective representations, and one via products of equivariant compactifications.
**Example 7.3**.: Let \(G\) act on \(\mathbb{P}(\{f\in\mathbb{Q}[x]:\deg f\leq n\})\cong\mathbb{P}^{n}\) by \(fg:=f\circ g\). Say \(n\geq 3\), and choose a set \(R\in\binom{\mathbb{Q}}{n}\) on which \(G(\mathbb{Q})\) acts faithfully. The closure \(C\) of the \(G\)-orbit of \([\prod_{\varrho\in R}(x-\varrho)]\in\mathbb{P}^{n}\) is a singular equivariant compactification of \(G\). Its boundary includes at least the twisted curve \([(Ax+B)^{n}]\) via \((a,b)=(\lambda A,\lambda B)\) with \(\lambda\to\infty\), and the line \([Ax+B]\) via \((a,b)=(\epsilon A,\varrho+\epsilon B)\) with \(\varrho\in R\), \(\epsilon\to 0\). Let \(X\to C\) be an equivariant resolution of singularities. Then \(|J_{2}^{*}|\geq 1\), \(|J_{1}^{\varrho}|\geq 1\) (for \(\varrho\in R\)), \(|J|\geq 1+n\). Cf. [10, Example 1.8] on \(\operatorname{SL}_{2}(\mathbb{Z})\)-orbits of binary forms; but our groups and height functions differ substantially.
**Example 7.4**.: Let \(X\), \(Y\) be smooth equivariant compactifications of \(G\) over \(\mathbb{Q}\). Let \(Z\) be the closure of the diagonal \(\{(g,g):g\in G\}\) in \(X\times Y\). Let \(W\to Z\) be an equivariant resolution if necessary; then \(W\) is a smooth equivariant compactification of \(G\) over \(\mathbb{Q}\). Since \(W\) maps onto \(X\), \(Y\), this construction blends the boundary data on \(X\), \(Y\). For example, if \((L,V)\in\{J,J_{1},J_{2},J_{3},I^{c},J_{1}^{c},J_{2}^{*}\}\times\{X,Y\}\), then \(L(W)\) maps onto \(L(V)\).
## Acknowledgements
I thank Yuri Tschinkel for introducing me to the beautiful paper [12] (and associated open questions), and thank him as well as Ramin Takloo-Bighash and Sho Tanimoto for their encouragement and comments. Also, I thank Tim Browning and Dan Loughran for comments and suggestions concerning Manin-Peyre, homogeneous spaces, and splitness. Thanks also to Anshul Adve, Peter Sarnak, Katy Woo, and Nina Zubrilina for some interesting discussions. Finally, I thank the Browning Group and Andy O'Desky for many conversations.
|
2309.06538 | Development of a model for stock price prediction based on sentiment analysis of tweets | Training machine learning models for predicting stock market share prices is
an active area of research since the automatization of trading such papers was
available in real time. While most of the work in this field of research is
done by training Neural networks based on past prices of stock shares, in this
work, we use iFeel 2.0 platform to extract 19 sentiment features from posts
obtained from microblog platform Twitter that mention the company Petrobras.
Then, we used those features to train XBoot models to predict future stock
prices for the referred company. Later, we simulated the trading of Petrobras'
shares based on the model's outputs and determined the gain of R$88,82 (net) in
a 250-day period when compared to a 100 random models' average performance. | Mario Mitsuo Akita, Everton Josue da Silva | 2023-09-11T17:32:54Z | http://arxiv.org/abs/2309.06538v1 | # Development of a model for stock price prediction based on sentiment analysis of _tweets_
###### Abstract
Training machine learning models to predict stock prices has been an increasingly popular subject as technological advances have made it possible to send buy and sell orders automatically and almost instantaneously. While most approaches in this field consist of training neural network models based solely on the assets' historical prices, in this work we use the _iFeel_ 2.0 platform to extract 19 sentiment indicators from posts on the microblogging platform _Twitter_ that mention the company Petrobras, and we train XGBoost models to predict the company's share price. We then simulate trading based on the model's outputs and compare it with the average of 100 random models, finding a gain of **R$88.82** (gross) over the period when using the trained model, compared with the average return of the hundred random models.

sentiment analysis, tweets, stock quotes, stocks, Petrobras, iFeel
## I Introduction
Since the digitalization of financial-asset trading operations intensified in the 1990s, stock trading has been the target of intense study aimed at producing algorithms capable of delivering financial returns to investors automatically. More recently, given the evolution of computational power, which enabled increasingly complex trading models, automated algorithmic trading already accounts for at least 50% of all stock trades on stock exchanges [10].

Traditionally, stock price prediction models are built from statistics on prices, trading volumes, moving averages, and other statistical information about the financial asset in question, and are therefore considered an evolution of the technical analysis school [3]. The recent availability of large volumes of data and the increase in processing power created a favorable environment for the development of new, more complex algorithms [4], such as different neural networks or combinations of several classical algorithms, which made it possible to use non-statistical aspects such as sentiment indicators extracted from news [5] or tweets [6], [7].

In this work, we use recent Natural Language Processing (NLP) techniques to extract 19 sentiment indicators through models already available in the literature; these are then used as _features_, together with price and trading-volume statistics, to train an XGBoost model with the goal of predicting future quotes of the preferred shares of Petróleo Brasileiro S.A. - Petrobras (PETR4).
## II Objective
In this project, we develop computational models to predict variations in the price of the Petrobras preferred share (PETR4). The goal of the project is to develop models that perform better than random models and that achieve higher financial gains than operations on the share carried out within the time interval reserved for testing.
## III Theoretical background

### _iFeel 2.0_

A Web application implemented by Araújo _et al._ [8] to simplify the implementation and use of multiple sentence-level sentiment analysis methods. A total of 19 sentiment analysis models are supported, briefly explained below:
#### Emoticons

Proposed by Gonçalves _et al._ [9], it assigns a sentiment score based on the _emoticons_ used within the sentence.

#### Happiness Index

Proposed by Dodds _et al._ [10], it consists of a scale from 1 to 9 in which sentences are classified according to the usage of 1034 words and their scores in the Affective Norms for English Words (ANEW) [11].

#### SentiWordNet

A tool proposed by Esuli _et al._ [12], commonly used for opinion classification, based on a lexical dictionary called WordNet that covers English-language words. The model groups words into sets called _synsets_; then, according to the words and intensifiers in each set, it computes a score reflecting the positive or negative sentiment of each _synset_. Finally, all _synsets_ are weighted to compute the overall polarity of the sentence.

#### SenticNet

Proposed by Cambria _et al._ [13], it is a model that applies NLP techniques based on artificial intelligence. It contains 14 thousand concepts that are used to compute the polarity of a sentence, and it was initially used to evaluate comments from patients of the National Health Service (NHS) in England.

#### PANAS-t

A psychometric scale proposed by Gonçalves _et al._ [14] to detect mood, based on the PANAS (_Positive Affect Negative Affect Scale_) method, which analyzes text against nine mood categories. In the _iFeel_ implementation, to produce the sentence-level global polarity scale, four moods were taken as positive polarity, four as negative polarity, and one as neutral.
## IV Methodology

### _Stock quote acquisition_

The quote dataset describes the trading of the PETR4 share in 5-minute intervals; its attributes include:
* **Close**: price of the last trade in the period.
* **Tickvol**: number of trades executed in the period.
* **Vol**: number of shares traded in the period.
* **Spread**: indicates whether there was a _spread_ (difference between buy and sell prices) in the period's trades.
### _Tweet acquisition_

The _tweets_ were acquired through a _Python_ script developed by the author. The data were obtained through the official _Twitter_ API v2 [33] using the _Tweepy_ library. "Researcher" status was granted for academic use of the API, which increased the number of _tweets_ that could be queried.

To build the base, we initially used all _tweets_ posted between 13:30 and 19:50 GMT - which corresponds to the usual trading hours of the Brazilian stock exchange - in the period from 23/8/2021 to 30/6/2022. The base was later extended to contain the _tweets_ created between 1/1/2021 and 30/11/2022.

The query filtered only tweets containing the _hashtags_ #PETR3 or #PETR4, as well as those containing the company name "Petrobras". _Retweets_ and tweets written in a language other than Portuguese were excluded.

The data contained in the _tweet_ dataset are the following:

* **Created_at**: _timestamp_ of the date and time at which the post was published.
* **Text**: the text of the post.
* **Like**: number of "likes" the post received.
* **Quote**: number of quotes of the post.
* **Reply**: number of replies to the post.
* **Retweet**: number of users who republished the post on their own accounts.
* **User_followers**: number of followers of the post's author.
* **User_following**: number of users the post's author follows.
* **User_tweets**: total number of posts by the author.
* **User_listed**: number of lists created by other users that include the post's author.

Due to the limits imposed by the _Twitter_ API - at most 300 _tweets_ returned per query, 180 queries every 15 minutes, and one query per second - it was necessary to split the queries by time period, slow down their processing using the _Python_ _sleep()_ command, and manually restrict the queries so that only one month was queried per execution of the _script_.
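A minimal sketch of such a collection loop is shown below, using _Tweepy_'s v2 full-archive search; the bearer token, query string, and field selection are illustrative assumptions rather than the original script.

```python
import time
import tweepy

client = tweepy.Client(bearer_token="ACADEMIC_RESEARCH_BEARER_TOKEN")
query = "(#PETR3 OR #PETR4 OR Petrobras) -is:retweet lang:pt"

pages = tweepy.Paginator(
    client.search_all_tweets,           # full-archive search (academic access)
    query=query,
    start_time="2021-01-01T00:00:00Z",  # one month per run, as described above
    end_time="2021-01-31T23:59:59Z",
    tweet_fields=["created_at", "public_metrics"],
    max_results=100,
)

rows = []
for page in pages:
    for tweet in page.data or []:
        m = tweet.public_metrics
        rows.append((tweet.created_at, tweet.text, m["like_count"],
                     m["quote_count"], m["reply_count"], m["retweet_count"]))
    time.sleep(1)  # respect the one-request-per-second limit
# Posts outside the 13:30-19:50 GMT trading window can be filtered afterwards.
```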
### _Tweet preprocessing_

After querying the _tweets_ posted in the intended periods, operations were carried out to preprocess the obtained data and remove some elements that would not be used.

The first measure taken was filtering out the _links_ present in the posts. Since they are merely _site_ addresses, they were removed because they do not influence the polarity computation of the _tweets_. Another element removed from the bases was mentions of other users. Mentions are characterized by the "@" character followed by the name of some user of the network, and serve as a kind of _link_ to tag other users in posts that may interest them. They were filtered out because they carry no information relevant to the polarity computation of the _tweets_.

Non-alphabetic symbols, _emojis_, and punctuation were also removed. The idea behind this removal was to simplify obtaining the polarity metrics of the _tweets_ in a context where NLP models would be developed specifically for this work (as originally planned). However, merely developing a language model is a complex task that demands massive amounts of training data, so the development idea was eventually replaced. Since the dataset had already been processed when the plan changed, the removed _emojis_ and symbols ended up compromising the polarity computation in some of the models used.

Fig. 1: Schematic representation of the pipeline implemented in this work.

Further processing steps were the separation of date and time and the conversion to the Brazilian time zone (GMT-3). Necessary for correctly assigning the prices and statistics obtained in the previous item to the tweets, these procedures standardize dates and times to avoid errors and simplify the visualization of the constructed pipeline. Another operation performed was the elimination of duplicates; while a routine dataset-sanitation step, it is important to avoid data contamination when splitting into training and test sets. Its absence could harm the computation of the statistics of each period and, consequently, the training of the model as a whole.

Finally, _tweets_ with little information were removed. Posts with 2 words or fewer, as well as those with fewer than 20 characters, were eliminated due to the high chance of not containing relevant information.
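The cleaning rules above can be summarized in a short _pandas_ sketch (the regular expressions, column names, and file name are illustrative assumptions):

```python
import re
import pandas as pd

def clean_text(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)       # drop links
    text = re.sub(r"@\w+", " ", text)               # drop mentions
    text = re.sub(r"[^0-9A-Za-zÀ-ÿ\s]", " ", text)  # drop symbols/emojis/punctuation
    return re.sub(r"\s+", " ", text).strip()

df = pd.read_csv("tweets_raw.csv", parse_dates=["created_at"])
df["text"] = df["text"].map(clean_text)

# Convert the (UTC) timestamps to the Brazilian time zone, assuming they were
# stored with their UTC offset, then drop duplicates and low-information posts.
df["created_at"] = df["created_at"].dt.tz_convert("America/Sao_Paulo")
df = df.drop_duplicates(subset="text")
df = df[(df["text"].str.split().str.len() > 2) & (df["text"].str.len() >= 20)]
```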
### _Translation of the tweet text_

Due to the scarcity of models trained specifically for Brazilian Portuguese, the _iFeel_ 2.0 system includes a built-in translation tool. Despite this, the translation module raised API connection errors and was unavailable during the testing periods. Given this scenario, the translation of the extracted _tweets_' text was accomplished using the functions built into _Google Spreadsheets_.

This function internally uses _Google_ Translate to provide the translations and was necessary because all models used by iFeel 2.0 rely on English lexical dictionaries or were trained on text corpora in that language.
### _Extraction of the sentiment indicators_

The extraction of the sentiment polarities was performed with the _iFeel_ 2.0 program. Available for _download_ as a _Docker_ image, the program was executed month by month in three environments: a personal laptop, a personal desktop computer, and a cloud server.

The processing proved challenging, as freezes and crashes of the Java virtual machine used in the project occurred frequently and required constant manual intervention. At the end of this process, the monthly _datasets_ were combined into one large dataset of 323,460 samples and 39 attributes, covering content published over a span of 22 months.
### _Exploratory analysis_

After the data extraction, a brief exploratory data analysis was carried out to better understand the dataset at hand. Some of the main findings are discussed below:

#### Class balance

There is a slight class imbalance. While there are 139,885 samples for periods in which the quote rose (~56.5% of the total), the samples for periods of decline amount to 107,718 (~43.5% of the total).

#### Distribution of the sentiment attributes

In general, the vast majority of tweets were classified as neutral, with the remaining posts distributed fairly evenly between the two polarities. There are exceptions, such as the _PANAS-t_ method, which classified almost all samples as neutral, and _EMOLEX_ and _OPINION LEXICON_, which classified many more samples as negative. Other models, such as _EMOTICONS_ and _EMOTICONS-DS_, were obviously harmed by the emoticon cleaning performed during preprocessing. Fig. 2 shows the distribution of some of the extracted _features_.

Fig. 2: Score distribution of the samples for some of the models.

#### Correlation study

The target variable showed no significant correlation with any of the other attributes; the only important correlations were recorded among the computed polarity values - which is, to some extent, expected, since many models may be trained on similar text corpora.
### _Feature engineering_
At this stage, aiming to increase the number of _features_ in the dataset, new attributes were created to provide other information that might be relevant during training, such as the creation time and the length of the _tweets_. The idea is to increase the variability of the _features_ presented for training with data beyond the sentiment analysis attributes alone. The three derived attributes are described below, followed by a short sketch of how they can be computed.
#### Iii-G1 Hour
an integer corresponding to the hour at which the post was published.
#### Iii-G2 Word count
the word count of the post. It was also used for a data cleaning step that removed all posts with fewer than 3 words.
#### Iii-G3 Text length
the total length of the _tweet_. Intended as a reliability indicator for the polarity _features_ during training, it rests on the premise that the longer the post, the larger the number of words considered when computing the polarity and, consequently, the more "reliable" the metric is.
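The sketch below derives these three attributes with pandas; the DataFrame and its column names (`text`, `created_at`) are hypothetical stand-ins for the actual extraction output.

```python
import pandas as pd

# Minimal sketch of the three derived attributes described above.
tweets = pd.DataFrame({
    "text": ["exemplo de postagem sobre a acao da empresa", "alta!"],
    "created_at": pd.to_datetime(["2021-10-27 10:03", "2021-10-27 10:04"]),
})

tweets["hour"] = tweets["created_at"].dt.hour                 # Iii-G1: posting hour
tweets["word_count"] = tweets["text"].str.split().str.len()   # Iii-G2: word count
tweets["text_length"] = tweets["text"].str.len()              # Iii-G3: text length

# Cleaning step described above: drop posts with fewer than 3 words.
tweets = tweets[tweets["word_count"] >= 3]
```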
To build the final _dataset_, statistics over all the _tweets_ contained in each interval were used, so that each time interval is represented by a single data sample. This creates a base that is balanced in the number of samples per period and induces a sequence between the samples of consecutive periods.
For each of the 27 tweet attributes and computed polarities, the following statistics were used:
* Mean
* Standard deviation
* Minimum
* Maximum
* Sum
* Variance
* Count of the number of samples in the period
In addition to the statistics, lagged versions of these statistics and of the price and trading volume attributes were added as _features_. This creates a temporal relation between the data points, which matters because during training the model is exposed to current-time data as well as data from earlier time steps.
The following time windows were used to add the lagged attributes (a sketch combining the aggregation and the lags follows the list):
* 5-minute lag
* 10-minute lag
* 15-minute lag
* 20-minute lag
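A minimal sketch of the per-interval aggregation and the lagged copies, continuing the `tweets` frame from the sketch above; `attr_cols` is a hypothetical stand-in for the 27 per-tweet attributes.

```python
import pandas as pd

# Collapse all tweets inside each 5-minute interval into one sample.
attr_cols = ["word_count", "text_length"]  # placeholder subset of the 27 attributes

agg = (tweets.set_index("created_at")[attr_cols]
       .resample("5min")
       .agg(["mean", "std", "min", "max", "sum", "var", "count"]))
agg.columns = ["_".join(col) for col in agg.columns]  # flatten the MultiIndex

# Lags of 5, 10, 15 and 20 minutes are 1-4 rows at a 5-minute sampling rate.
base_cols = list(agg.columns)
for lag in (1, 2, 3, 4):
    lagged = agg[base_cols].shift(lag).add_suffix(f"_lag{lag * 5}min")
    agg = agg.join(lagged)
```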
### _Joining quote data with the tweets_
Due to strategy changes that occurred during the development of the project, this step was performed at different points of the implemented data _pipeline_. Even so, since it is a join of two datasets, it only adds new _features_ to each sample, and its presence or absence does not interfere with the processing of the previous steps.
This step seeks to attach the stock's prices and trading statistics to the _tweets_ published at that moment; therefore, the posting time of each _tweet_ is rounded down to the immediately lower multiple of 5 minutes, and the quotes of that time are assigned to the post.
Additionally, the dataset also receives its target attribute. Since the goal of this work is to predict the stock quote in the next time period (5 minutes), we assign the closing price at t + 5min to each sample to be used as the target during training.
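A sketch of the join and the target construction; `quotes` is a hypothetical 5-minute OHLC table and `tweets` continues from the earlier sketches.

```python
import pandas as pd

# Hypothetical 5-minute quote table for the stock.
idx = pd.date_range("2021-10-27 10:00", periods=4, freq="5min")
quotes = pd.DataFrame({"open": [28.0, 28.1, 28.0, 28.2],
                       "close": [28.1, 28.0, 28.2, 28.3]}, index=idx)

# Round each posting time down to the previous 5-minute mark, attach quotes.
tweets["interval"] = tweets["created_at"].dt.floor("5min")
merged = tweets.merge(quotes, left_on="interval", right_index=True, how="left")

# Target: the closing price at t + 5min, here also turned into an up/down label.
quotes["close_next"] = quotes["close"].shift(-1)
quotes["target"] = (quotes["close_next"] > quotes["close"]).astype(int)
```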
### _Model training_
An XGBoost model was trained to make all the predictions for the day, using the following training parameters (a hedged instantiation sketch follows the list):
* ETA: 0.01
* N_ESTIMATORS: 300
* RANDOM_STATE: 4321
* SCALE_POS_WEIGHT: 0.6
* MAX_DEPTH: 5
* OBJECTIVE: "binary:logistic"
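The sketch below maps these parameters onto scikit-learn's XGBoost wrapper; interpreting ETA as the learning rate and the objective as the library's `binary:logistic` are assumptions on our part.

```python
from xgboost import XGBClassifier

# Hedged sketch of the configuration listed above.
model = XGBClassifier(
    learning_rate=0.01,        # ETA
    n_estimators=300,
    random_state=4321,
    scale_pos_weight=0.6,
    max_depth=5,
    objective="binary:logistic",
)
```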
#### Iii-I1 Separation of the test and validation sets
Before defining which data will be considered in the split, it is necessary to define the number of days to be included in each group. By default, the following values were used:
* Training days: 200
* Validation days: 1
* Test days: 1
#### Iii-I2 Training the validation model
Thus, for example, to train the first validation model on the created _dataset_, all samples between 4/1/2021 and 26/10/2021 would be used as training data. The samples of the following day (27/10/2021) are used for validation, so as to optimize the model for this period. Note, however, that the data to actually be classified are those of 28/10/2021 - which are not used in this first training.
#### Iii-I3 Training the prediction model
To generate the model that will make the data predictions, the best model obtained in the previous step is retrained on the training-day data together with the validation-day data. Performance is only measured using the data of the day reserved for testing.
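A sketch of this walk-forward scheme, assuming a hypothetical date-ordered list of per-day DataFrames.

```python
# 200 training days, 1 validation day and 1 test day, sliding one day at a time.
def walk_forward(daily_groups, n_train=200, n_val=1, n_test=1):
    window = n_train + n_val + n_test
    for start in range(len(daily_groups) - window + 1):
        yield (daily_groups[start:start + n_train],                     # training
               daily_groups[start + n_train:start + n_train + n_val],  # validation
               daily_groups[start + n_train + n_val:start + window])   # test
```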
### _Performance simulation_
A performance calculator was implemented to verify what the financial result would be when applying the trained prediction algorithm, simulating buy and sell operations on the stock according to the result predicted by the model. A set of 100 random models was used as a _baseline_ to understand whether the trained model really produces better results than the average of the random models.
For this purpose, the quote table itself was used, along with the following rules (a simulation sketch follows the list):
* If the value predicted by the model (or the value chosen at random, for the random models) is 1, a purchase of the stock at the opening price is simulated, followed by a sale at the end of the period.
* Otherwise, a short sale of the stock is made, which consists of selling the stock at the opening price followed by a purchase at the end of the period.
* The daily return of each model is calculated according to these rules.
For all return calculations, any brokerage fees, exchange fees, taxes, interest, and any other charges were disregarded. The final return was always calculated considering the trading of one lot of shares (100 shares).
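A minimal sketch of this simulation under the stated assumptions (one 100-share lot, no fees); the function and argument names are hypothetical.

```python
# Long when the prediction is 1, short otherwise, trading one 100-share lot.
LOT = 100

def gross_return(predictions, open_prices, close_prices):
    total = 0.0
    for pred, open_px, close_px in zip(predictions, open_prices, close_prices):
        if pred == 1:                 # buy at open, sell at close
            total += (close_px - open_px) * LOT
        else:                         # short sell at open, buy back at close
            total += (open_px - close_px) * LOT
    return total
```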
### _Presentation and analysis of results_
The trained model showed an excess gross return of R$77.00 against the average of -R$11.82 for the random models during the test period between 27/10/2021 and 28/10/2022. In other words, over the period considered there was an average gain of R$88.82 from choosing the trained model over a hypothetical random model, here represented by the average of 100 random models.
A chart with the gross return obtained in each period can be seen in Figure 3. In it, we can observe that for most of the studied period the trained model stayed above the average random model. This shows a relative consistency of the trained model in delivering results better than random.
Below we present the _Macro_ metrics of the trained model during the validation period:
* Precision: 0.51
* _Recall_: 0.52
* F-1: 0.40
* AUC: 0.5153
* _Logloss_: 0.6923
### _Brief discussion of results_
Although the _macro_ metrics are not considered good, the proposed model managed to obtain a positive result against a set of randomly operating models, which demonstrates a certain consistency of the model in delivering results over a relatively long time span (about 1 year).
When considering the periods in which the model beat or lost to the average of the random models, we can observe that the trained model won in 215 periods and lost in 37 of them.
There was a calculated gross profit of R$77.00 during the period reserved for testing, which represents an average gain of R$0.31 per operation (R$77.00 over 215 + 37 = 252 periods), considering the purchase and sale of one lot of shares (100 shares). The average of the 100 randomly operating models produced a total gain of -R$11.82 over the period, equivalent to an average gain of -R$0.05 per operation.
Some conjectures about factors that may have interfered with the model's performance are:
* The data may not be sufficiently correlated with the quotes of specific stocks: the _Twitter_ platform in general tends to be a stage for disputes and complaints, which would give us much more negative-opinion content than positive, and this does not always match stock quotes. This phenomenon may have occurred here since, although the price _dataset_ is slightly unbalanced toward the positive side, the polarities are slightly more negative than positive. Moreover, situations such as fuel price increases, for example, may raise the amount of criticism on social networks while the stock rises, due to the possible increase in revenue and, later, in the company's profits.
* Election period: a considerable part of the test data fell within an election period, which may have caused a decoupling between the collected data and the quotes, since Petrobras is a state-owned company. This fact is best observed in the stretch in which the model's performance stayed below the average of the random models, in the final part of the chart in Figure 3. That period (August/September 2022) coincides exactly with the start of the campaigns for that year's general elections.
Fig. 3: Chart showing the return over the test period.
* Translation: the automatic translation of the tweets from Portuguese into English may have caused a loss of semantic content and harmed the computation of the polarities, since expressions commonly used in Brazil may carry a stronger or weaker sentiment load than their English equivalents.
## V Conclusion and future work
Considering the full extent of the work developed so far, which ranged from data acquisition through feature extraction, an extensive study of the acquired data, and finally model training and presentation of results, we can state that this was an end-to-end project carried out using only resources freely available on the internet (except _Google Colab PRO_).
In this sense, the goal of developing a model able to beat a random one for stock trading based on sentiment indicators extracted from _tweets_ was considered achieved.
There is, however, a vast number of points where future work and alternative approaches could correct or improve the shortcomings of the approach described here and, consequently, improve the results. Below are some suggestions for future continuations of this work.
### _Expanding the analyzed time span_
Initially intended to cover a finer data granularity (1-minute intervals), finding data available for free, or at a very low price, for this time interval proved to be a fruitless task. However, when we migrated to the 5-minute interval, we could easily find data going back to 2018. One idea is to use this larger base to acquire more data from periods not considered in this work and, eventually, produce better models.
### _Expanding to other companies' stocks_
In this work we focused on Petrobras shares, since it is the most liquid stock and, by a wide margin, the most discussed listed company on the social network - especially after the successive price adjustments that occurred in 2021 and 2022. However, the same strategy could be implemented for other companies (or even a set of companies) in future work, moving closer to the approach of publications that used stock indices [5].
### _Using other modeling strategies_
In this work we focused on _XGBoost_ as the algorithm of choice to train our model; however, there are other algorithms that could also be implemented, for example:
#### V-C1 LSTM
Widely used for predicting data with a temporal component between samples [34], it could be used for predicting stock quotes;
#### V-C2 SOFNN
A neural network that uses _Fuzzy_ logic, it has already been used to predict stock indices from data obtained through sentiment analysis of _Twitter_ posts [5] and could be applied to this problem.
### _Using other data sources_
Other social networks, or even news and journal _feeds_, could be used to add reliability to the data.
### _Using other functions to measure the algorithm's performance during training_
As we could observe in the presentation of results, periods with higher accuracy, or even higher AUC, do not always yield better returns. By default, the _XGBoost_ library ships with two functions to gauge performance during training: _AUC_ and _LogLoss_. It may be necessary to add some weighting component to these functions, or to develop a return-based function, in order to improve the return metrics of the generated models.
|
2305.19915 | Source Code Data Augmentation for Deep Learning: A Survey | The increasingly popular adoption of deep learning models in many critical
source code tasks motivates the development of data augmentation (DA)
techniques to enhance training data and improve various capabilities (e.g.,
robustness and generalizability) of these models. Although a series of DA
methods have been proposed and tailored for source code models, there lacks a
comprehensive survey and examination to understand their effectiveness and
implications. This paper fills this gap by conducting a comprehensive and
integrative survey of data augmentation for source code, wherein we
systematically compile and encapsulate existing literature to provide a
comprehensive overview of the field. We start with an introduction of data
augmentation in source code and then provide a discussion on major
representative approaches. Next, we highlight the general strategies and
techniques to optimize the DA quality. Subsequently, we underscore techniques
useful in real-world source code scenarios and downstream tasks. Finally, we
outline the prevailing challenges and potential opportunities for future
research. In essence, we aim to demystify the corpus of existing literature on
source code DA for deep learning, and foster further exploration in this
sphere. Complementing this, we present a continually updated GitHub repository
that hosts a list of update-to-date papers on DA for source code modeling,
accessible at \url{https://github.com/terryyz/DataAug4Code}. | Terry Yue Zhuo, Zhou Yang, Zhensu Sun, Yufei Wang, Li Li, Xiaoning Du, Zhenchang Xing, David Lo | 2023-05-31T14:47:44Z | http://arxiv.org/abs/2305.19915v4 | # Data Augmentation Approaches for Source Code Models: A Survey
###### Abstract
The increasingly popular adoption of source code models in many critical tasks motivates the development of data augmentation (DA) techniques to enhance training data and improve various capabilities (e.g., robustness and generalizability) of these models. Although a series of DA methods have been proposed and tailored for source code models, there lacks a comprehensive survey and examination to understand their effectiveness and implications. This paper fills this gap by conducting a comprehensive and integrative survey of data augmentation for source code, wherein we systematically compile and encapsulate existing literature to provide a comprehensive overview of the field. We start with an introduction of data augmentation in source code and then provide a discussion on major representative approaches. Next, we highlight the general strategies and techniques to optimize the DA quality. Subsequently, we underscore techniques that find utility in widely-accepted source code scenarios and downstream tasks. Finally, we outline the prevailing challenges and potential opportunities for future research. In essence, this paper endeavors to demystify the corpus of existing literature on DA for source code models, and foster further exploration in this sphere. Complementing this, we present a continually updated GitHub repository that hosts a list of up-to-date papers on DA for source code models, accessible at [https://github.com/terryyz/DataAug4Code](https://github.com/terryyz/DataAug4Code).
Footnote †: \(\dagger\) Corresponding author.
## 1 Introduction
Data augmentation (DA) is a technique used to increase the variety of training examples without collecting new data. It has gained popularity in recent machine learning (ML) research, with methods like back-translation (Sennrich et al., 2015; Shiri et al., 2022), Mixup (Zhang et al., 2018), and synthetic audio (Asyrofi et al., 2021) being widely adopted in natural language processing (NLP), computer vision (CV), and speech recognition. These techniques have significantly improved the performance of data-centric models in low-resource domains. For example, Fadaee et al. (2017) obtain substantial improvements for low-resource machine translation via DA, where the translation system is trained with the bilingual pairs synthesized from a limited training corpus.
However, DA has not yet been fully explored in source code modeling, which is the intersection of ML and software engineering (SE). Source code modeling is an emerging area that applies ML techniques to solve various source code tasks such as code completion (Yin and Neubig, 2017), code summarization (McBurney and McMillan, 2014), and defect detection (Wang et al., 2016), by training models on a vast amount of data available in open-source repositories (Allamanis et al., 2017). Source code data typically has two modalities: the programming language (e.g., Python and Java) and the natural language (e.g., doc-strings and code comments), which complement each other. This dual-modality nature of source code data presents unique challenges in tailoring DA for NLP to source code models. For example, the context of a sentence can be relatively standalone or derived from a few surrounding sentences in many NLP tasks. However, in source code, the context can span across multiple functions or even different files, due to the widespread use of function calls, object-oriented programming, and modular design. Therefore, we argue that DA methods for source code would need to take this extended context into account, to avoid introducing errors or changing the original program's behavior. In addition, source code follows strict syntactic rules that are specified using context-free grammar. Consequently, conventional NLP data augmentation methods, such as token substitution with similar words, may make the augmented source code fail to compile and introduce erroneous knowledge for training models.
Despite such challenges, there has been increasing interest and demand for DA for source code models. With the growing accessibility of large, off-the-shelf, pre-trained source code models via learning from large-scale corpora (Chen et al., 2021; Li et al., 2023; Allal et al., 2023), there is a growing focus on applying these models to real-world software development. For instance, Husain et al. (2019) observe that many programming languages are low-resource, emphasizing the importance of DA to improve model performance and robustness on unseen data.
This study aims to bring attention from both ML and SE communities to this emerging field. As depicted in Figure 1, the relevant publications have been increasing in the recent five years. More precisely, we have compiled a list of 60 core papers from the past five years, mainly from premier conferences and journals in both the ML and SE disciplines (with 50 out of 60 papers published in Core Rank A/A* venues1). Given the escalating interest and burgeoning research in this domain, it is timely for our survey to (1) provide a comprehensive overview of DA for source code models, and (2) pinpoint key challenges and opportunities to stimulate and guide further exploration in this emerging field. To the best of our knowledge, our paper constitutes the first comprehensive survey offering an in-depth examination of DA techniques for source code models.
Footnote 1: We refer to the venues listed at [http://portal.core.edu.au/conf-ranks/](http://portal.core.edu.au/conf-ranks/) and [http://portal.core.edu.au/jnl-ranks/](http://portal.core.edu.au/jnl-ranks/).
The structure of this paper is organized as follows:
* Section 3 offers a thorough review of three categories of DA for source code models: rule-based (3.1), model-based (3.2), and example interpolation-based (3.3) techniques.
* Section 4 provides a summary of prevalent strategies and techniques designed to enhance the quality of augmented data, encompassing method stacking (4.1) and optimization (4.2).
* Section 5 articulates various beneficial source code scenarios for DA, including adversarial examples for robustness (5.1), low-resource domains (5.2), retrieval augmentation (5.3), and contrastive learning (5.4).
* Section 6 delineates DA methodologies for common source code tasks, such as code authorship attribution (6.1), clone detection (6.2), defect detection (6.3), code summarization (6.4), code search (6.5), code completion (6.6), code translation (6.7), code question answering (6.8), problem classification (6.9), method name prediction (6.10), and type prediction (6.11).
* Section 7 expounds on the challenges and future prospects in the realm of DA for source code models.
Through this work, we hope to emulate prior surveys which have analyzed DA techniques for other data types, such as text (Feng et al., 2021), time series (Wen et al., 2020), and images (Shorten and Khoshgoftaar, 2019). Our intention is to pique further interest, spark curiosity, and encourage further research in the field of data augmentation, specifically focusing on its application to source code.
## 2 Background
### What are source code models?
Source code models are trained on large-scale corpora of source code and are therefore able to model the contextual representations of given code snippets (Allamanis et al., 2017). In the early stage, researchers attempted to leverage deep learning architectures like LSTM (Gu et al., 2016) and Seq2Seq (Yin and Neubig, 2017) to model the source code like plain text, and showed that these models can achieve great performance on specific downstream tasks of source code. With the development of pre-trained language models in NLP, many pre-trained source code models have been proposed to enhance source code representations and be efficiently scaled to any downstream task (Feng et al., 2020; Guo et al., 2021; Nijkamp et al., 2023). Some of these models incorporate the inherent structure of code. For example, instead of taking the syntactic-level structure of source code like ASTs, Guo et al. (2021) consider program data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables. In this survey, we focus on DA methods designed for all deep-learning-based source code models.
Figure 1: Yearly publications on the topic of "DA for Source Code Models". Data statistics as of March 2023.
### What is data augmentation?
Data augmentation (DA) techniques aim to improve the model's performance in terms of various aspects (e.g., accuracy and robustness) by increasing training example diversity through data synthesis. Besides, DA techniques can help avoid model overfitting in the training stage, which maintains the generalizability of the model. In CV, DA techniques with predefined rules are commonly adopted when training models, such as image cropping, image flipping, and color jittering (Shorten and Khoshgoftaar, 2019). These techniques can be classified as _rule-based_ DA. Furthermore, some attempts like Mixup have been made to create new examples by fusing multiple examples together, which is categorized as _example interpolation_ DA. Compared to CV, DA techniques for NLP rely heavily on language models that can paraphrase the given context via word replacement or sentence rewriting (Feng et al., 2021). As most of these language models are pre-trained and can capture the semantics of inputs, they serve as reasonable frameworks to modify or paraphrase plain texts. We denote such DA methods as _model-based_ DA.
### How does data augmentation work in source code?
Compared to images and plain texts, source code is less flexible to augment due to its strict programming syntax rules. Hence, we observe that most DA approaches for source code must follow predetermined transformation rules in order to preserve the functionality and syntax of the original code snippets. To enable complex processing of the given source code, a common approach is to use a parser to build a concrete syntax tree from the code, which represents the program grammar in a tree-like form. The concrete syntax tree is further transformed into an abstract syntax tree (AST) to simplify the representation while maintaining the key information such as identifiers, if-else statements, and loop conditions. The parsed information is utilized as the basis of _rule-based_ DA approaches for identifier replacement and statement rewriting (Quiring et al., 2019). From a software engineering perspective, these DA approaches can emulate more diverse code representations found in real-world scenarios and thus make source code models more robust when trained with the augmented data (Yefet et al., 2020).
## 3 Data Augmentation Methods for Source Code Models
This section categorizes the mainstream DA techniques specifically designed for source code models into three parts: rule-based, model-based, and example-interpolation techniques. We explain studies of different branches as follows.
### Rule-based Techniques
A large number of DA methods utilize _predetermined rules_ to transform programs without breaking syntax rules and semantics. Specifically, these rules mainly leverage ASTs, implicitly or explicitly, to transform the code snippets. The transformations can include operations such as replacing variable names, renaming method names, and inserting dead code. Beyond the basic program syntax, some code transformations consider deeper structural information, such as the control-flow graph (CFG) and use-define chains (UDG) (Quiring et al., 2019). Additionally, a small part of rule-based DA techniques focuses on augmenting the natural language context in the code snippets, including doc-strings and comments (Bahrami et al., 2021; Song et al., 2022; Park et al., 2023). We illustrate a rule-based DA example relying on program grammars in Figure 2.
Zhang et al. (2020) propose MHM, a method that iteratively renames identifiers in code snippets. Regarded as an approach to generate examples for adversarial training, MHM greatly improves the robustness of source code models. Later, Srikant et al. consider program obfuscations as adversarial perturbations, where they rename program variables in an attempt to hide the program's intent from a reader. By applying these perturbed examples in the training stage, source code models become more robust to adversarial attacks. Instead of just renaming identifiers, BUGLAB-Aug (Allamanis et al., 2021) contains more rules
to augment code snippets, targeting both the programming language and the natural language, with rules such as comment deletion, comparison expression mirroring, and if-else branch swapping. The evaluation of \(\mathsf{BUGLAB\text{-}Aug}\) demonstrates that DA methods can be exploited for self-supervised bug detection and repair. Similarly, Jain et al. (2021) use compiler transforms as data augmentation, called \(\mathsf{Transpiler}\), to automatically generate a dataset of equivalent functions. Specifically, they define 11 compiler transforms by exploiting the ASTs of the programs. Rule-based DA has later been widely used for source code models to capture code representations effectively via contrastive learning (Ding et al., 2021; Liu et al., 2023).
Brockschmidt et al. (2019) present a generative source code model by augmenting the given AST with additional edges to learn diverse code expressions. Instead of direct augmentation on the AST, Quiring et al. (2019) propose three different augmentation schemes via the combination of the AST with the CFG, UDG and declaration-reference mapping (DRM), named \(\mathsf{Control}\) \(\mathsf{Transformations}\), \(\mathsf{Declaration}\) \(\mathsf{Transformations}\) and \(\mathsf{API}\) \(\mathsf{Transformations}\). \(\mathsf{Control}\) \(\mathsf{Transformations}\) rewrite control-flow statements or modify the control flow between functions; in total, this family contains 5 transformations, involving passing variables as function arguments, updating their values, and changing the control flow of the caller and callee. \(\mathsf{Declaration}\) \(\mathsf{Transformations}\) consist of 14 transformers that modify, add or remove declarations in source code. Since modified declarations require updating all usages of the affected variables, these transformations are elegantly carried out using the DRM representation.
API Transformations contain 9 transformations and exploit the fact that various APIs can be used to solve the same problem. Programmers are known to favor different APIs, and thus tampering with API usage is an effective strategy for changing stylistic patterns.
Another line of work augments the natural language context in source code. \(\mathsf{QRA}\) (Huang et al., 2021) augments examples by rewriting natural language queries when performing code search and code question answering. It rewrites queries with minor rule-based modifications that preserve the semantics of the original query. Specifically, it consists of three operations: randomly deleting a word, randomly switching the position of two words, and randomly copying a word. Inspired by this approach, Park et al. (2023) recently devised \(\mathsf{KeyDAC}\) with an emphasis on the query keywords. \(\mathsf{KeyDAC}\) augments both the natural language and the programming language. For natural language queries, it follows the rules in \(\mathsf{QRA}\) but only modifies non-keywords. In terms of programming language augmentation, \(\mathsf{KeyDAC}\) simply uses ASTs to rename program variables, similar to the aforementioned works.
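As a concrete illustration of the identifier-renaming transformations above, the following minimal Python sketch renames a local variable via the AST while preserving program behavior; it is illustrative only and not the implementation of any surveyed method.

```python
import ast

# Rename local variable usages listed in the mapping, leaving everything else intact.
class RenameVars(ast.NodeTransformer):
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        node.id = self.mapping.get(node.id, node.id)
        return node

src = "def add(a, b):\n    total = a + b\n    return total"
tree = RenameVars({"total": "var_0"}).visit(ast.parse(src))
print(ast.unparse(tree))  # a semantically equivalent, renamed variant
```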
### Model-based Techniques
A series of DA techniques for source code rely on training various models to augment data. Intuitively, Mi et al. (2021) utilize Auxiliary Classifier Generative Adversarial Networks (\(\mathsf{AC\text{-}GAN}\)) (Odena et al., 2017) to generate augmented programs. In order to increase the training data for code summarization, \(\mathsf{CDA\text{-}CS}\) (Song et al., 2022) uses the pre-trained BERT model (Devlin et al., 2019) to replace non-keywords in code comments with synonyms, which benefits source code downstream tasks.
While these methods largely adapt existing model-based DA techniques for general purposes, most DA approaches are specifically designed for source code models. Li et al. (2022) introduce \(\mathsf{IRGen}\), a genetic-algorithm-based model using compiler intermediate representation (LLVM IR) to augment source code embeddings, where \(\mathsf{IRGen}\) translates a piece of source code into a range of semantically identical but syntactically distinct IR codes to improve the model's contextual understanding. Ahmad et al. (2023) investigate the suitability of multilingual generative source code models for unsupervised programming language translation via \(\mathsf{Back\text{-}translation}\), in a scope similar to that in NLP (Sennrich et al., 2016). However, unlike in NLP, Back-translation here is defined as translating between two programming languages via natural language as an intermediate language. Pinku et al. (2023) exploit another generative source code model, Transcoder (Roziere et al., 2020), to perform source-to-source translation for augmenting cross-language source code.
Figure 2: Rule-based DA to transform code snippets (Wang et al., 2022).
### Example Interpolation Techniques
Another category of data augmentation (DA) techniques, originated by Mixup (Zhang et al., 2018), involves interpolating the inputs and labels of two or more actual examples. For instance, given a binary classification task in CV and two images of a dog and a cat, DA approaches like Mixup can blend the two image inputs and their corresponding labels based on a randomly selected weight. This collection of methods is also termed Mixed Sample Data Augmentation. Despite trials in the context of text classification problems, such methods are hard to deploy in the realm of source code, as each code snippet is constrained by its unique program grammar and functionality.
In contrast to the aforementioned surface-level interpolation, the majority of example-interpolation DA methods are enhanced to fuse multiple real examples into a single input via model embeddings (Feng et al., 2021). As illustrated in Figure 3, Dong et al. (2023) merge rule-based techniques for source code models with Mixup to blend the representations of the original code snippet and its transformation. This approach is commonly regarded as the linear interpolation technique deployed in NLP classification tasks.
Li et al. (2022) introduce two novel interpolation techniques for source code models, namely Binary Interpolation and Linear Extrapolation. Binary Interpolation serves as a data augmentation strategy, which interchangeably swaps features between samples using elements acquired from a Bernoulli distribution. On the other hand, Linear Extrapolation is another data augmentation approach that generates new data points beyond the existing feature space by extending current features in accordance with a uniform distribution.
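A minimal sketch of such embedding-level interpolation, in the spirit of Mixup; the embedding dimension, soft labels, and Beta-distributed weight are placeholders rather than any surveyed method's exact formulation.

```python
import torch

# Blend the embeddings (and labels) of an example and its counterpart
# with a weight sampled from a Beta distribution.
def mixup(emb_a, emb_b, y_a, y_b, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * emb_a + (1 - lam) * emb_b, lam * y_a + (1 - lam) * y_b

emb, label = mixup(torch.randn(768), torch.randn(768),
                   torch.tensor(1.0), torch.tensor(0.0))
```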
## 4 Strategies and Techniques
In real-world applications, the design and efficacy of DA techniques for source code models are influenced by a variety of factors, such as computing cost, example diversity, and models' robustness. This section highlights these factors, offering insights and techniques for devising and optimizing suitable DA methods.
### Method Stacking
As discussed in Section 3, numerous DA strategies are proposed concurrently in a single work, aiming to enhance the models' performance. Method stacking refers to combining several such DA methods when augmenting the same training data. Typically, the combination entails two types: same-type DA or a mixture of different DA methods. The former is typically applied in rule-based DA techniques, stemming from the realization that a single code transformation cannot fully represent the diverse code styles and implementations found in the real world.
Several works (Shi et al., 2023; Huang et al., 2021) demonstrate that merging multiple types of DA techniques can enhance the performance of source code models. Mi et al. (2021) combine rule-based code transformation schemes with model-based DA using AC-GAN to create an augmented corpus for model training. Instead of augmenting the programming language, CDA-CS (Song et al., 2022) encompasses two kinds of DA techniques: rule-based non-keyword extraction and model-based non-keyword replacement. Empirical evidence from Chen and Lampouras (2023) shows that combining Back-translation and variable renaming can result in improved code completion performance.
### Optimization
In certain scenarios, such as enhancing robustness and minimizing computational cost, optimally selecting specific augmented example candidates is crucial. We denote such goal-oriented candidate selection in DA as _optimization_. Subsequently, we introduce three types of strategies: probabilistic, model-based, and rule-based selection. Probabilistic selection is defined as optimization via sampling from a probability distribution, while model-based selection is guided by the model to select the most suitable examples. Rule-based selection, in turn, is an optimization strategy where specific predetermined rules or heuristics are used to select the most suitable examples.
Figure 3: MixCode (Dong et al., 2023).
#### 4.2.1 Probabilistic Selection
We introduce three representative probabilistic selection strategies: MHM, QMDP, and BUGLAB-Aug. MHM (Zhang et al., 2020) adopts the Metropolis-Hastings probabilistic sampling method, a Markov Chain Monte Carlo technique, to choose adversarial examples via identifier replacement. Similarly, QMDP (Tian et al., 2021) uses a Q-learning approach to strategically select and execute rule-based structural transformations on the source code, thereby guiding the generation of adversarial examples. In BUGLAB-Aug, Allamanis et al. (2021) model the probability of applying a specific rewrite rule at a location in a code snippet, similar to the pointer net (Merity et al., 2020).
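A minimal sketch of Metropolis-Hastings-style candidate selection in the spirit of MHM; `propose` and `loss` are hypothetical callables standing in for an identifier-renaming proposal and the victim model's loss.

```python
import math
import random

# Accept a proposed renaming with a probability that grows with the loss
# increase it induces on the victim model (Metropolis-Hastings acceptance).
def mh_select(program, propose, loss, steps=50):
    current = program
    for _ in range(steps):
        candidate = propose(current)
        delta = loss(candidate) - loss(current)
        if delta >= 0 or random.random() < math.exp(delta):
            current = candidate
    return current
```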
#### 4.2.2 Model-based Selection
Several DA techniques employing this strategy use the model's gradient information to guide the selection of augmented examples. An emblematic approach is the DAMP method (Yefet et al., 2020), which optimizes based on the model loss to select and generate adversarial examples via variable renaming. Another variant, SPACE (Li et al., 2022), performs selection and perturbation of code identifiers' embeddings via gradient ascent, aiming to maximize the impact on the model's performance while upholding the semantic and grammatical correctness of the programming language. A more complex technique, ALERT (Yang et al., 2022), uses a genetic algorithm in its gradient-based selection strategy. It evolves a population of candidate solutions iteratively, guided by a fitness function that calculates the model's confidence difference, aiming to identify the most potent adversarial examples.
#### 4.2.3 Rule-based Selection
Rule-based selection stands as a powerful approach, featuring predetermined fitness functions or rules. This method often relies on evaluation metrics for decision-making. For instance, IRGen (Li et al., 2022) utilizes a genetic-algorithm-based optimization technique with a fitness function based on IR similarity. On the other hand, ACCENT (Zhou et al., 2022) and RADAR apply evaluation metrics such as BLEU (Papineni et al., 2002) and CodeBLEU (Ren et al., 2020), respectively, to guide the selection and replacement process, aiming for maximum adversarial impact. Finally, STRATA (Springer et al., 2021) employs a rule-based technique to select high-impact subtokens that significantly alter the model's interpretation of the code.
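A minimal sketch of such metric-guided, rule-based selection; `model` and `metric` (e.g., a BLEU-like similarity scorer) are hypothetical callables.

```python
# Among rule-generated candidates, keep the one whose model output
# degrades the reference metric the most (maximum adversarial impact).
def select_adversarial(candidates, reference, model, metric):
    return min(candidates, key=lambda cand: metric(model(cand), reference))
```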
## 5 Scenarios
This section delves into several commonplace source code scenarios where DA approaches can be applied.
### Adversarial Examples for Robustness
Robustness presents a critical and complex dimension of software engineering, necessitating the creation of semantics-preserving adversarial examples to discern and mitigate vulnerabilities within source code models. Recent years have seen a surge in designing more effective DA techniques for generating these examples. Several studies (Yefet et al., 2020; Li et al., 2022; Srikant et al., 2022) have utilized rule-based DA methods for testing and enhancing model robustness. Wang et al. (2023) have gone a step further to consolidate universally accepted code transformation rules to establish a benchmark for source code model robustness.
### Low-Resource Domains
In the domain of software engineering, the resources of programming languages are severely imbalanced (Orlanski et al., 2023). While a few of the most popular programming languages like Python and Java play major roles in open-source repositories, many less popular ones are starkly low-resource. As source code models are trained on open-source repositories and forums, this resource imbalance can adversely impact their performance on resource-scarce programming languages. Furthermore, the application of DA methods within low-resource domains is a recurrent theme within the CV and NLP communities (Shorten and Khoshgoftaar, 2019; Feng et al., 2019, 2021). Yet, this scenario remains underexplored within the source code discipline.
In order to increase data in the low-resource domain for representation learning, Li et al. (2022) aim to add more training data to enhance source code model embeddings by unleashing the power of compiler IR. Ahmad et al. (2023) propose to use source code models to perform Back-translation DA, taking into consideration the scenario of low-resource programming languages. Meanwhile, Chen and Lampouras (2023) underscore the fact that source code datasets are markedly smaller than their NLP equivalents, which often encompass millions of instances. As a result, they commence investigations into code completion tasks under this context and experiment with Back-translation and variable renaming. Shen et al. contend that the generation of bash comments is hampered by a dearth of training data and thus explore model-based DA methods for this task.
### Retrieval Augmentation
Increasing interest has been observed in the application of DA for retrieval augmentation within NLP (Mialon et al., 2023) and source code (Lu et al., 2022). These retrieval augmentation frameworks for source code models incorporate retrieval-augmented examples from the training set when pre-training or fine-tuning source code models. This form of augmentation enhances the parameter efficiency of models, as they are able to store less knowledge within their parameters and instead retrieve it.
\begin{table}
\begin{tabular}{l|c c c c c c c c c}
**DA Method** & Category & PL & NL & Optimization & Preprocess & Parsing & Level & TA & LA \\ \hline
ComputeEdge (Brockschmidt et al., 2019) & Rule & ✓ & ✗ & — & — & AST & AST & ✓ & ✓ \\
RefineRepresentation (Bielik and Vechev, 2020) & Rule & ✓ & ✗ & Model & — & AST & AST & ✓ & ✓ \\
Control Transformations (Quiring et al., 2019) & Rule & ✓ & ✗ & Prob & — & AST+CFG+UDG & Input & ✓ & ✗ \\
Declaration Transformations (Quiring et al., 2019) & Rule & ✓ & ✗ & Prob & — & AST+DRM & Input & ✓ & ✗ \\
API Transformations (Quiring et al., 2019) & Rule & ✓ & ✗ & Prob & — & AST+CFG+DRM & Input & ✓ & ✗ \\
DAMP (Yefet et al., 2020) & Rule & ✓ & ✗ & Model & — & AST & Input & ✓ & ✓ \\
TBA (Huang et al., 2021) & Rule & ✗ & ✓ & — & Tok & — & Embed & ✗ & ✓ \\
QRA (Huang et al., 2021) & Rule & ✗ & ✓ & — & Tok & — & Input & ✗ & ✓ \\
MHM (Zhang et al., 2020) & Rule & ✓ & ✗ & Prob & — & AST & Input & ✗ & ✓ \\
Mossad (Devore-McDonald and Berger, 2020) & Rule & ✓ & ✗ & Rule & Tok & AST & Input & ✓ & ✓ \\
AugmentedCode (Bahrami et al., 2021) & Rule & ✗ & ✓ & — & Tok & — & Input & ✗ & ✓ \\
QMDP (Tian et al., 2021) & Rule & ✓ & ✗ & Prob & — & AST & Input & ✗ & ✗ \\
Transpiler (Jain et al., 2021) & Rule & ✓ & ✗ & Prob & — & AST & Input & ✓ & ✗ \\
BUGLAB-Aug (Allamanis et al., 2021) & Rule & ✓ & ✓ & Prob & — & AST & Input & ✗ & ✓ \\
SPAT (Yu et al., 2022b) & Rule & ✓ & ✗ & Model & — & AST & Input & ✓ & ✗ \\
RoPGen (Li et al., 2022) & Rule & ✓ & ✗ & Model & — & AST & Input & ✗ & ✓ \\
ACCENT (Zhou et al., 2022) & Rule & ✓ & ✗ & Rule & — & AST & Input & ✓ & ✓ \\
SPACE (Li et al., 2022c) & Rule & ✓ & ✗ & Model & Tok & AST & Embed & ✓ & ✓ \\
ALERT (Yang et al., 2022) & Rule & ✓ & ✗ & Model & Tok & AST & Input & ✓ & ✓ \\
IRGen (Li et al., 2022) & Rule & ✓ & ✗ & Rule & — & AST+IR & IR & ✓ & ✓ \\
Binary Interpolation (Li et al., 2022a) & EI & ✓ & ✓ & — & — & — & Embed & ✓ & ✓ \\
Linear Extrapolation (Li et al., 2022a) & EI & ✓ & ✓ & — & — & — & Embed & ✓ & ✓ \\
Gaussian Scaling (Li et al., 2022a) & Rule & ✓ & ✓ & Model & — & — & Embed & ✓ & ✓ \\
CodeTransformer (Zubkov et al., 2022) & Rule & ✓ & ✗ & Rule & — & AST & Input & ✓ & ✗ \\
RADAR (Yang et al., 2022a) & Rule & ✓ & ✗ & Rule & — & AST & Input & ✓ & ✗ \\
AC-GAN (Mi et al., 2021) & Model & ✓ & ✗ & — & — & — & Input & ✓ & ✓ \\
CDA-CS (Song et al., 2022) & Model & ✗ & ✓ & Model & KWE & — & Input & ✗ & ✓ \\
srcML-embed (Li et al., 2022c) & Rule & ✓ & ✗ & — & — & AST & Embed & ✓ & ✗ \\
MultIPAs (Orvalho et al., 2022) & Rule & ✓ & ✗ & — & — & AST & Input & ✓ & ✗ \\
ProgramTransformer (Rabin and Alipour, 2022) & Rule & ✓ & ✗ & — & — & AST & Input & ✓ & ✗ \\
Back-translation (Ahmad et al., 2023) & Model & ✓ & ✗ & — & Tok & — & Input & ✗ & ✓ \\
MixCode (Dong et al., 2023a) & Rule+EI & ✓ & ✓ & — & — & — & Embed & ✓ & ✓ \\
WD-GD (Shen et al.) & Model & ✓ & ✗ & Model & Tok & — & Embed & ✓ & ✓ \\
ExploitGen (Yang et al., 2023) & Rule & ✗ & ✓ & — & — & — & Input & ✓ & ✗ \\
SoDa (Shi et al., 2023) & Model & ✓ & ✓ & — & — & AST & Input & ✓ & ✓ \\
Transcompiler (Pinku et al., 2023) & Model & ✓ & ✗ & — & — & — & Input & ✓ & ✗ \\
STRATA (Springer et al., 2021) & Rule & ✓ & ✗ & Rule & Tok & AST & Input & ✓ & ✓ \\
KeyDAC (Park et al., 2023) & Rule & ✓ & ✓ & — & KWE & AST & Embed & ✗ & ✓ \\
Simplex Interpolation (Zhang et al., 2022) & EI & ✓ & ✗ & — & — & AST+IR & Embed & ✗ & ✓ \\
\end{tabular}
\end{table}
Table 1: Comparing a selection of DA methods by various aspects relating to their applicability, dependencies, and requirements. _PL_, _NL_, _EI_, _Prob_, _Tok_, _KWE_, _TA_, and _LA_ stand for Programming Language, Natural Language, Example Interpolation, Probability, Tokenization, Keyword Extraction, Task-Agnostic, and Language-Agnostic. _PL_ and _NL_ indicate whether the DA method is applied to the programming language or the natural language context. _Preprocess_ denotes preprocessing required besides program parsing. _Parsing_ refers to the type of feature used by the DA method during program parsing. _Level_ denotes the depth at which data is modified by the DA. _TA_ and _LA_ represent whether the DA method can be applied across different tasks or programming languages. As most papers do not clearly state whether their DA methods are _TA_ and _LA_, we denote the applicability based on our own judgment.
It is shown as a promising application of DA in various source code downstream tasks, such as code summarization [22, 23], code completion [27] and program repair [24].
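A minimal sketch of the idea for code summarization: retrieve the most similar training pair and prepend it to the model input; `embed` and `corpus` are hypothetical stand-ins for an encoder and the training set.

```python
import numpy as np

# Prepend the most similar retrieved (code, summary) pair to the query input.
def retrieve_augment(query_code, corpus, embed):
    q = embed(query_code)  # assume unit-norm embedding vectors
    scores = [float(np.dot(q, embed(code))) for code, _ in corpus]
    code, summary = corpus[int(np.argmax(scores))]
    return f"{code}\n{summary}\n{query_code}"
```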
### Contrastive Learning
Another source code scenario in which to deploy DA methods is contrastive learning, which enables models to learn an embedding space in which similar samples are close to each other while dissimilar ones are far apart [28, 29, 25]. As training datasets commonly contain limited sets of positive samples, DA methods are preferred for constructing similar samples as the positives. Liu et al. (2020) make use of contrastive learning with DA to devise superior pre-training paradigms for source code models, while other works study the advantages of this application in source code tasks like defect detection [20], clone detection [23, 24] and code search [25, 26, 27].
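A minimal sketch of this setup: each snippet's rule-based transformation serves as its positive and the rest of the batch as negatives, scored with an InfoNCE-style loss; batch size, dimension, and temperature are placeholders.

```python
import torch
import torch.nn.functional as F

# InfoNCE over normalized embeddings: positives sit on the diagonal of the
# original-vs-augmented similarity matrix, batch mates act as negatives.
def info_nce(z_orig, z_aug, temperature=0.07):
    z1 = F.normalize(z_orig, dim=-1)       # (batch, dim)
    z2 = F.normalize(z_aug, dim=-1)
    logits = z1 @ z2.T / temperature       # pairwise similarity matrix
    labels = torch.arange(z1.size(0))      # index of each positive
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
```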
## 6 Downstream Tasks
In this section, we discuss several DA works for common source code tasks and evaluation datasets.
### Code Authorship Attribution
Code authorship attribution is the process of identifying the author of a given piece of code, usually achieved by source code models. Yang et al. (2020) initially investigate generating adversarial examples on the _Google Code Jam_ (GCJ) dataset, which effectively fool source code models into identifying the wrong author of a given code snippet. By training with these augmented examples, the model's robustness can be further improved. Li et al. (2022) propose another DA method, called RoPGen, for the adversarial attack and demonstrate its efficacy on GCJ. Dong et al. (2023) empirically study the effectiveness of several existing DA approaches for NLP on several source code tasks, including authorship attribution on _GCJ_.
### Clone Detection
Code clone detection refers to the task of identifying whether a given code snippet is cloned and modified from an original sample, and can be called plagiarism detection in some cases. This is a challenging downstream task, as it requires the source code model to understand the source code both syntactically and semantically. Jain et al. (2021) propose correct-by-construction DA via compiler information to generate many variants with equivalent functionality of each training sample and show its effectiveness in improving model robustness on _BigCloneBench_ [28] and a self-collected JavaScript dataset. Jia et al. (2023) show that when training with adversarial examples generated via obfuscation transformations, the robustness of source code models can be significantly improved. Zubkov et al. (2022) provide a comparison of multiple contrastive learning approaches, combined with rule-based transformations, for the clone detection task. Pinku et al. (2023) later use Transcompiler to translate between limited source code in Python and Java and thereby increase the training data for cross-language code clone detection.
### Defect Detection
Defect detection, in other words bug or vulnerability detection, aims to capture bugs in given code snippets. The task can be considered a binary classification task, where the labels are either true or false. Allamanis et al. (2021) implement BUGLAB-Aug, a DA framework for self-supervised bug detection and repair. BUGLAB-Aug has two sets of code transformation rules: one is a bug-inducing rewrite, and the other is rewriting as DA. Their approach boosts the performance and robustness of source code models simultaneously. Cheng et al. (2022) present a path-sensitive code embedding technique called ContraFlow, which uses self-supervised contrastive learning to detect defects based on value-flow paths. ContraFlow utilizes DA to generate contrastive value-flow representations of three datasets (namely _D2A_ [25], _Fan_ [26] and _FFmpeg+Qemu_ [27]) to learn the (dis)similarity among programs. Ding et al. (2021) present a novel self-supervised model focusing on identifying (dis)similar functionalities of source code, which outperforms the state-of-the-art models on _REVEAL_ [1] and _FFmpeg+Qemu_ [27]. Specifically, they design code transformation heuristics to automatically create bugged programs and similar code for augmenting the pre-training data.
### Code Summarization
Code summarization is the task of generating a comment for a piece of source code, and is thus also named code comment generation. Zhang et al. (2020) apply MHM to perturb training examples and mix them with the original ones for adversarial training, which effectively improves the robustness of source code models in summarizing adversarial code snippets. Zhang et al. (2020) develop a retrieval-augmentation framework for code summarization, relying on similar code-summary pairs to generate the new summary on the _PCSD_ and _JCSD_ datasets (Miceli-Barone and Sennrich, 2017; Hu et al., 2018). Based on this framework, Liu et al. (2018) leverage a hybrid GNN to propose a novel retrieval-augmented code summarization method and use it during model training on the self-collected CCSD dataset. Zhou et al. (2022) generate adversarial examples of a Python dataset (Wan et al., 2018) and _JCSD_ to evaluate and enhance source code model robustness.
### Code Search
Code search, or code retrieval, is a text-code task that searches for code snippets based on given natural language queries. Source code models for this task need to map the semantics of the text to the source code. Bahrami et al. (2021) increase the code search queries by augmenting the natural language context such as doc-strings, code comments, and commit messages. Shi et al. (2022) use AST-focused DA to replace the function and variable names of the data in _CodeSearchNet_ (Husain et al., 2019) and _CoSQA_ (Huang et al., 2021). Shi et al. (2023) introduce soft data augmentation (SoDa), without external transformation rules on code and text. With SoDa, the model predicts tokens based on dynamic masking or replacement when processing _CodeSearchNet_. Instead of applying rule-based DA techniques, Li et al. (2022) manipulate the representation of the input data by interpolating examples of _CodeSearchNet_.
### Code Completion
Code completion requires source code models to generate lines of code to complete given programming challenges. Anand et al. suggest that source code models are vulnerable to adversarial examples perturbed with transformation rules. Lu et al. (2022) propose a retrieval-augmented code completion framework composed of a rule-based DA module, evaluated on the _PY150_ (Raychev et al., 2016) and _GitHub Java Corpus_ (Allamanis and Sutton, 2013) datasets. Wang et al. (2023) customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format, and benchmark generative source code models on _HumanEval_ (Chen et al., 2021) and _MBPP_ (Austin et al., 2021). Yang et al. (2022) devise transformations on functional descriptions and signatures to attack source code models and show that their performance is susceptible.
### Code Translation
Similar to neural machine translation in NLP (Stahlberg, 2020), this task translates source code written in one programming language into another. Ahmad et al. (2023) apply data augmentation through back-translation to enhance unsupervised code translation. They use pre-trained sequence-to-sequence models to translate code into natural language summaries and then back into code in a different programming language, thereby creating additional synthetic training data to improve model performance. Chen and Lampouras (2023) utilize Back-translation and variable augmentation techniques to yield improvements in code translation on _CodeTrans_ (Lu et al., 2021).
### Code Question Answering (CQA)
CQA can be formulated as a task where source code models are required to generate a textual answer given a code snippet and a question. Huang et al. (2021) incorporate two rule-based DA methods on code and text to create examples for contrastive learning. Li et al. (2022) explore the efficacy of adversarial training on the continuous embedding space with rule-based DA on _CodeQA_ (Liu and Wan, 2021), a free-form CQA dataset. Park et al. (2023) evaluate KeyDAC, a framework using query rewriting and variable renaming as DA, on _WebQueryTest_ of CodeXGLUE (Lu et al., 2021). Different from _CodeQA_, _WebQueryTest_ is a CQA benchmark containing only Yes/No questions.
### Code Classification
This task performs the categorization of programs regarding their functionality. Wang et al. (2022) propose a novel AST hierarchy representation for contrastive learning with graph neural networks. Specifically, they augment the node embeddings in
AST paths on _OJ_, a dataset containing 104 classes of programs. Zhang et al. (2022) incorporate simplex interpolation, an example-interpolation DA approach on IR, to create intermediate embeddings on _POJ-104_ from CodeXGLUE (Lu et al., 2021). Dong et al. (2023) also explore the example-interpolation DA to fuse the embeddings of code snippets. They evaluate the method on two datasets, _JAVA250_ and _Python800_(Puri et al., 2021).
### Method Name Prediction
The goal of method name prediction is to predict the name of a method given its program. Yefet et al. (2020) attack and defend source code models using variable-name-replaced adversarial programs on the _Code2Seq_ dataset (Alon et al., 2019). Pour et al. (2021) propose a search-based testing framework specifically for adversarial robustness. They generate adversarial examples of Java with ten popular refactoring operators. Rabin et al. (2021) and Yu et al. (2022) both implement data augmentation frameworks and various transformation rules for processing Java source code on the _Code2Seq_ dataset.
### Type Prediction
Type prediction, or type inference, aims to predict parameter and function types in programs. Bielik and Vechev (2020) conduct adversarial attacks on source code models with examples of transformed ASTs. They instantiate the attack on type prediction for JavaScript and TypeScript. Jain et al. (2021) apply compiler transforms to generate many variants of programs in DeepTyper (Hellendoorn et al., 2018) with equivalent functionality, using 11 rules. Li et al. (2022) incorporate srcML (Collard et al., 2013) meta-grammar embeddings to augment the syntactic features of examples in three datasets: _DeepTyper_, _Typilus Data_ (Allamanis et al., 2020) and _CodeSearchNet_ (Husain et al., 2019).
## 7 Challenges and Opportunities
When it comes to source code, DA faces significant challenges. Nonetheless, it's crucial to acknowledge that these challenges pave the way for new possibilities and exciting opportunities in this area of work.
Discussion on theory. Currently, there is a noticeable gap in the in-depth exploration and theoretical understanding of DA methods in source code. Most existing research on DA is centered around image processing and natural language fields, viewing data augmentation as a way of applying pre-existing knowledge about data or task invariance (Dao et al., 2019; Wu et al., 2020; Shi et al., 2022). When shifting to source code, much of the previous work introduces new methods or demonstrates how DA techniques can be effective for downstream tasks. However, these studies often overlook why and how DA works, particularly from a mathematical perspective. With source code being discrete by nature, a theoretical discussion becomes even more important. It would allow us to understand DA from a broader perspective, not just by looking at experimental results. By exploring DA in this way, we can better understand its underlying principles without being solely dependent on experimental validation.
More study on pre-trained models. In recent years, pre-trained source code models have been widely applied in source code tasks, containing rich knowledge obtained through self-supervision on huge corpora (Feng et al., 2020; Guo et al., 2021; Zhuo, 2023). Numerous studies have utilized pre-trained source code models for the purpose of DA, yet most of these attempts are confined to mask-token replacement (Shi et al., 2023) or direct generation after fine-tuning (Ahmad et al., 2023; Pinku et al., 2023). An emergent research opportunity lies in exploring the potential of DA in the source code domain with the help of large language models (LLMs) trained on large amounts of text and source code (Chen et al., 2021; Li et al., 2023). LLMs are capable of context generation based on prompted instructions and provided examples, making them a natural choice to automate the DA process in NLP (Yoo et al., 2021; Wang et al., 2021). Different from previous usages of pre-trained models in DA, these works open the era of "prompt-based DA". In contrast, the exploration of prompt-based DA in source code domains remains a relatively untouched research area. Another direction is to harness the internal knowledge encoded in pre-trained source code models. For example, Karmakar and Robbes (2021) and Wan et al. (2022) show that ASTs and code semantics can be induced from these models without static analysis tools. As most DA methods for source code models tend to predefine the code transformation rules via program analysis, it is expected that the programming knowledge inside these pre-trained source code
models can automate the rule designs.
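A minimal sketch of what prompt-based DA for code could look like (our illustration, not a method proposed in any of the surveyed works; `query_llm` is a placeholder for an arbitrary chat-style LLM endpoint, and the parsing is deliberately schematic):

```python
def build_augmentation_prompt(code: str, n_variants: int = 3) -> str:
    # ask the model for behavior-preserving rewrites of a code snippet
    return (
        f"Rewrite the following function {n_variants} times. Each rewrite "
        "must preserve the original behavior but vary identifier names, "
        "control flow, and formatting. Wrap each rewrite in <code> tags.\n\n"
        f"<code>{code}</code>"
    )

def augment(code: str, query_llm) -> list[str]:
    # query_llm: any Callable[[str], str]; reply parsing is schematic
    reply = query_llm(build_augmentation_prompt(code))
    chunks = reply.split("<code>")[1:]
    return [c.split("</code>")[0].strip() for c in chunks if "</code>" in c]
```

In practice the returned variants would still need to be filtered, e.g., by compilation or unit tests, before being added to a training set.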
Working with domain-specific data.Our paper focuses on surveying DA techniques for common downstream tasks involving processing source code. However, we are aware that there are a few works on other task-specific data in the field of source code. For instance, API recommendation and API sequence generation can be considered part of source code tasks Huang et al. (2018); Gu et al. (2016). The DA methods covered by our survey cannot be directly generalized to these tasks, as most of them target program-level augmentation but not API-level augmentation. We observe a gap in DA techniques between these two layers Treude and Robillard (2016); Xu et al. (2020); Wang et al. (2021), which provides opportunities for future work. Additionally, DA for out-of-distribution generalization has not been fully justified in source code modeling. Previous studies Hajipour et al. (2022); Hu et al. (2022) define the domain as programs with different complexity, syntax, and semantics. We argue that this definition is not natural enough. Similar to subdomains in NLP, such as biomedical and financial texts, the application subdomains of source code can be diverse. For example, programs that solve data science problems can differ significantly from those for web design. We encourage the SE and ML communities to study the benefits of DA when applied to various application subdomains of source code.
More exploration on project-level source code and low-resource programming languages.Existing methods have made good progress on function-level code snippets and common programming languages. However, the emphasis on code snippets at the function level fails to capture the intricacies and complexities of programming in real-world scenarios, where developers often work with multiple files and folders simultaneously. We therefore highlight the importance of exploring DA approaches at the project level. DA on source code projects can be distinct from function-level DA, as it may involve more information, such as the interdependencies between different code modules, high-level architectural considerations, and the often intricate relationships between data structures and algorithms used across the project Mockus et al. (2002). At the same time, limited by data resources Husain et al. (2019); Orlanski et al. (2023), augmentation methods for low-resource programming languages are scarce, even though such languages have a greater need for DA. Exploration in these two directions is still limited, and both could be promising.
Mitigating social bias.As source code models have advanced software development, they may be used to develop human-centric applications such as human resources and education, where biased programs may result in unjustified and unethical decisions for underrepresented people Zhuo et al. (2023). While social bias in NLP has been well studied and can be mitigated with DA Feng et al. (2021), social bias in source code has not yet been brought to attention. For example, Zhuo et al. (2023) and Liu et al. (2023) find that LLMs of source code exhibit severe bias across demographics such as gender, sexuality, and occupation when performing code generation from natural language queries. To make these models more responsible, we urge more research on mitigating bias in source code. As prior work in NLP suggests, DA may be an effective technique for making source code models more responsible.
Few-shot learning.In few-shot scenarios, models are required to achieve performance rivaling that of traditional machine learning models, yet the amount of training data is extremely limited. DA methods provide a direct solution to this problem. However, few works in few-shot scenarios have adopted DA methods Nashid et al. (2023). Mainstream pre-trained source code models obtain rich semantic knowledge through language modeling. Such knowledge even covers, to some extent, the semantic information introduced by traditional paraphrasing-based DA methods. In other words, the room for improvement that traditional DA methods offer pre-trained source code models has been greatly compressed. It is therefore an interesting question how to provide models with fast generalization and problem-solving capability by generating high-quality augmented data in few-shot scenarios.
Multimodal applications.Beyond purely textual settings, applications that pair source code with other modalities have also been developed. Wang et al. (2021) and Liu et al. (2023a) explore chart derendering with an emphasis on source code and corresponding APIs. Suris et al. (2023) propose a framework to generate Python programs to solve complex visual tasks involving images and videos. Although such multimodal applications are increasingly popular, no study has yet been conducted on applying DA methods to them. A potential challenge for multimodal source code tasks is to effectively bridge the embedding representations of each modality in source code models, which has been investigated in vision-language multimodal tasks Ray et al. (2019); Tang et al. (2020); Hao et al. (2023).
Lack of unification.The current body of literature on data augmentation (DA) for source code presents a challenging landscape, with the most popular methods often being described only in a supplementary manner. A handful of empirical studies have sought to compare DA methods for source code models de Paula Rodrigues et al. (2023); Dong et al. (2023); however, none of these works leverages most of the existing advanced DA methods for source code models. Whereas there are well-accepted frameworks for DA in CV (e.g., the default augmentation libraries in PyTorch and RandAugment Cubuk et al. (2020)) and for DA in NLP (e.g., NL-Augmenter Dhole et al. (2021)), a corresponding library of generalized DA techniques for source code models is conspicuously absent. Furthermore, as existing DA methods are usually evaluated on different datasets, it is hard to truly determine their efficacy. We therefore posit that the progression of DA research would be greatly facilitated by the establishment of standardized and unified benchmark tasks, along with datasets for contrasting and evaluating the effectiveness of different augmentation methods. This would pave the way towards a more systematic and comparative understanding of the benefits and limitations of these methods.
## 8 Conclusion
Our paper comprehensively analyzes data augmentation techniques in the context of source code. We first explain the concept of data augmentation and its function. We then examine the primary data augmentation methods commonly employed in source code research and explore augmentation approaches for typical source code applications and tasks. Finally, we conclude by outlining the current challenges in the field and suggesting potential directions for future source code research. In presenting this paper, we aim to assist source code researchers in selecting appropriate data augmentation techniques and encourage further exploration and advancement in this field.
## Limitations
While the work presented in this paper has its merits, we acknowledge several limitations. Firstly, our work only surveys imperative programming languages used for general-purpose programming and does not cover DA methods for declarative languages, including SQL Zhuo et al. (2023). Secondly, our focus has been primarily on function-level DA within the source code context; as such, there remains a need for future development of project-level DA methods. Nonetheless, this paper offers a valuable collection of general-purpose DA techniques for source code models, and we hope that it can serve as an inspiration for further research in this area. Thirdly, given the page limits, the descriptions presented in this survey are necessarily brief. Our approach has been to present the works in meaningful, structured groups rather than unstructured sequences, to ensure comprehensive coverage; this work can thus be used as an index, with more detailed information to be found in the corresponding works. Lastly, it is worth noting that this survey is purely qualitative and does not include any experiments or empirical results. To provide more meaningful guidance, it would be helpful to conduct comparative experiments across different DA strategies. We leave this as a suggestion for future work.
|
2308.16402 | GDD type Spanning Bipartite Block Designs | There is a one-to-one correspondence between the point set of a group
divisible design (GDD) with $v_1$ groups of $v_2$ points and the edge set of a
complete bipartite graph $K_{v_1,v_2}$. A block of GDD corresponds to a
subgraph of $K_{v_1,v_2}$. A set of subgraphs of $K_{v_1,v_2}$ is constructed
from a block set of GDDs. If the GDD satisfies the $\lambda_1, \lambda_2$
concurrence condition, then the set of subgraphs also satisfies the spanning
bipartite block design (SBBD) conditions. We also propose a method to construct
SBBD directly from an $(r,\lambda)$-design and a difference matrix over a
group. Suppose the $(r,\lambda)$-design consists of $v_2$ points and $v_1$
blocks. When $v_1 >> v_2$, we show a method to construct a SBBD with $v_1$
close to $v_2$ by partitioning the block set. | Shoko Chisaki, Ryoh Fuji-Hara, Nobuko Miyamoto | 2023-08-31T02:12:08Z | http://arxiv.org/abs/2308.16402v1 | # GDD type Spanning Bipartite Block Designs
###### Abstract
There is a one-to-one correspondence between the point set of a group divisible design (GDD) with \(v_{1}\) groups of \(v_{2}\) points and the edge set of a complete bipartite graph \(K_{v_{1},v_{2}}\). A block of GDD corresponds to a subgraph of \(K_{v_{1},v_{2}}\). A set of subgraphs of \(K_{v_{1},v_{2}}\) is constructed from a block set of GDDs. If the GDD satisfies the \(\lambda_{1},\lambda_{2}\) concurrence condition, then the set of subgraphs also satisfies the spanning bipartite block design (SBBD) conditions [3]. We also propose a method to construct SBBD directly from an \((r,\lambda)\)-design and a difference matrix over a group. Suppose the \((r,\lambda)\)-design consists of \(v_{2}\) points and \(v_{1}\) blocks. When \(v_{1}>>v_{2}\), we show a method to construct a SBBD with \(v_{1}\) close to \(v_{2}\) by partitioning the block set.
**Keyword.** group divisible design, \((r,\lambda)\)-design, difference matrix, spanning bipartite block design
**AMS classification. 05B05, 05B10, 51E30**
## 1 Introduction
Let \(V_{1}=\{1,2,\ldots,v_{1}\}\) and \(V_{2}=\{1,2,\ldots,v_{2}\}\) be disjoint two point sets, and \(E=\{e_{ij}\,|\,i\in V_{1},j\in V_{2}\}\) be the edge set between \(V_{1}\) and \(V_{2}\). \(K_{v_{1},v_{2}}=(V_{1},V_{2}\,;\,E)\) is the complete bipartite graph with \(v_{1},v_{2}\) point sets. Let \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{N}\}\) be a collection of subgraphs of \(K_{v_{1},v_{2}}\) called spanning bipartite blocks (SB-blocks). If \(\mathcal{B}\) satisfies the following five conditions, then we call \((K_{v_{1},v_{2}}\ ;\ \mathcal{B})\) a _spanning bipartite block design_ (SBBD):
1. Each SB-block \(B_{i}\) of \(\mathcal{B}\) is incident with all points of \(V_{1}\) and \(V_{2}\) (spanning condition).
2. Each edge of \(K_{v_{1},v_{2}}\) appears in \(\mathcal{B}\) exactly \(\mu\) times.
3. Any two edges \(e_{ij},e_{ij^{\prime}}\) such that \(i\in V_{1}\), \(j,j^{\prime}\in V_{2},(j\neq j^{\prime})\) are contained together in \(\lambda_{12}\) SB-blocks in \(\mathcal{B}\).
4. Any two edges \(e_{ij},e_{i^{\prime}j}\) such that \(i,i^{\prime}\in V_{1},(i\neq i^{\prime})\), \(j\in V_{2}\) are contained together in \(\lambda_{21}\) SB-blocks in \(\mathcal{B}\).
5. Any two edges \(e_{ij}\), \(e_{i^{\prime}j^{\prime}}\) such that \(i,i^{\prime}\in V_{1},(i\neq i^{\prime})\), \(j,j^{\prime}\in V_{2},(j\neq j^{\prime})\) are contained together in \(\lambda_{22}\) SB-blocks in \(\mathcal{B}\).
A spanning bipartite block design was first proposed in Chisaki et al. [3]. This design is for a statistical model estimating treatment parameters with the structure of a complete bipartite graph. In [3], it is proved that SBBDs satisfying certain conditions are A-optimal. SBBD can also be used as a kind of sparsification method to prevent over-fitting in deep learning. Compared with the random DropConnect method of Wan et al. [18], the spanning condition plays an important role in sparsifying neural networks, since it drops connections independently at each layer, and the balancing properties work to reduce the variances of the weight estimators.
There is a similar block design called a _balanced bipartite block design_ (BBBD). A BBBD \((V_{1},V_{2}:\mathcal{B})\) is defined below:
1: let \(K_{v_{1},v_{2}}\) be a complete bipartite graph with \(V_{1}\), and \(V_{2}\) point sets, \(|V_{1}|=v_{1},|V_{2}|=v_{2}\),
2: \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{N}\}\) be a set of complete sub-bipartite graphs \(K_{k_{1},k_{2}}\) of \(K_{v_{1},v_{2}}\) (called blocks) and block size is \(k=k_{1}+k_{2}\).
3: for any \(t_{1}\) points from \(V_{1}\) and \(t_{2}\) points from \(V_{2}\), there are exactly \(\mu_{t_{1},t_{2}}\) blocks in \(\mathcal{B}\) containing those points.
Although this design is similar in name to SBBD, the blocks of a BBBD are all complete sub-bipartite graphs \(K_{k_{1},k_{2}}\). In Chisaki et al. [4, 5], designs different from SBBD were proposed for the same deep learning purpose; those designs are rather close to BBBD. Many papers, including Kageyama and Sinha [9], Mishima et al. [12] and Jaggi et al. [7], show constructions satisfying the third condition for \(\mu_{2,0},\mu_{0,2}\) and \(\mu_{1,1}\). Ozawa et al. [13] show constructions of BBBD (called _split-block designs_ in [13]) satisfying the third condition of \(\mu_{t_{1},t_{2}}\) for \(0\leq t_{1},t_{2}\leq 2\). Martin [11] defined a type of BBBD (called a _mixed \(t\)-design_) satisfying the third condition for any \(t_{1},t_{2}\) such that \(t_{1}+t_{2}=t\). He shows some constructions for \(t=2\) and \(t=3\).
In this paper, we show that an SBBD with a certain condition and a GDD can be considered equivalent, and propose a construction for the SBBDs using an \((r,\lambda)\)-design and a difference matrix. Additionally, we describe the E-optimality of SBBDs and show some examples.
## 2 Design matrix
In this section, we introduce a matrix representation of SBBDs and re-express the five conditions. First, we define a \((0,1)\)-matrix \(X\) from the SB-blocks called a _design matrix_.
* Suppose that the edges \(e_{ij}\) of \(K_{v_{1},v_{2}}\) are arranged in the following lexicographical order: \[(e_{11},e_{12},\ldots,e_{1v_{2}}\ ;\ e_{21},e_{22},\ldots,e_{2v_{2}}\ ;\ \cdots\ ;\ e_{v_{1}1},\ldots,e_{v_{1}v_{2}}).\] This sequence of edges corresponds to the columns of \(X\). Denote \((e_{ij})\) for the column number corresponding to the edge \(e_{ij}\).
* Put \(X=[x_{k,(e_{ij})}]\), then \(x_{k,(e_{ij})}\) is the element of the \(k\)-th row and the \((e_{ij})\)-th column of \(X\). The design matrix \(X\) is defined by the SB-blocks \(B_{1},B_{2},\ldots,B_{N}\) as follows: \[x_{k,(e_{ij})}=\begin{cases}1&\text{ if }\ e_{ij}\in B_{k}\\ 0&\text{ otherwise}\end{cases}\]
* \(X\) is an \((N\times v_{1}v_{2})\)-matrix.
Let \(X_{i}\) be an \((N\times v_{2})\)-submatrix consisting of \(v_{2}\) columns of \(X\) corresponding to \((e_{i1},e_{i2},\ldots,\)\(e_{iv_{2}})\). Then the design matrix \(X\) is partitioned into \(v_{1}\) submatrices expressed as \(X=(X_{1}|X_{2}|\cdots\ |X_{v_{1}})\). If \((K_{v_{1},v_{2}}\,;\,\mathcal{B})\) is a spanning bipartite block design then \(X=(X_{1}|X_{2}|\cdots|X_{v_{1}})\) has the following property:
1. any row of \(X_{i}\) is not zero-vector for \(1\leq i\leq v_{1}\) and \(\sum_{i=1}^{v_{1}}X_{i}\) does not contain a zero element (spanning condition),
2. \(\operatorname{diag}(X_{i}^{t}X_{i})=(\mu,\mu,\ldots,\mu)\) for \(1\leq i\leq v_{1}\),
3. all off-diagonal elements of \(X_{i}^{t}X_{i}\) are \(\lambda_{12}\) for \(1\leq i\leq v_{1}\),
4. \(\operatorname{diag}(X_{i}^{t}X_{j})=(\lambda_{21},\lambda_{21},\ldots,\lambda_{21})\) for \(1\leq i\neq j\leq v_{1}\),
5. all off-diagonal elements of \(X_{i}^{t}X_{j}\) are \(\lambda_{22}\) for \(1\leq i\neq j\leq v_{1}\).
\(X^{t}X\) is called an _information matrix_. The information matrix of SBBD is expressed as follows:
\[X^{t}X =I_{v_{1}}\otimes(X_{i}^{t}X_{i})+(J_{v_{1}}-I_{v_{1}})\otimes(X_ {i}^{t}X_{j})\] \[=I_{v_{1}}\otimes\left[\begin{array}{cccc}\mu&\lambda_{12}& \cdots&\lambda_{12}\\ \lambda_{12}&\mu&\cdots&\lambda_{12}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{12}&\lambda_{12}&\cdots&\mu\end{array}\right]+(J_{v_{1}}-I_{v_{1}}) \otimes\left[\begin{array}{cccc}\lambda_{21}&\lambda_{22}&\cdots&\lambda_{ 22}\\ \lambda_{22}&\lambda_{21}&\cdots&\lambda_{22}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{22}&\lambda_{22}&\cdots&\lambda_{21}\end{array}\right],\]
where \(I_{n}\) is the identity matrix of size \(n\) and \(J_{n}\) is the \((n\times n)\) all-ones matrix. A matrix expressed by \(aI_{n}+b(J_{n}-I_{n})\) is called _completely symmetric_. The information matrix above has a double structure of a completely symmetric matrix. The spanning bipartite block design is denoted as SBBD\((v_{1},v_{2},N;\Lambda)\), where \(\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})\).
**Example 2.1**.: _Let_
\[X=(X_{1}|X_{2}|X_{3})=\left[\begin{array}{ccc|ccc|ccc}0&1&1&1&1&0&1&1&0\\ 1&0&1&0&1&1&0&1&1\\ 1&1&0&1&0&1&1&0&1\\ 0&1&1&0&1&1&1&0&1\\ 1&0&1&1&0&1&1&1&0\\ 1&1&0&1&1&0&0&1&1\\ 1&0&1&1&1&0&1&0&1\\ 1&1&0&0&1&1&1&1&0\\ 0&1&1&1&0&1&0&1&1\end{array}\right]\]
_be a design matrix of an SBBD. Then the information matrix is_
\[X^{t}X=I_{3}\otimes\left[\begin{array}{ccc}6&3&3\\ 3&6&3\\ 3&3&6\end{array}\right]+(J_{3}-I_{3})\otimes\left[\begin{array}{ccc}4&4&4\\ 4&4&4\\ 4&4&4\end{array}\right].\]
_The design matrix \(X\) satisfies the spanning condition since no row of any \(X_{i}\) is a zero-vector, and \(X_{1}+X_{2}+X_{3}\) does not contain \(0\). So we have an SBBD(\(3,3,9;\Lambda\)), \(\Lambda=(6,3,4,4)\)._
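These conditions are easy to check mechanically. The following sketch (our illustration, assuming NumPy; it is not part of the original example) verifies the conditions for the design matrix above:

```python
import numpy as np

# design matrix of Example 2.1, partitioned as X = (X1 | X2 | X3)
X = np.array([
    [0,1,1, 1,1,0, 1,1,0],
    [1,0,1, 0,1,1, 0,1,1],
    [1,1,0, 1,0,1, 1,0,1],
    [0,1,1, 0,1,1, 1,0,1],
    [1,0,1, 1,0,1, 1,1,0],
    [1,1,0, 1,1,0, 0,1,1],
    [1,0,1, 1,1,0, 1,0,1],
    [1,1,0, 0,1,1, 1,1,0],
    [0,1,1, 1,0,1, 0,1,1],
])
v1 = v2 = 3
parts = [X[:, i*v2:(i+1)*v2] for i in range(v1)]

# spanning condition: no zero row in any X_i, no zero entry in sum of X_i
assert all((Xi.sum(axis=1) > 0).all() for Xi in parts)
assert (sum(parts) > 0).all()

# mu = 6, lambda_12 = 3 on diagonal blocks; lambda_21 = lambda_22 = 4 elsewhere
for i, Xi in enumerate(parts):
    for j, Xj in enumerate(parts):
        want = 6*np.eye(v2) + 3*(1 - np.eye(v2)) if i == j else 4*np.ones((v2, v2))
        assert (Xi.T @ Xj == want).all()
print("SBBD(3,3,9; (6,3,4,4)) verified")
```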
As you can see from the above example, the spanning condition cannot be confirmed from the information matrix \(X^{t}X\) alone. If \(v_{1}\ll v_{2}\), there is a high possibility that the spanning condition is not met. A design in which the spanning condition is not guaranteed is denoted by SBBD\({}^{*}\).
## 3 Group Divisible Designs and SBBDs
**Definition 3.1** (Group Divisible Design, see Beth et al. [1]).: _Let \(V\) be the \(v\)-point set which is partitioned into \(G_{1},G_{2},\ldots,G_{m}\), called groups, and \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{N}\}\) (blocks) is a collection of subsets of \(V\). If \((V,\mathcal{B})\) satisfies the following conditions, it is called a group divisible design or simply GDD:_
1. _any pair of distinct two points in the same group is contained in precisely_ \(\lambda_{1}\) _blocks._
2. _any pair of two points in distinct groups is contained in precisely_ \(\lambda_{2}\) _blocks._
_In this paper, we add the following two conditions:_
3. _each group has the same number of points,_ \(|G_{i}|=g\)_, for_ \(i=1,2,\ldots,m\)_, i.e._ \(v=mg\)_,_
4. _each point of_ \(V\) _is contained in exactly_ \(r\) _blocks, i.e._ \(r=(\sum_{i=1}^{N}|B_{i}|)/v\)_._
_It is denoted by GD\({}_{\lambda_{1},\lambda_{2}}(K,g\,;\,v)\), where \(K\) is the set of block sizes, or by GD\({}_{\lambda_{1},\lambda_{2}}(k,g\,;\,v)\) if \(K=\{k\}\). A GD\({}_{0,\lambda}(m,g\,;\,mg)\) is said to be a transversal design or an orthogonal array._
**Property 3.2** (Bose and Connor [2]).: _The parameters of GD\({}_{\lambda_{1},\lambda_{2}}(k,g\,;\,v)\) with \(N\) blocks and \(v=mg\) have the following relation:_
\[kN=vr,\ \ (g-1)\lambda_{1}+g(m-1)\lambda_{2}=r(k-1),\ \ r\geq\lambda_{1}, \lambda_{2}.\]
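As a quick illustration of these relations (our arithmetic, using the design of Example 3.5 below): there \(k=6\), \(g=3\), \(m=3\), \(v=9\), \(N=9\), \(r=6\), \(\lambda_{1}=3\), \(\lambda_{2}=4\), and indeed \(kN=54=vr\) and \((g-1)\lambda_{1}+g(m-1)\lambda_{2}=6+24=30=r(k-1)\).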
Let \(V_{1}=\{1,2,\ldots,v_{1}\}\) and \(V_{2}=\{1,2,\ldots,v_{2}\}\). Consider the complete bipartite graph \(K_{v_{1},v_{2}}=(V_{1},V_{2};E)\), where the edge set is \(E=\{e_{ij}\,|\,\,i\in V_{1},j\in V_{2}\}\). Let \(V=\{p_{11},p_{12},\ldots,p_{v_{1}v_{2}}\}\) be the point set of a \(GD_{\lambda_{1},\lambda_{2}}(k,v_{2};v_{1}v_{2})\) with \(v_{1}\) groups, where \(G_{i}=\{p_{i1},p_{i2},\ldots,p_{iv_{2}}\}\). Then there is a one-to-one correspondence between the point set \(V\) and the edge set \(E\) of \(K_{v_{1},v_{2}}\) such as:
\[p_{ij}\in V\Leftrightarrow e_{ij}\in E.\]
From this correspondence, a block of GDD is considered an SB-block. A GDD satisfies the conditions of SBBD except for the spanning condition. We can easily see the following result:
**Property 3.3**.: _If \((V,\mathcal{B})\) is a GD\({}_{\lambda_{1},\lambda_{2}}(K,v_{2}\,;\,v_{1}v_{2})\), then it is also an SBBD*\((v_{1},v_{2},N;\Lambda)\), \(\Lambda=(\mu,\lambda_{12},\lambda_{21},\)\(\lambda_{22})\) with the following relations:_
\[r=\mu\,\ \lambda_{1}=\lambda_{12}\,\ \lambda_{2}=\lambda_{21}=\lambda_{22}.\]
If a GDD satisfies the following conditions, the SBBD\({}^{*}\) is an SBBD:
* For every block \(B\in\mathcal{B}\) and every group \(G_{i}\), \(|B\cap G_{i}|\geq 1\),
* Every element of \(V_{2}\) appears at least once in the set of the second subscripts of points in \(B\) for every block \(B\in\mathcal{B}\), i.e. \(\{j\ |\ p_{ij}\in B\}=V_{2}\).
A GDD not satisfying the second condition may be able to adjust to satisfy the spanning condition using the following property:
**Property 3.4**.: _Let \(\delta\) be a permutation on \(\{1,2,\ldots,v_{2}\}\). Even if the points within a group \(G_{i}=\{p_{i1},p_{i2},\ldots,\)\(p_{iv_{2}}\}\) are rearranged by \(\delta\) as:_
\[\{p_{i\delta(1)},p_{i\delta(2)},\ldots,p_{i\delta(v_{2})}\},\]
_they remain a GDD with the same parameters._
An SBBD satisfying \(\lambda_{21}=\lambda_{22}\) is called a _GDD-type_ SBBD.
**Example 3.5**.: _Consider GD\({}_{3,4}(6,3\,;9),N=9\). The points of the groups are represented here as \(G_{1}=\{1_{1},1_{2},1_{3}\},\ \ G_{2}=\{2_{1},2_{2},2_{3}\},\ G_{3}=\{3_{1},3_{2},3_{3}\}\), and the blocks are:_
\[B_{1}=\{1_{2},1_{3}\ ;\ 2_{2},2_{3}\ ;\ 3_{2},3_{3}\},\ \ B_{2}=\{1_{1},1_{3}\ ;\ 2 _{1},2_{3}\ ;\ 3_{1},3_{3}\},\ \ B_{3}=\{1_{1},1_{2}\ ;\ 2_{1},2_{2}\ ;\ 3_{1},3_{2}\},\] \[B_{4}=\{1_{1},1_{3}\ ;\ 2_{1},2_{2}\ ;\ 3_{2},3_{3}\},\ \ B_{5}=\{1_{2},1_{3}\ ;\ 2_{1},2_{3}\ ;\ 3_{1},3_{2}\},\ \ B_{6}=\{1_{1},1_{2}\ ;\ 2_{2},2_{3}\ ;\ 3_{1},3_{3}\},\] \[B_{7}=\{1_{1},1_{2}\ ;\ 2_{1},2_{3}\ ;\ 3_{2},3_{3}\},\ \ B_{8}=\{1_{2},1_{3}\ ;\ 2 _{1},2_{2}\ ;\ 3_{1},3_{3}\},\ \ B_{9}=\{1_{1},1_{3}\ ;\ 2_{2},2_{3}\ ;\ 3_{1},3_{2}\}.\]
_This is from AG\((2,3)\), the group is a parallel class of lines, and the blocks are the complement of lines that transverse the parallel lines. Let \(\psi(B)=\{j\,|\,i_{j}\in B\}\). \(\psi(B_{1})\) is missing 1, \(\psi(B_{2})\) is missing 2 and \(\psi(B_{3})\) is
_missing 3. This does not satisfy the spanning conditions. By a cyclic permutation \(\delta=(123)\) on the subscripts of \(G_{3}\) points, i.e. \(3_{1}\mapsto 3_{2},\ 3_{2}\mapsto 3_{3},\ 3_{3}\mapsto 3_{1}\), we have the following GDD:_
\[B_{1} =\{1_{2},1_{3}\ ;\ 2_{2},2_{3}\ ;\ 3_{3},3_{1}\}, B_{2} =\{1_{1},1_{3}\ ;\ 2_{1},2_{3}\ ;\ 3_{2},3_{1}\}, B_{3} =\{1_{1},1_{2}\ ;\ 2_{1},2_{2}\ ;\ 3_{2},3_{3}\},\] \[B_{4} =\{1_{1},1_{3}\ ;\ 2_{1},2_{2}\ ;\ 3_{3},3_{1}\}, B_{5} =\{1_{2},1_{3}\ ;\ 2_{1},2_{3}\ ;\ 3_{2},3_{3}\}, B_{6} =\{1_{1},1_{2}\ ;\ 2_{2},2_{3}\ ;\ 3_{2},3_{1}\},\] \[B_{7} =\{1_{1},1_{2}\ ;\ 2_{1},2_{3}\ ;\ 3_{3},3_{1}\}, B_{8} =\{1_{2},1_{3}\ ;\ 2_{1},2_{2}\ ;\ 3_{2},3_{1}\}, B_{9} =\{1_{1},1_{3}\ ;\ 2_{2},2_{3}\ ;\ 3_{2},3_{3}\}.\]
_Their information matrices are both_
\[\mathbf{X}^{t}\mathbf{X}=I_{3}\otimes\left[\begin{array}{ccc}6&3&3\\ 3&6&3\\ 3&3&6\end{array}\right]+(J_{3}-I_{3})\otimes\left[\begin{array}{ccc}4&4&4\\ 4&4&4\\ 4&4&4\end{array}\right].\]
_The second example is a GDD-type SBBD\((3,3,9;\Lambda)\), \(\Lambda=(6,3,4,4)\)._
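The adjustment in this example is simple enough to replay in a few lines (our sketch in plain Python; a point \(i_{j}\) is written 0-indexed as the pair \((i-1,j-1)\)):

```python
# The nine blocks of GD_{3,4}(6,3;9) from Example 3.5.
blocks = [
    {(0,1),(0,2),(1,1),(1,2),(2,1),(2,2)},  # B1
    {(0,0),(0,2),(1,0),(1,2),(2,0),(2,2)},  # B2
    {(0,0),(0,1),(1,0),(1,1),(2,0),(2,1)},  # B3
    {(0,0),(0,2),(1,0),(1,1),(2,1),(2,2)},  # B4
    {(0,1),(0,2),(1,0),(1,2),(2,0),(2,1)},  # B5
    {(0,0),(0,1),(1,1),(1,2),(2,0),(2,2)},  # B6
    {(0,0),(0,1),(1,0),(1,2),(2,1),(2,2)},  # B7
    {(0,1),(0,2),(1,0),(1,1),(2,0),(2,2)},  # B8
    {(0,0),(0,2),(1,1),(1,2),(2,0),(2,1)},  # B9
]

def spanning(bs):
    # condition (i): every block meets all groups and all second subscripts
    return all({i for i, _ in B} == {0, 1, 2} and
               {j for _, j in B} == {0, 1, 2} for B in bs)

print(spanning(blocks))   # False: psi(B_1), psi(B_2), psi(B_3) each miss a subscript

# apply delta = (123) to the subscripts of group G_3 (Property 3.4)
adjusted = [{(i, (j + 1) % 3 if i == 2 else j) for i, j in B} for B in blocks]
print(spanning(adjusted))  # True: the adjusted GDD satisfies the spanning condition
```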
## 4 Construction from an \((r,\lambda)\)-design and a Difference Matrix
In this section, we show a construction of GDD type SBBD that is not from a group divisible design using an \((r,\lambda)\)-design and a difference matrix. Our idea for constructing SBBD consists of the following three steps:
First:We select an incidence matrix \(H\) of an \((r,\lambda)\)-design,
Second:Using the incidence matrix \(H\) as a seed, a set of matrices called tile matrices are generated by the operation of a group,
Third:A design matrix \(X\) of SBBD can be constructed by pasting the tile matrices on a combinatorial array called a difference matrix over the group.
**Definition 4.1** (\((r,\lambda)\)-design, Stanton and Mullin [15]).: _Let \(V\) be a \(v\)-point set and \(\mathcal{B}\) a collection of subsets (blocks) of \(V\). If \((V,\mathcal{B})\) holds the following conditions, it is called an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks:_
* _each point of_ \(V\) _is contained in exactly_ \(r\) _blocks of_ \(\mathcal{B}\)_,_
* _any two distinct points of_ \(V\) _are contained in precisely_ \(\lambda\) _blocks of_ \(\mathcal{B}\)_._
If the block size is a constant \(k\) for each block, then it is called a _balanced incomplete block design_, denoted by \((v,k,\lambda)\)-BIBD, and if \(|V|=|\mathcal{B}|\), it is called _symmetric design_, then \(r=k\).
**Definition 4.2** (Difference Matrix over a group \(\mathbf{E}_{b}\), Jungnickel [8]).: _Let \(D=[d_{ij}]\) be an \((\eta b\times s)\)-matrix over a group \(\mathbf{E}_{b}\) of order \(b\), and \(D(i,j)=\{(d_{ki},d_{kj})\,|\,k=1,2,\ldots,\eta b\}\), \(1\leq i\neq j\leq s\). If the multi-set_
\[\{d-d^{\prime}\,|\,(d,d^{\prime})\in D(i,j)\,\}\]
_contains each element of \(\mathbf{E}_{b}\) precisely \(\eta\) times for any \(1\leq i\neq j\leq s\), then \(D\) is called a \((b,s;\eta)\)-difference matrix (DM) over \(\mathbf{E}_{b}\)._
If \(s=b\eta\), then \(D\) may be called a _generalized Hadamard matrix_. On difference matrices, we have the following well-known properties, see Beth et al. [1]:
**Property 4.3** (Beth et al. [1]).: _Let \(D\) be a difference matrix. A matrix \(D^{\prime}\) obtained by adding an element \(c\in\mathbf{E}_{b}\) to all elements of a column of \(D\) is also a difference matrix._
\[D^{\prime}=[d^{\prime}_{ij}]\text{ such that }d^{\prime}_{ij}\equiv d_{ij}+c\ \text{ for }i=1,2,\ldots,\eta b.\]
Using this property, it can be adjusted to satisfy the spanning condition of SBBD.
**Property 4.4** (Beth et al. [1]).: _For any prime power \(q\), there exists a \((q,q;1)\)-DM._
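For a prime \(q\), the \((q,q;1)\)-DM of Property 4.4 can be realized simply as the multiplication table of \(\mathbf{Z}_{q}\) (for a prime power one uses the finite field \(\mathbf{F}_{q}\) instead). A minimal sketch checking Definition 4.2 for \(q=7\) (our illustration):

```python
from collections import Counter

q = 7
D = [[(i * j) % q for j in range(q)] for i in range(q)]  # multiplication table of Z_7

# Definition 4.2: for every ordered column pair, the differences
# d_{ki} - d_{kj} hit each group element exactly eta = 1 times
for i in range(q):
    for j in range(q):
        if i != j:
            diffs = Counter((row[i] - row[j]) % q for row in D)
            assert all(diffs[g] == 1 for g in range(q))
print("D is a (7,7;1)-difference matrix over Z_7")
```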
Many examples of existence, such as \((r,\lambda)\)-designs and difference matrices, are shown in Colbourn and Dinitz [6].
Let \((V,\mathcal{B})\) be an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks,
\[H=\left[\begin{array}{c}\mathbf{h}_{x_{0}}\\ \mathbf{h}_{x_{1}}\\ \vdots\\ \mathbf{h}_{x_{b-1}}\end{array}\right],\]
where \(\mathbf{h}_{x_{0}},\mathbf{h}_{x_{1}},\ldots,\mathbf{h}_{x_{b-1}}\) are the row vectors of \(H\) and their subscripts are described by the elements of \(\mathbf{E}_{b}=\{x_{0},x_{1},\ldots,x_{b-1}\}\) arranged in a certain order. The tile matrix \(T_{y}\) is an array of rows rearranged by adding the element \(y\) of \(\mathbf{E}_{b}\) to the subscripts of each row of \(H\) as follows:
\[T_{y}=\left[\begin{array}{c}\mathbf{h}_{x_{0}+y}\\ \mathbf{h}_{x_{1}+y}\\ \vdots\\ \mathbf{h}_{x_{b-1}+y}\end{array}\right]\text{ for }y\in\mathbf{E}_{b}. \tag{1}\]
Assume \(x_{0}=0\) (identity) in \(\mathbf{E}_{b}\), that is, \(T_{x_{0}}=H\). Each \(T_{y}\) has the following properties:
* \(T_{y}\) is a \((b\times v)\)-matrix for \(y\in\mathbf{E}_{b}\),
* the set of rows of \(T_{y}\) is precisely equal to the set of rows of \(H\), which implies \[T_{y}^{t}\,T_{y}=H^{t}H=rI_{v}+\lambda(J_{v}-I_{v}),\text{ for any }y\in \mathbf{E}_{b},\] (2)
Then we have following equations about the tile matrix \(T_{y}\):
**Lemma 4.5**.: _For any \(x,y,d\in\mathbf{E}_{b}\), it holds_
\[T_{x}^{t}\,T_{y}=(T_{x+d})^{t}\,T_{y+d}\,. \tag{3}\]
_For any \(x\in\mathbf{E}_{b}\), it holds_
\[\sum_{y\in\mathbf{E}_{b}}T_{x}^{t}\,T_{y}=r^{2}J_{v}\,. \tag{4}\]
ProofLet \(\mathbf{E}_{b}=\{x_{0},x_{1},\ldots,x_{b-1}\}\) be a group of order \(b\). Let a pair of the \(x_{i}\)-th rows of \(T_{x}\) and \(T_{y}\) be \((\mathbf{h}_{x_{i}+x},\mathbf{h}_{x_{i}+y})\). A similar pair from \(T_{x+d}\) and \(T_{y+d}\) is described as \((\mathbf{h}_{y_{i}+x+d},\mathbf{h}_{y_{i}+y+d})\), \(y_{i}\in\mathbf{E}_{b}\). If \(y_{i}=x_{i}-d\), then these two pairs are equal. That is, the set of pairs \(\{(\mathbf{h}_{x_{i}+x},\mathbf{h}_{x_{i}+y})\,;\,x_{i}\in\mathbf{E}_{b}\}\) is the same as the set of pairs \(\{(\mathbf{h}_{y_{i}+x+d},\mathbf{h}_{y_{i}+y+d})\,;\,y_{i}\in\mathbf{E}_{b}\}\), which implies that
\[T_{x}^{t}\,T_{y}=(T_{x+d})^{t}\,T_{y+d}\,.\]
Next, it is easy to see that any row of \(\sum_{y\in\mathbf{E}_{b}}T_{y}\) equals to \(\sum_{i=0}^{b-1}\mathbf{h}_{x_{i}}\), and therefore \(\sum_{y\in\mathbf{E}_{b}}T_{y}=rJ_{b,v}\). Hence we have
\[\sum_{y\in\mathbf{E}_{b}}T_{x}^{t}\,T_{y}=r^{2}J_{v}\,.\]
Let \(D=[d_{i,j}]\) be an \((\eta b\times s)\)-matrix of \((b,s;\eta)\)-DM over \({\bf E}_{b}\). We paste the tile matrices \(T_{0},T_{x_{1}},\ldots,T_{x_{b-1}}\) on \(D\) to make a design matrix
\[X=[T_{d_{i,j}}]=(X_{1}|X_{2}|\cdots|X_{s}). \tag{5}\]
This \(X\) is an \((\eta b^{2}\times sv)\)-matrix, and
\[X_{j}=\begin{bmatrix}T_{d_{1,j}}\\ T_{d_{2,j}}\\ \vdots\\ T_{d_{\eta b,j}}\end{bmatrix}\quad\text{for $1\leq j\leq s$}. \tag{6}\]
We have the next theorem regarding each row of \(X\) as a new SB-block.
**Theorem 4.6**.: _If there exists an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks, and a \((b,s\,;\eta)\)-DM over \({\bf E}_{b}\), then we have a GDD-type spanning bipartite block design SBBD\({}^{*}(s,v,N\,;\Lambda)\), where_
\[N=\eta b^{2},\quad\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})=(\eta br,\,\eta b\lambda,\,\eta r^{2},\,\eta r^{2}).\]
_It has a \((\eta b^{2}\times sv)\)-design matrix. If \(s>b-r\), then it satisfies the spanning condition._
**Proof** Let \(H\) be the incidence matrix of an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks. Let \(T_{x_{0}},T_{x_{1}},\ldots,T_{x_{b-1}}\), \(x_{i}\in{\bf E}_{b}\) be the tile matrices defined in Equation (1) from \(H\). Suppose \(D=[d_{i,j}]\) is an \((\eta b\times s)\)-matrix of a \((b,s;\eta)\)-DM over \({\bf E}_{b}\). Let \(X\) be the design matrix constructed from \(T_{x_{0}},T_{x_{1}},\ldots,T_{x_{b-1}}\) using Equations (5) and (6). A diagonal submatrix of the information matrix \(X^{t}X\) is, from Equation (2):
\[X^{t}_{j}X_{j}=\sum_{i=1}^{\eta b}T^{t}_{d_{i,j}}T_{d_{i,j}}=\eta b\cdot(rI_{v }+\lambda(J_{v}-I_{v}))\ \ \text{for any $1\leq j\leq s$}.\]
Next, consider an off-diagonal submatrix \(X^{t}_{j}X_{j^{\prime}}\), \(j\neq j^{\prime}\). Let \(L_{d}=\{(x,y)\}\) be a set of pairs such that every difference \(d=x-y,d\in{\bf E}_{b}\) occurs exactly \(\eta\) times. From Lemma 4.5, we have
\[X^{t}_{j}X_{j^{\prime}}=\sum_{(x,y)\in L_{d}}T^{t}_{x}\,T_{y}=\eta\sum_{x\in{ \bf E}_{b}}T^{t}_{0}\,T_{x}\ =\eta\,r^{2}J_{v},\ j\neq j^{\prime}.\]
Each row of \(X\) is an SB-block of the form \(({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{s})\), where each \({\bf x}_{i}\) is a row of \(H\). If these \({\bf x}_{i}\)'s consist of all the rows of \(H\) (so \(s=b\)), then each row of \(\sum_{i=1}^{s}X_{i}\) is \((r,r,\ldots,r)\). For the spanning condition, zeros must not occur in this vector. If at least \(b-(r-1)\) different rows of \(H\) appear in an SB-block, then the spanning condition is guaranteed. When the spanning condition is not satisfied, the difference matrix can be adjusted using Property 4.3, in the same spirit as the permutation adjustment of Example 3.5. In the following example, an adjustment of a difference matrix over \({\bf F_{2}}^{3}\) will be seen.
**Example 4.7**.: _Consider a \((4,2)\)-design with 7 points and 8 blocks_
\[\{\{1,3,5\},\{0,3,4\},\{2,3,6\},\{0,1,2\},\{1,4,6\},\{0,5,6\},\{2,4,5\},\{0,1,2,3,4,5,6\}\}.\]
_Let \({\bf E}_{b}={\bf F}_{2}\times{\bf F}_{2}\times{\bf F}_{2}\). The incidence matrix is expressed as_
\[H=\begin{bmatrix}\mathbf{h}_{(0,0,0)}\\ \mathbf{h}_{(0,0,1)}\\ \mathbf{h}_{(0,1,0)}\\ \mathbf{h}_{(0,1,1)}\\ \mathbf{h}_{(1,0,0)}\\ \mathbf{h}_{(1,0,1)}\\ \mathbf{h}_{(1,1,0)}\\ \mathbf{h}_{(1,1,1)}\end{bmatrix}=\begin{bmatrix}0&1&0&1&0&1&0\\ 1&0&0&1&1&0&0\\ 0&0&1&1&0&0&1\\ 1&1&1&0&0&0&0\\ 0&1&0&0&1&0&1\\ 1&0&0&0&0&1&1\\ 0&0&1&0&1&1&0\\ 1&1&1&1&1&1&1\end{bmatrix}.\]
_Then it holds \(H^{t}H=4I_{7}+2(J_{7}-I_{7})\)._
_Using Equation (1), the tile matrices \(T_{(0,0,0)},T_{(1,0,0)},\ldots,T_{(1,1,1)}\) are as follows:_
\[T_{(0,0,0)}=H,\quad T_{(1,0,0)}=\begin{bmatrix}0&1&0&0&1&0&1\\ 1&0&0&0&0&1&1\\ 0&0&1&0&1&1&0\\ 1&1&1&1&1&1&1\\ 0&1&0&1&0&1&0\\ 1&0&0&1&1&0&0\\ 0&0&1&1&0&0&1\\ 1&1&1&0&0&0&0\end{bmatrix},\quad T_{(0,1,0)}=\begin{bmatrix}0&0&1&1&0&0&1\\ 1&1&1&0&0&0&0\\ 0&1&0&1&0&1&0\\ 1&0&0&1&1&0&0\\ 0&0&1&0&1&1&0\\ 1&1&1&1&1&1&1\\ 0&1&0&0&1&0&1\\ 1&0&0&0&0&1&1\end{bmatrix},\ldots.\]
_From Property 4.4, there is an \((8,8;1)\)-DM over \(\mathbf{F}_{2}\times\mathbf{F}_{2}\times\mathbf{F}_{2}\). The following difference matrix \(D\) is basically from the multiplication table over \(\mathbf{F}_{2^{3}}\), and the \(6\)-th, \(7\)-th, and \(8\)-th columns are added by \((1,0,0),(1,1,0)\), and \((1,0,1)\), respectively._
\[D=\begin{bmatrix}(0,0,0)&(0,0,0)&(0,0,0)&(0,0,0)&(0,0,0)&(1,0,0)&(1,1,0)&(1,0,1)\\ (0,0,0)&(1,0,0)&(0,1,0)&(1,1,0)&(0,0,1)&(0,0,1)&(1,0,1)&(0,1,0)\\ (0,0,0)&(0,1,0)&(0,0,1)&(0,1,1)&(1,1,0)&(0,0,0)&(0,0,1)&(0,0,0)\\ (0,0,0)&(1,1,0)&(0,1,1)&(1,0,1)&(1,1,1)&(1,0,1)&(0,1,0)&(1,1,1)\\ (0,0,0)&(0,0,1)&(1,1,0)&(1,1,1)&(0,1,1)&(1,1,0)&(0,1,1)&(0,0,1)\\ (0,0,0)&(1,0,1)&(1,0,0)&(0,0,1)&(0,1,0)&(0,1,1)&(0,0,0)&(1,1,0)\\ (0,0,0)&(0,1,1)&(1,1,1)&(1,0,0)&(1,0,1)&(0,1,0)&(1,0,0)&(1,0,0)\\ (0,0,0)&(1,1,1)&(1,0,1)&(0,1,0)&(1,0,0)&(1,1,1)&(1,1,1)&(0,1,1)\end{bmatrix}\]
_By pasting the tile matrices \(T_{(0,0,0)},T_{(1,0,0)},\ldots,T_{(1,1,1)}\) into the above difference matrix, we have a \(64\times 56\) design matrix \(X\), and the following \(56\times 56\) information matrix:_
\[X^{t}X=I_{8}\otimes\begin{bmatrix}32&16&16&16&16&16&16\\ 16&32&16&16&16&16&16\\ 16&16&32&16&16&16&16\\ 16&16&16&32&16&16&16\\ 16&16&16&16&32&16&16\\ 16&16&16&16&16&32&16\\ 16&16&16&16&16&16&32\end{bmatrix}+(J_{8}-I_{8})\otimes\begin{bmatrix}16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\\ 16&16&16&16&16&16&16\end{bmatrix}.\]
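The construction of Theorem 4.6 is easy to exercise numerically. The sketch below (our illustration, assuming NumPy) uses the Fano plane PG(2,2) — a \((3,1)\)-design with \(v=b=7\) — together with the multiplication-table DM over \(\mathbf{Z}_{7}\), two of whose columns are shifted as in Property 4.3; the shifts, like the choice of design, are assumptions made for this sketch, not data from the example above. It checks both the information matrix of Theorem 4.6, here \(\Lambda=(21,7,9,9)\), and the spanning condition:

```python
import numpy as np

b = v = 7
lines = [{s % 7, (s + 1) % 7, (s + 3) % 7} for s in range(7)]  # Fano plane: r=3, lam=1
H = np.array([[1 if p in L else 0 for p in range(v)] for L in lines])
r, lam = 3, 1

def tile(y):                       # Eq. (1): row x of T_y is row x+y of H
    return H[[(x + y) % b for x in range(b)], :]

shifts = [0, 0, 0, 0, 0, 1, 5]     # column shifts (Property 4.3), chosen for spanning
D = [[(p * c + shifts[c]) % b for c in range(b)] for p in range(b)]

X = np.block([[tile(D[p][c]) for c in range(b)] for p in range(b)])  # Eq. (5)

# Theorem 4.6: diagonal blocks b(r I + lam(J - I)), off-diagonal blocks r^2 J
M = X.T @ X
for j1 in range(b):
    for j2 in range(b):
        blk = M[j1*v:(j1+1)*v, j2*v:(j2+1)*v]
        want = b*(r*np.eye(v) + lam*(1 - np.eye(v))) if j1 == j2 else r*r*np.ones((v, v))
        assert (blk == want).all()

# spanning condition, checked for every SB-block (every row of X)
rows = X.reshape(b*b, b, v)
assert (rows.sum(axis=2) > 0).all() and (rows.sum(axis=1) > 0).all()
print("SBBD(7,7,49; (21,7,9,9)) verified")
```

Without the two column shifts, the all-zero first row of the multiplication table would repeat the same row of \(H\) seven times in each of its SB-blocks, and the spanning condition would fail — exactly the situation Property 4.3 is used to repair.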
Table 1 is a list of existing BIBDs with \(b\) blocks, where \(b\) is a prime power less than \(100\), selected from the table in Colbourn and Dinitz [6]. From Property 4.4, there exists a \((b,b;1)\)-DM over the group \(\mathbf{E}_{b}\). We can construct a GDD type SBBD\((b,v,b^{2};\Lambda)\), \(\Lambda=(br,\,b\lambda,\,r^{2},\,r^{2})\).
## 5 Decomposition method
Let \((V,\mathcal{B})\) be an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks. When \(b>>v\), the method described in Section 4 can only construct SBBDs in which \(v_{1}\) and \(v_{2}\) are significantly different. In this section, we propose a construction method that yields SBBDs in which \(v_{1}\) and \(v_{2}\) are relatively close, using an \((r,\lambda)\)-design with \(b>>v\).
Let \(\mathcal{B}_{1},\mathcal{B}_{2},\ldots,\mathcal{B}_{m}\) be a partition of \(\mathcal{B}\), where \(|\mathcal{B}_{i}|=b_{i}\) and each point of \(V\) is contained in \(\mathcal{B}_{i}\) at least once for \(i=1,2,\ldots,m\). Let \(\mathbf{E}_{b_{i}}^{(i)}\) be a group of order \(b_{i}\), \(1\leq i\leq m\). Then the \((b_{i}\times v)\)-incidence matrix \(H_{i}\) between \(\mathcal{B}_{i}\) and \(V\) is described as
\[H_{i}=\left[\begin{array}{c}\mathbf{h}_{x_{0}}^{(i)}\\ \mathbf{h}_{x_{1}}^{(i)}\\ \vdots\\ \mathbf{h}_{x_{b_{i}-1}}^{(i)}\end{array}\right],\text{ where }x_{j}\in\mathbf{E}_{b_{i}}^{(i)} \text{ for }1\leq i\leq m.\]
**Property 5.1**.: _If \((V,\mathcal{B})\) is an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks, then_
\[\sum_{i=1}^{m}H_{i}^{t}H_{i}=rI_{v}+\lambda(J_{v}-I_{v}).\]
For each \(H_{i}\), \(1\leq i\leq m\), we generate \(b_{i}\) tile matrices \(T_{y}^{(i)},y\in\mathbf{E}_{b_{i}}^{(i)}\), of the size \((b_{i}\times v)\) by adding an element of \(\mathbf{E}_{b_{i}}^{(i)}\) to the subscripts, same as Equation (1). Let
\[T_{y}^{(i)}=\left[\begin{array}{c}\mathbf{h}_{x_{0}+y}^{(i)}\\ \mathbf{h}_{x_{1}+y}^{(i)}\\ \vdots\\ \mathbf{h}_{x_{b_{i}-1}+y}^{(i)}\end{array}\right],\text{ where }y\in\mathbf{E}_{b_{i}}^{(i)} \tag{7}\]
It is not difficult to see the following for the tile matrices \(T_{y}^{(i)}\), \(y\in\mathbf{E}_{b_{i}}^{(i)}\):
* \(T_{y}^{(i)}\) is a \(b_{i}\times v\) matrix for any \(y\in\mathbf{E}_{b_{i}}^{(i)}\),
* \((T_{y}^{(i)})^{t}\,T_{y}^{(i)}=H_{i}^{t}H_{i}\) for any \(y\in\mathbf{E}_{b_{i}}^{(i)}\).
\begin{table}
\begin{tabular}{c c c c c l} \(v\) & \(b\) & \(r\) & \(k\) & \(\lambda\) & Remark \\ \hline\hline
7 & 7 & 3 & 3 & 1 & PG(2,2) \\
11 & 11 & 5 & 5 & 2 & \\
13 & 13 & 4 & 4 & 1 & PG(2,3) \\
19 & 19 & 9 & 9 & 4 & \\
23 & 23 & 11 & 11 & 5 & \\
25 & 25 & 9 & 9 & 3 & \\
27 & 27 & 13 & 13 & 6 & \(27=3^{3}\) \\
31 & 31 & 6 & 6 & 1 & PG(2,5) \\
31 & 31 & 10 & 10 & 3 & \\
31 & 31 & 15 & 15 & 7 & PG(4,2) \\
37 & 37 & 9 & 9 & 2 & \\
41 & 41 & 16 & 16 & 6 & \\
43 & 43 & 21 & 21 & 10 & \\
47 & 47 & 23 & 23 & 11 & \\
7 & 49 & 21 & 3 & 7 & \\
49 & 49 & 16 & 16 & 5 & \(49=7^{2}\) \\
59 & 59 & 29 & 29 & 14 & \\
61 & 61 & 16 & 16 & 4 & \\
61 & 61 & 25 & 25 & 10 & \\
67 & 67 & 33 & 33 & 16 & \\
71 & 71 & 15 & 15 & 3 & \\
71 & 71 & 21 & 21 & 6 & \\
71 & 71 & 35 & 35 & 17 & \\
73 & 73 & 9 & 9 & 1 & PG(2,8) \\
79 & 79 & 13 & 13 & 2 & \\
79 & 79 & 27 & 27 & 9 & \\
79 & 79 & 39 & 39 & 19 & \\ \hline
\end{tabular}
\end{table}
Table 1: BIBDs with a prime power number \(b\) of blocks
* the set of rows of \(T_{y}^{(i)}\) is exactly equal to the set of rows of \(H_{i}\) for any \(y\in\mathbf{E}_{b_{i}}^{(i)}\), which implies \[\sum_{i=1}^{m}(T_{y}^{(i)})^{t}\,T_{y}^{(i)}=\sum_{i=1}^{m}{H_{i}}^{t}H_{i}=rI_{ v}+\lambda(J_{v}-I_{v})\ \ \mbox{for any $y\in\mathbf{E}_{b_{i}}^{(i)}$}.\] (8)
**Lemma 5.2**.: _Let \((V,\mathcal{B})\) be an \((r,\lambda)\)-design and let \(T_{y}^{(i)}\), \(1\leq i\leq m\), \(y\in\mathbf{E}_{b_{i}}^{(i)}\), be the tile matrices of \(\mathcal{B}_{i}\), \(|\mathcal{B}_{i}|=b_{i}\), where \(\{\mathcal{B}_{1},\mathcal{B}_{2},\ldots,\mathcal{B}_{m}\}\) is a partition of \(\mathcal{B}\). If every element of \(V\) appears in \(\mathcal{B}_{i}\) exactly \(r_{i}\) times for \(1\leq i\leq m\), then the following two equations hold:_
\[(T_{x}^{(i)})^{t}\,T_{y}^{(i)}=(T_{x+d}^{(i)})^{t}\,T_{y+d}^{(i)} \ \ \mbox{for any $x,y,d\in\mathbf{E}_{b_{i}}^{(i)}$, $1\leq i\leq m$}, \tag{9}\] \[\sum_{z\in\mathbf{E}_{b_{i}}^{(i)}}(T_{y}^{(i)})^{t}\,T_{y+z}^{(i )}={r_{i}}^{2}J_{v}\ \ \mbox{for any $y\in\mathbf{E}_{b_{i}}^{(i)}$, $1\leq i\leq m$}, \tag{10}\]
**Proof** First, we prove Equation (9) from Equation (3). Suppose that \(H\) is divided into \(H_{1},H_{2},\ldots\), \(H_{m}\), and every column of \(H_{i}\) has \(r_{i}\) ones. Let \(W_{i}=\{T_{x+z}^{(i)}\,|\,z\in\mathbf{E}_{b_{i}}^{(i)}\}\) be the set of tile matrices produced from \(H_{i}\). Then each row of \(H_{i}\) appears exactly once in the same rows of tile matrices in \(W_{i}\). Using the same approach as in the proof of (3), we have
\[(T_{x}^{(i)})^{t}\,T_{y}^{(i)}=(T_{x+d}^{(i)})^{t}\,T_{y+d}^{(i)}\ \ \mbox{for any $x,y,d\in\mathbf{E}_{b_{i}}^{(i)}$, $1\leq i\leq m$}.\]
Next, Equation (10) can be proved in the same manner as the proof of Equation (4),
\[\sum_{z\in\mathbf{E}_{b_{i}}^{(i)}}(T_{y}^{(i)})^{t}\,T_{y+z}^{(i)}={r_{i}}^{ 2}J_{v}\ \ \mbox{for $y\in\mathbf{E}_{b_{i}}^{(i)}$}.\]
Let \(D^{(i)}=\left[d_{pq}^{(i)}\right]\) be a \((b_{i},s;\eta)\)-DM over \(\mathbf{E}_{b_{i}}^{(i)}\) of size \((\eta b_{i}\times s)\), \(1\leq i\leq m\). We paste the tile matrices \(T_{1}^{(i)},T_{2}^{(i)},\ldots,T_{b_{i}}^{(i)}\) on the difference matrix \(D^{(i)}=\left[d_{pq}^{(i)}\right]\), and denote it by
\[X^{(i)}=\left[\,T_{d_{pq}^{(i)}}^{(i)}\,\right]. \tag{11}\]
Then we have an \((\eta\sum_{i=1}^{m}b_{i}^{2}\times sv)\)-design matrix
\[X=\begin{bmatrix}X^{(1)}\\ X^{(2)}\\ \vdots\\ X^{(m)}\end{bmatrix}=(X_{1}|X_{2}|\cdots|X_{s}). \tag{12}\]
**Theorem 5.3**.: _If there is an \((r,\lambda)\)-design \((V,\mathcal{B})\) with \(v\) points and \(b\) blocks which is partitionable into \(\mathcal{B}_{1},\mathcal{B}_{2},\ldots,\mathcal{B}_{m}\) such that every point of \(V\) appears in \(\mathcal{B}_{i}\) exactly \(r_{i}\) times, and if there exist \((b_{i},s;\eta)\)-difference matrices, \(i=1,2,\ldots,m\), satisfying \(b=b_{1}+b_{2}+\cdots+b_{m}\), then there exists a GDD-type SBBD \((s,v,N;\Lambda)\), where \(N=\eta\sum_{i=1}^{m}b_{i}^{2}\) and_
\[\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})=(\eta br,\,\eta b\lambda, \,\eta\sum_{i=1}^{m}{r_{i}}^{2},\,\eta\sum_{i=1}^{m}{r_{i}}^{2}).\]
ProofFirst, we compute the diagonal submatrix \(X_{j}^{t}X_{j}\) of \(X^{t}X\). From Equation (8), we have
\[X_{j}^{t}\,X_{j}=\sum_{i=1}^{m}\sum_{p=1}^{\eta b_{i}}T_{d_{p,j}^{(i)}}^{(i)\,t} \,T_{d_{p,j}^{(i)}}^{(i)}=\eta b\cdot(rI_{v}+\lambda(J_{v}-I_{v}))\text{ for any }1\leq j\leq s.\]
Second, we compute an off-diagonal submatrix \(X_{j}^{t}X_{j^{\prime}},\;1\leq j\neq j^{\prime}\leq s.\) The following equation holds regardless of the elements of \(x_{i}\in\mathbf{E}_{b_{i}}^{(i)}\), \(1\leq i\leq m\), from Equations (9) and (10):
\[X_{j}^{t}X_{j^{\prime}}=\eta\sum_{i=1}^{m}\sum_{z\in\mathbf{E}_{b_{i}}^{(i)}} (T_{x_{i}}^{(i)})^{t}T_{x_{i}+z}^{(i)}=\eta\,\sum_{i=1}^{m}r_{i}^{2}J_{v}.\]
Suppose we want to have SBBDs of \(K_{v_{1},v_{2}}\) such that \(v_{1}\) and \(v_{2}\) are as close as possible. Let \((V,\mathcal{B})\) be an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks, and let \(\mathcal{B}_{1},\mathcal{B}_{2},\ldots,\mathcal{B}_{m}\) be a partition of \(\mathcal{B}\). When decomposing the block set, the following should be considered:
* \(b=b_{1}+b_{2}+\cdots+b_{m}\), where \(b_{i}=|\mathcal{B}_{i}|\), \(1\leq i\leq m\),
* every point of \(V\) appears in \(\mathcal{B}_{i}\) exactly \(r_{i}\) times, \(1\leq i\leq m\),
* each \(b_{i}\) is as close to \(v(=v_{2})\) as possible,
* each \(b_{i}\) is a prime or prime power (When it is hard to decompose into such \(b_{i}\)'s, we can have new \((r+1,\lambda+1)\)-design with \(b+1\) blocks by adding a block \(B_{b+1}=V\)),
* an integer \(s(=v_{1})\) in \((b_{i},s;\eta)\)-DM is \(s=\min\{b_{1},b_{2},\ldots,b_{m}\}\).
**Example 5.4**.: _Consider a \((5,3,3)\)-BIBD with \(10\) blocks. The set of blocks is divided into two parts, each consisting of 5 blocks. Their incidence matrices \(H_{1},H_{2}\) of those two parts are as follows:_
\[H_{1}=\begin{bmatrix}0&0&1&1&1\\ 1&0&0&1&1\\ 0&1&1&1&0\\ 1&1&0&0&1\\ 1&1&1&0&0\end{bmatrix},\;H_{2}=\begin{bmatrix}1&1&0&1&0\\ 1&0&1&1&0\\ 0&1&1&0&1\\ 1&0&1&0&1\\ 0&1&0&1&1\end{bmatrix}.\]
_Naturally, \(H_{1}{}^{t}H_{1}+H_{2}{}^{t}H_{2}=6I_{5}+3(J_{5}-I_{5}).\) Since \(b_{1}=b_{2}=5\), there exist the following difference matrices \(D^{(1)}\), \(D^{(2)}\) over the group \(\mathbf{Z}_{5}=\{0,1,2,3,4\}\):_
\[D^{(1)}=\begin{bmatrix}0&0&1&4&3\\ 0&1&3&2&2\\ 0&2&0&0&1\\ 0&3&2&3&0\\ 0&4&4&1&4\end{bmatrix},\;D^{(2)}=\begin{bmatrix}0&0&1&4&3\\ 0&1&3&2&2\\ 0&2&0&0&1\\ 0&3&2&3&0\\ 0&4&4&1&4\end{bmatrix}.\]
_From \(H_{1}\), we can produce tile matrices \(T_{0}^{(1)},T_{1}^{(1)},\ldots,T_{4}^{(1)}\) by the method of Equation (7),_
\[T_{0}^{(1)}=H_{1},\;T_{1}^{(1)}=\begin{bmatrix}1&0&0&1&1\\ 0&1&1&1&0\\ 1&1&0&0&1\\ 1&1&1&0&0\\ 0&0&1&1&1\end{bmatrix},\;T_{2}^{(1)}=\begin{bmatrix}0&1&1&1&0\\ 1&1&0&0&1\\ 1&1&1&0&0\\ 0&0&1&1&1\\ 1&0&0&1&1\end{bmatrix},\ldots,\;T_{4}^{(1)}=\begin{bmatrix}1&1&1&0&0\\ 0&0&1&1&1\\ 1&0&0&1&1\\ 0&1&1&1&0\\ 1&1&0&0&1\end{bmatrix},\]
_and similarly \(T_{0}^{(2)},T_{1}^{(2)},\ldots,T_{4}^{(2)}\) from \(H_{2}\). Finally, we paste the tile matrices \(T_{i}^{(1)}\) and \(T_{i}^{(2)}\) onto the difference matrices \(D^{(1)}\) and \(D^{(2)}\), respectively. Then we have a GDD type SBBD\((5,5,50;\Lambda)\), where \(\Lambda=(30,15,18,18)\). Its information matrix is_
\[X^{t}X=I_{5}\otimes\begin{bmatrix}30&15&15&15&15\\ 15&30&15&15&15\\ 15&15&30&15&15\\ 15&15&15&30&15\\ 15&15&15&15&30\end{bmatrix}+(J_{5}-I_{5})\otimes\begin{bmatrix}18&18&18&18&18\\ 18&18&18&18&18\\ 18&18&18&18&18\\ 18&18&18&18&18\\ 18&18&18&18&18\end{bmatrix}.\]
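Since every matrix in this example appears explicitly above, the whole decomposition construction can be replayed directly (our sketch, assuming NumPy):

```python
import numpy as np

H1 = np.array([[0,0,1,1,1],[1,0,0,1,1],[0,1,1,1,0],[1,1,0,0,1],[1,1,1,0,0]])
H2 = np.array([[1,1,0,1,0],[1,0,1,1,0],[0,1,1,0,1],[1,0,1,0,1],[0,1,0,1,1]])
D  = [[0,0,1,4,3],[0,1,3,2,2],[0,2,0,0,1],[0,3,2,3,0],[0,4,4,1,4]]
b = v = s = 5

def tile(H, y):  # Eq. (7): row x of T_y^{(i)} is row x+y of H_i
    return H[[(x + y) % b for x in range(b)], :]

# Eqs. (11)-(12): paste each part onto its DM, then stack the two parts
X = np.vstack([np.block([[tile(H, D[p][q]) for q in range(s)] for p in range(b)])
               for H in (H1, H2)])

M = X.T @ X  # should equal I_5 x (30 I + 15(J-I)) + (J_5 - I_5) x 18 J
for j1 in range(s):
    for j2 in range(s):
        blk = M[j1*v:(j1+1)*v, j2*v:(j2+1)*v]
        want = 30*np.eye(v) + 15*(1 - np.eye(v)) if j1 == j2 else 18*np.ones((v, v))
        assert (blk == want).all()

# the spanning condition also holds for every SB-block
rows = X.reshape(-1, s, v)
assert (rows.sum(axis=2) > 0).all() and (rows.sum(axis=1) > 0).all()
print("SBBD(5,5,50; (30,15,18,18)) verified")
```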
## 6 Optimal design and existence
Takeuchi [16] shows that a specific type of group divisible design is optimum in a statistical model. In this section, we discuss statistical models of the GDD-type SBBD and optimality.
Let \(\mathbf{y}\), \(\boldsymbol{\tau}\), and \(\boldsymbol{\epsilon}\) be vectors of data, main effects, and errors, respectively, and let \(\mu\) be the central effect. \(X=[x_{ij}]\) is an \(N\times v\)\((0,1)\)-matrix called a design matrix. Each observation is obtained as the sum of some effects. The model can then be represented as
\[\begin{split}\mathbf{y}&=\mu\mathbf{1}_{N}+X\boldsymbol{\tau}+ \boldsymbol{\epsilon},\\ \boldsymbol{\tau}^{t}\mathbf{1}_{v}&=0.\end{split} \tag{13}\]
When evaluating the efficiency of designs, the smaller the variance of the estimator, the better. The goodness can differ considerably among design matrices of the same size with the same number of ones. Since there is usually more than one estimator, there are several criteria for design optimality. Here we use a criterion of optimality called _E-optimality_.
**Definition 6.1** (E-optimality, Kiefer [10], Shah and Sinha [14]).: _Let \(\Omega\) be a class of \(N\times v\) (0,1)-matrices \(X\) having the same number of ones. If the following function attains its maximum value over \(\Omega\) at \(X\), then the design matrix \(X\) is called E-optimum relative to \(\Omega\):_
\[\min_{1\leq i\leq v-1}\{\theta_{i}\},\]
_where \(\theta_{1},\theta_{2},\ldots,\theta_{v-1}\), \(\theta_{i}>0\), are the eigenvalues of \(X^{t}X\)._
The optimality of group divisible designs is discussed in Takeuchi [16]. The statistical models for group divisible designs do not consider the group structure of the variety set \(V\) at all. That is, the model (13) is assumed with \(N\) blocks and \(v=mg\). Let \(\Omega\) be the class of \(N\times v\)\((0,1)\)-matrices \(X\) which contain exactly \(kN\) ones.
**Theorem 6.2** (Takeuchi [16, 17]).: _A group divisible design \(\text{GD}_{\lambda_{1},\lambda_{2}}(k,g\,;\,v)\) with \(\lambda_{2}=\lambda_{1}+1\) is E-optimum relative to \(\Omega\)._
Naturally, this theorem applies to SBBDs constructed from the GDDs.
**Theorem 6.3**.: _A GDD type SBBD\({}^{*}(v_{1},v_{2},N;\Lambda)\), \(\Lambda=(r,\lambda_{1},\lambda_{2},\lambda_{2})\), where \(v=v_{1}v_{2}\) and \(\lambda_{2}=\lambda_{1}+1\), is E-optimum relative to \(\Omega\)._
Many group divisible designs with \(\lambda_{2}=\lambda_{1}+1\) are known. We introduce some well-known constructions in this section. Suppose that \((V,\mathcal{B})\) is a \((v,k,1)\)-BIBD. Let \(\Pi=\{\Pi_{1},\Pi_{2},\ldots,\Pi_{n}\}\), \(|\Pi_{i}|=g\), be a partition of \(V\). Each block of \(\mathcal{B}\) intersects each \(\Pi_{i}\) in no point, in exactly one point, or in all points of the block. If \(\mathcal{B}^{\prime}\subset\mathcal{B}\) consists of the blocks not contained in any \(\Pi_{i}\), then \((V,\,\mathcal{B}^{\prime})\) is a \(GD_{0,1}(k,g\,;v)\).
**Example 6.4**.: _The points and the lines of \(\text{PG}(n,q)\), \(q\) a prime power, form a \(((q^{n+1}-1)/(q-1),q+1,1)\)-BIBD. There exists a parallel class of \(t\)-flats (equivalently, a \(t\)-spread of \(\text{PG}(n,q)\)) if and only if \((t+1)\mid(n+1)\). That is, there exists a \(GD_{0,1}(q+1,(q^{t+1}-1)/(q-1);v)\)._
**Example 6.5**.: _In \(\text{AG}(n,q)\), \(q\) a prime power, there is a parallel class of \(t\)-flats \(\Pi_{i}\) (\(\cong\text{AG}(t,q)\)) for \(1\leq t\leq n-1\). The points and the lines form a (\(q^{n},q,1\))-BIBD with a parallel class of \(\Pi_{i}\), \(|\Pi_{i}|=g=q^{t}\). That is, there is a \(GD_{0,1}(q,q^{t};q^{n})\)._
**Example 6.6**.: _For any \(q\) prime power, there is an orthogonal array \(\text{GD}_{0,1}(q+1,q;q(q+1))\)._
**Definition 6.7** (Complement design).: _The complement design of \((V,\mathcal{B})\) is \((V,\overline{\mathcal{B}})\), where \(\overline{\mathcal{B}}=\{V\backslash B\mid B\in\mathcal{B}\}\)._
**Property 6.8**.: _The complement design of \(GD_{\lambda_{1},\lambda_{2}}(k,v_{2};v_{1}v_{2})\) with \(N\) blocks is \(\text{GD}_{\lambda^{\prime}_{1},\lambda^{\prime}_{2}}(v-k,v_{2};\,v_{1}v_{2})\), where \(\lambda^{\prime}_{1}=N-2r+\lambda_{1}\), \(\lambda^{\prime}_{2}=N-2r+\lambda_{2}\) and \(r=kN/(v_{1}v_{2})\). Therefore if \(\lambda_{2}=\lambda_{1}+1\) then \(\lambda^{\prime}_{2}=\lambda^{\prime}_{1}+1\)._
**Property 6.9**.: _Let \(V\) be the point set of \(AG(n,q)\), \(n\geq 2\), \(q\) a prime power. The set of groups \(G_{1},G_{2},\ldots,G_{q}\) is a parallel class of hyperplanes. The block set \(\mathcal{B}\) is the set of hyperplanes that are not any of the \(G_{i}\). Then \((V,\mathcal{B})\) is a \(GD_{\lambda_{1},\lambda_{2}}(q^{n-1},q^{n-1};\,q^{n})\), where_
\[\lambda_{1}=\frac{q^{n-1}-q}{q-1},\ \ \lambda_{2}=\frac{q^{n-1}-1}{q-1}.\]
_That is, \(\lambda_{2}=\lambda_{1}+1\)._
**Proof** Any hyperplane of \(\mathcal{B}\) meets each \(G_{i}\) in an \((n-2)\)-flat, and these \((n-2)\)-flats are parallel. Suppose \(p_{1},p_{2}\) are two points in \(G_{1}\). There are \((q^{n-2}-1)/(q-1)\) \((n-2)\)-flats containing \(p_{1}\) and \(p_{2}\). Let \(\pi\) be one of them, and consider the hyperplanes of \(\mathcal{B}\) containing \(\pi\). There are \(q\) \((n-2)\)-flats in \(G_{2}\) parallel to \(\pi\). One of these flats and \(\pi\) determine a unique hyperplane of \(\mathcal{B}\). So, there are \(\lambda_{1}=(q^{n-2}-1)/(q-1)\times q\) hyperplanes of \(\mathcal{B}\) containing \(p_{1}\) and \(p_{2}\). Next, consider two points \(p_{1}\) in \(G_{1}\) and \(p_{2}\) in \(G_{2}\). There are \((q^{n-1}-1)/(q-1)\) \((n-2)\)-flats in \(G_{1}\) containing \(p_{1}\). An \((n-2)\)-flat in \(G_{1}\) and the point \(p_{2}\) in \(G_{2}\) determine a unique hyperplane of \(\mathcal{B}\). That is, \(\lambda_{2}=(q^{n-1}-1)/(q-1)\).
### Acknowledgments
This work was supported by JSPS KAKENHI Grant Numbers JP19K11866 and 21K13845.
|
2309.15236 | Probability distributions of atomic scattering lengths | The probability distributions of the real and imaginary parts of atomic
scattering lengths $a$ are derived, in a two-channel model that allows for
inelastic scattering to occur. While the real part of $a$ remains
Cauchy-distributed, as predicted for single channel scattering in the classic
work of Gribakin and Flambaum, the imaginary part of $a$ is seen to be strongly
peaked near zero. Two-body inelastic scattering rates may therefore be smaller
in general than a naive estimate would suggest. | John L. Bohn, Reuben R. W. Wang | 2023-09-26T20:02:15Z | http://arxiv.org/abs/2309.15236v1 | # Probability distributions of atomic scattering lengths
###### Abstract
The probability distributions of the real and imaginary parts of atomic scattering lengths \(a\) are derived, in a two-channel model that allows for inelastic scattering to occur. While the real part of \(a\) remains Cauchy-distributed, as predicted for single-channel scattering in the classic work of Gribakin and Flambaum, the imaginary part of \(a\) is seen to be strongly peaked near zero. Two-body inelastic scattering rates may therefore be smaller in general than a naive estimate would suggest.
## I Introduction
This note serves as both a greeting to Ravi Rau and a dispatch from the world of dilute, ultracold gases, where neutral atoms and molecules collide at typically sub-microKelvin temperatures. Even though Ravi never explicitly published in this area, his influence is strongly felt.
The atoms and molecules in these gases collide at sufficiently low energy that they are in the Wigner threshold limit, hence their scattering is strongly dominated by the familiar threshold laws. Ravi has been a tireless champion of threshold physics, summarized in his famous work with Fano Fano and Rau (1986), and in an influential review article Sadeghpour _et al._ (2000). His classic treatment of the Wannier threshold law for double ionization highlights the ability of a single quantity, the exponent characterizing the energy dependence \(E^{\alpha}\) of the process, to reveal information on detailed correlations of the charged particles Rau (1971).
In the somewhat more pedestrian world of ultracold collisions of neutral atoms, the relevant Wigner threshold laws are well known. Very typically, the collision is dominated by the lowest, \(s\) partial wave, and the elastic scattering phase shift is linear in wave number, \(\delta_{0}=-ak\). This \(k\)-dependence is standard; what varies from atom to atom, and what matters most in the context of ultracold gases, is the value of the prefactor, the scattering length \(a\).
At stake is the nature of the scattering cross section, which is responsible for bringing the gas to thermal equilibrium, and which therefore determines the ability to make an ultracold gas at all. Famously, the scattering length of \({}^{87}\)Rb is approximately \(a=100a_{0}\), \(a_{0}\) being the Bohr radius. This value is sufficiently large that evaporative cooling of this atom successfully led to the first Bose-Einstein condensate Anderson _et al._ (1995). By contrast, the isotope \({}^{85}\)Rb has a negative naturally-occurring scattering length Boesten _et al._ (1997). This leads to an unfortunately-placed Ramsauer-Townsend minimum in its cross section, limiting evaporative cooling Burke _et al._ (1998). (By various devices, scattering lengths can be altered to necessary values, but that is a different story for another Festschrift.)
Scattering lengths tend to be extremely sensitive functions of the potential energy surface, meaning that even for alkali atoms, they cannot be predicted from _ab initio_ theory (with the exception of one heroic recent result in Gronowski _et al._ (2020)). Generally, they are determined by thoughtful iterations of theory and experiment. In this way, scattering lengths for most combinations of alkali atoms are now known, some to high precision. To do so requires the evaluation of two scattering lengths, for the singlet and triplet Born-Oppenheimer potentials existing between these atoms. By extension, three scattering lengths of higher-spin chromium atoms have also been extracted Werner _et al._ (2005); Pavovic _et al._ (2005), and have proven adequate for describing data.
Beyond this, the situation quickly becomes untenable. Certain lanthanide atoms are perfectly amenable to laser cooling, yet their interactions are quite complex. For example, interactions of open-shell dysprosium atoms would require 81 distinct Born-Oppenheimer potentials, each with its own scattering length that presumably contributes to observed scattering Kotochigova and Petrov (2011). Extracting a quantitative model from data is, at present, considered inconceivable. Even more challenging will be the equivalent procedure in collisions of ultracold molecules, which represents a rapidly growing area of endeavor.
Faced with the difficulties of detailed analysis, it may prove useful to consider instead trends that one could follow over the breadth of possibilities among many collision partners. In the present note we will consider a statistical overview. Statistics in ultracold collisions was introduced by Gribakin and Flambaum Gribakin and Flambaum (1993), who derived, from semiclassical theory, the most likely value of the scattering length for long-range potentials that fall off as a power law, \(-1/r^{n}\), of the distance \(r\) between to atoms. The true scattering
lengths of various species should vary around this most-likely value, in such a way that, according to Gribakin and Flambaum, three-quarters of all naturally occurring scattering lengths should be positive, for an ordinary van der Waals potential with \(n=6\). What these authors did not quite do (but surely could have) is to describe the full distribution function of scattering lengths. In Sec. II we will complete this derivation, in preparation for the remainder of the article.
In addition to elastic scattering, it is extremely important to track inelastic scattering in the ultracold environment. Even the smallest of atomic energy spacings, say hyperfine energies, are orders of magnitude larger than the translational temperature of the gas. Thus an inelastic collision that releases this energy is a disaster: the products either leave the trap, or, perhaps worse, heat the remaining gas. The atoms are like waiters in a busy restaurant, delicately balancing trays full of cocktails. Should they collide, there will be a real mess.
In the jargon of cold collisions, these disruptive events are denoted by the technical term "bad." Generally, it is accepted that any bad collisions that are allowed by energy conservation and symmetry considerations tend to happen at high collision rates, and should therefore be avoided if possible. (Much ingenuity has gone toward finding ways to mitigate bad collisions, but again, this is not our story here.) Exceptions of course exist. In a serendipitous experiment at JILA, it was found that a mixture of \({}^{87}\)Rb atoms in two distinct hyperfine states not only survived evaporative cooling, but could be simultaneously Bose-condensed (Myatt _et al._, 1997). The anomalously low inelastic spin-exchange rate that allowed this miracle was quickly understood to rely on an interference between singlet and triplet scattering, that is, on the near-coincidence of singlet and triplet scattering lengths for this isotope (Kokkelmans _et al._, 1997; Julienne _et al._, 1997; Burke _et al._, 1997). Here was a statistical outlier.
In the context of ultracold atoms, it is worthwhile to know how likely it is that such a calamitous event will occur. That is, the question is one of probabilities. To this end, in Sec. III we extend the Gribakin-Flambaum model to a two-channel case that allows for inelastic scattering. We will cast the inelastic loss in terms of the imaginary part of the scattering length, and determine an approximate probability distribution for this quantity.
To do so requires welding together the long-range physics that determines the threshold law, with the short-range physics that governs the change in state. Here Ravi has also paved the way, stressing that multichannel quantum defect theory (MQDT) is an extremely versatile tool, far beyond its initial application to Rydberg atoms (Fano and Rau, 1986). The ideas and notations that ground our theory in the following are rooted in the seminal work of Greene, Rau, and Fano (Greene _et al._, 1982).
## II Single channel scattering lengths
We first consider \(s\)-wave scattering in a single channel with potential \(V(r)\), governed by the Schrodinger equation
\[\left(-\frac{\hbar^{2}}{2m_{r}}\frac{d^{2}}{dr^{2}}+V\right)\psi=E\psi, \tag{1}\]
where \(m_{r}\) is the reduced mass of the collision partners. For purposes of statistics, we envision an assembly of potentials \(V\), collected from an ensemble of potential collision partners across the periodic table. This variety can also include various Born-Oppenheimer curves for given partners, for example, the singlet and triplet curves of the alkalis, assumed to give scattering phase shifts independent of each other. Different isotopes of the same element are not considered to have independent phase shifts as they are, to a good approximation, related by simple mass scaling (Kitagawa _et al._, 2008).
To include this variety of potentials as our ensemble, it is essential to reduce them to a common system of reduced units. For threshold scattering, a relevant set of natural units is obtained from the long-range behavior. In this note we restrict attention to those potentials with long-range van der Waals behavior characterized by the form \(V(r)\approx-C_{6}/r^{6}\). The corresponding natural unit of length is
\[r_{6}=\left(\frac{2m_{r}C_{6}}{\hbar^{2}}\right)^{1/4}. \tag{2}\]
This scale tends to be of the order \(\approx 100a_{0}\) for many atoms; for Rb it is \(165a_{0}\). The short-range physics is not necessarily amenable to a simple scaling between species; indeed, this is where the joy of variety comes from. Such a scaling will not be necessary in the QDT picture we employ.
In the spirit of quantum defect theory, we identify standard solutions for the long-range potential, denoted \(\hat{f}\) and \(\hat{g}\). These are given a useful standardized form in the _magnum opus_ of Ruzic _et al._ (Ruzic _et al._, 2013), which we follow throughout. The functions are chosen so that the irregular function \(\hat{g}\to 0\) as \(r\rightarrow\infty\) in the zero-energy limit, a choice that maximizes the linear independence of \(\hat{f}\) and \(\hat{g}\) in numerical applications. With this choice, the reference function \(\hat{f}\) has phase shift \(\eta=-\bar{a}k\) defined by the scattering length
\[\bar{a}=\frac{1}{2\sqrt{2}}\,\frac{\Gamma(3/4)}{\Gamma(5/4)}\;r_{6}\approx 0.4780\;r_{6}, \tag{3}\]
which coincides exactly with the Gribakin-Flambaum most-likely scattering length (Gribakin and Flambaum, 1993).
The reference functions \(\hat{f}\) and \(\hat{g}\) are related to the energy-normalized reference functions \(f\) and \(g\) in the
usual way Greene _et al._ (1982):
\[\begin{pmatrix}f\\ g\end{pmatrix}=\begin{pmatrix}A^{1/2}&0\\ A^{-1/2}\mathcal{G}&A^{-1/2}\end{pmatrix}\begin{pmatrix}\hat{f}\\ \hat{g}\end{pmatrix}. \tag{4}\]
This defines two more QDT parameters, which Ruzic _et al._ work out explicitly in the \(s\)-wave threshold limit:
\[A^{1/2} =-(\bar{a}k)^{1/2}, \tag{5a}\] \[\mathcal{G} =(\bar{a}k)^{2}\left[-1+\frac{1}{3}\left(\frac{r_{6}}{\bar{a}} \right)^{2}\right]. \tag{5b}\]
The statistical model is derived in QDT as follows. For a given potential \(V\), one would solve the Schrodinger equation, matching its solution \(\psi\) to the reference functions at a convenient radius \(r=r_{0}\),
\[\psi=\hat{f}-\tilde{K}\hat{g}. \tag{6}\]
This would define the short-range \(K\)-matrix
\[\tilde{K}=\tan(\pi\mu) \tag{7}\]
in terms of a quantum defect \(\mu\). In the statistical model we do not consider any explicit potential \(V\), but rather _assume_ that the quantum defects from such a process would be uniformly distributed on the interval \(\mu\in[-1/2,1/2]\). In this way the vast differences in depth and shape of the potentials for many different atoms are rendered irrelevant. Whatever the atoms are actually doing down there, the net result is always encapsulated in the quantum defect \(\mu\).
By the rules of QDT, one then constructs the short-range phase shift \(\delta_{sr}\) via
\[\tan\delta_{sr}=\frac{A^{1/2}\tilde{K}A^{1/2}}{1+\mathcal{G}\tilde{K}}\approx \tilde{K}\bar{a}k, \tag{8}\]
here ignoring \(\mathcal{G}\tilde{K}\) as small compared to unity. This quantity will become relevant if and when we consider the effective range. The physical scattering phase shift is given by the sum of long- and short-range contributions,
\[\delta_{0}=\eta+\delta_{sr}\approx(-1+\tilde{K})\bar{a}k, \tag{9}\]
whereby the scattering length in units of \(\bar{a}\) is given by
\[\frac{a}{\bar{a}}=-\frac{1}{\bar{a}k}\delta_{0}=1-\tilde{K}. \tag{10}\]
To find the distribution of scattering lengths, we begin with the distribution of quantum defects,
\[P(\mu)=\begin{cases}1,&-\frac{1}{2}\leq\mu\leq\frac{1}{2}\\ 0,&\text{otherwise}\end{cases}. \tag{11}\]
This assumption along with Eq. (7) implies a distribution of short-range \(K\)-matrices related to the former by
\[P(\tilde{K})=\frac{1}{\pi}\frac{1}{\tilde{K}^{2}+1}. \tag{12}\]
Thus the short-range \(K\)-matrix is distributed as a Lorentzian or, in the language of probability theory, a Cauchy distribution. (That the tangent of a uniform distribution yields a Cauchy distribution is well-known. Nevertheless, this result and all the others that we use below are derived in the Appendix.)
Restoring the units, the distribution of scattering lengths is given by
\[P(a)=\frac{1}{\pi}\frac{\bar{a}}{(a-\bar{a})^{2}+\bar{a}^{2}}. \tag{13}\]
Significantly, the Cauchy distribution has neither a well-defined mean nor a well-defined standard deviation. It is, rather, characterized by its mode (most likely value) and its half-width at half-maximum, both of which are \(\bar{a}\) in this case. From this distribution we evaluate the fraction of scattering lengths that are positive,
\[\int_{0}^{\infty}daP(a)=\frac{3}{4}, \tag{14}\]
just as prophesied by Gribakin and Flambaum. Having this distribution, we can say other things, for example, half of all scattering lengths should lie within \(\bar{a}\) of the mode \(\bar{a}\).
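As a sanity check, the entire single-channel model can be simulated in a few lines. The sketch below assumes plain NumPy (it is not code from this note): quantum defects are drawn from (11), mapped through (7) and (10), and the sample reproduces the positive fraction (14).

```python
# A minimal Monte Carlo sketch of the single-channel statistics:
# a/abar = 1 - tan(pi*mu), with mu uniform on [-1/2, 1/2].
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(-0.5, 0.5, size=1_000_000)   # quantum defects, Eq. (11)
a_over_abar = 1.0 - np.tan(np.pi * mu)        # Eq. (10)

print("fraction positive:", np.mean(a_over_abar > 0))                  # ~0.75, Eq. (14)
print("fraction within abar of the mode:",
      np.mean(np.abs(a_over_abar - 1.0) < 1.0))                        # ~0.5
```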
## III Two channels: scattering and loss
Inelastic collisions that lead to bad outcomes are somewhat less universal than potential scattering, and in principle depend on the mechanism by which channel coupling occurs. Nevertheless, for the kinds of collisions envisioned here, this mechanism may be assumed to lie at short range and to be subsumed in the short-range \(K\)-matrix, regarded as a set of parameters of the theory. We therefore disregard, e.g., collisions of dipolar molecules, where torques exerted by the dipoles when they are far apart can drive inelastic scattering at large \(r\) (Avdeenkov and Bohn, 2002).
### Model and QDT
For the sake of simplicity, we consider a two-channel system, where the incident channel 1 is at threshold, while the other channel 2 is exothermic by some energy much greater than the collision energy; in particular it is not at threshold. The long-range potentials in both channels are assumed to scale as \(-C_{6}/r^{6}\), whereby the QDT functions are computed for each channel as above. In the incident channel near threshold, \(\eta_{1}=-\bar{a}k\), \(A_{1}^{1/2}=-(\bar{a}k)^{1/2}\), and as above we will not concern ourselves with \(\mathcal{G}_{1}\). In the outgoing channel which is far from threshold, \(A_{2}^{1/2}=1\) and the values of \(\eta_{2}\) and \(\mathcal{G}_{2}\) are irrelevant.
These two asymptotic channels are presumed to become coupled at short range, in a way that is well-approximated by a frame transformation. It is assumed that the short-range physics is described by two alternative channels, each with its own quantum defect \(\mu_{\lambda}\). In this approximation the short-range \(K\)-matrix is diagonal in the short-range basis and has the form
\[\tilde{K}^{sr}=\begin{pmatrix}\tan(\pi\mu_{1})&0\\ 0&\tan(\pi\mu_{2})\end{pmatrix}. \tag{15}\]
Significantly, we do not perform the usual MQDT step of eliminating closed channels, as there are none in this example. It should be remembered that we seek here the statistics of scattering lengths away from resonances.
In this \(2\times 2\) example the transformation between basis sets is a simple rotation through an angle \(\theta\). For any given collision, the value of \(\theta\) will be determined by exactly what the short-range and asymptotic channels are, including the spin structure of the atoms and the mixing of channels by ambient electromagnetic fields. To simplify the treatment we do not consider these details and assume that, across the ensemble of species and conditions considered, \(\theta\) is uniformly distributed in \(\theta\in[-\pi/2,\pi/2]\).
Expressed in the asymptotic basis, the short-range \(K\)-matrix in this notation then becomes
\[\tilde{K}=\begin{pmatrix}\cos^{2}\theta\tan(\pi\mu_{1})+\sin^{2}\theta\tan( \pi\mu_{2})&\cos\theta\sin\theta[\tan(\pi\mu_{2})-\tan(\pi\mu_{1})]\\ \cos\theta\sin\theta[\tan(\pi\mu_{2})-\tan(\pi\mu_{1})]&\sin^{2}\theta\tan( \pi\mu_{1})+\cos^{2}\theta\tan(\pi\mu_{2})\end{pmatrix}. \tag{16}\]
This leads to the asymptotic \(K\)-matrix
\[K =A^{1/2}\tilde{K}A^{1/2}\] \[=\begin{pmatrix}\bar{a}k\tilde{K}_{11}&-(\bar{a}k)^{1/2}\tilde{K} _{12}\\ -(\bar{a}k)^{1/2}\tilde{K}_{21}&\tilde{K}_{22}\end{pmatrix}, \tag{17}\]
followed by the \(S\)-matrix,
\[S=\begin{pmatrix}e^{-i\bar{a}k}&0\\ 0&e^{i\eta_{2}}\end{pmatrix}(I+iK)\left(I-iK\right)^{-1}\begin{pmatrix}e^{-i \bar{a}k}&0\\ 0&e^{i\eta_{2}}\end{pmatrix}. \tag{18}\]
Writing the resulting phase shift in channel 1 as \(S_{11}=\exp(2i\delta_{1})\), we define the complex scattering length in this channel via
\[a=\bar{a}(\alpha-i\beta)=-\frac{1}{k}\delta_{1}. \tag{19}\]
This defines the dimensionless quantities \(\alpha\) and \(\beta\), regarded as real and imaginary parts of the scattering length in units of \(\bar{a}\). Expanding \(S_{11}\) to linear order in \(k\), these quantities are given by
\[\alpha =1-\tilde{K}_{11}+\frac{\tilde{K}_{12}^{2}}{\tilde{K}_{22}^{2}+1 }\tilde{K}_{22}, \tag{20a}\] \[\beta =\frac{\tilde{K}_{12}^{2}}{\tilde{K}_{22}^{2}+1}. \tag{20b}\]
### Probability Distributions
The scattering observables \(\alpha\) and \(\beta\) are functions of the fundamental parameters of the model, \(\mu_{1}\), \(\mu_{2}\), \(\theta\), which are treated as random variables. By the standard formalism for transforming and composing random variables, one can then find the probability distributions for \(\alpha\) and \(\beta\). These transformations are carried out in detail in the Appendix. Here we present and explore the results. For purposes of illustration, we have run a simulation choosing \(10,000\) triples \((\mu_{1},\mu_{2},\theta)\) from their uniform distributions. The subsequent quantities of the theory can then be calculated and displayed as histograms.
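A minimal sketch of this simulation, assuming plain NumPy rather than whatever code was actually used, reads:

```python
# Sample 10,000 triples (mu1, mu2, theta), build the short-range K-matrix
# of Eq. (16), and form alpha and beta from Eqs. (20).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
mu1, mu2 = rng.uniform(-0.5, 0.5, (2, N))
theta = rng.uniform(-np.pi / 2, np.pi / 2, N)

t1, t2 = np.tan(np.pi * mu1), np.tan(np.pi * mu2)
c, s = np.cos(theta), np.sin(theta)

K11 = c**2 * t1 + s**2 * t2          # Eq. (16), diagonal
K22 = s**2 * t1 + c**2 * t2
K12 = c * s * (t2 - t1)              # Eq. (16), off-diagonal

alpha = 1.0 - K11 + K12**2 * K22 / (K22**2 + 1.0)   # Eq. (20a)
beta = K12**2 / (K22**2 + 1.0)                      # Eq. (20b)

# Histograms of alpha and beta reproduce the panels of Fig. 2.
print("median alpha:", np.median(alpha))   # near the mode, 1
print("median beta: ", np.median(beta))    # strongly peaked near zero
```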
We begin with the distribution of elements of the short-range \(K\)-matrix, \(\tilde{K}\). The model predicts that these are distributed according to
\[P(\tilde{K}_{11}) =\frac{1}{\pi}\frac{1}{\tilde{K}_{11}^{2}+1}, \tag{21a}\] \[P(\tilde{K}_{12}) =\frac{2}{\pi^{2}}\frac{1}{\sqrt{(\tilde{K}_{12})^{2}+1}}\sinh^{ -1}\left(\frac{1}{|\tilde{K}_{12}|}\right). \tag{21b}\]
Histograms of the numerical simulations of these quantities are plotted in Fig. 1, along with (red lines) the formulas in (21). In the upper panel we see that the distribution of the diagonal matrix element \(\tilde{K}_{11}\) is very well-described by the ordinary Cauchy distribution from the one-channel case. While not shown, the distribution of \(\tilde{K}_{22}\) is the same. Each diagonal matrix element is the weighted sum of variables \(\tan(\pi\mu_{\lambda})\) that are Cauchy-distributed. The weights add to unity, whereby the average is also Cauchy distributed. This is shown in detail in the Appendix.
More interesting, and somewhat unexpected, is the distribution of off-diagonal elements shown in the lower panel. This distribution is far more strongly peaked near zero than the Cauchy distribution, a result captured in the analytical formula (21b). The most likely value of
\(\tilde{K}_{12}\) is zero, but a FWHM is not possible to define here, as the distribution suffers a logarithmic divergence:
\[P(\tilde{K}_{12})\rightarrow\frac{2}{\pi^{2}}\ln\left(\frac{2}{|\tilde{K}_{12}|}\right)\quad\text{as }\tilde{K}_{12}\to 0. \tag{22}\]
One can, however, make the following comparison. For the Cauchy distribution that defines \(P(\tilde{K}_{11})\), half the distribution lies within \(\pm 1\) of zero; for the distribution \(P(\tilde{K}_{12})\), half the distribution is within \(\pm 0.55\). Thus, in spite of the divergence, unity (equivalently, \(\bar{a}\) in terms of scattering lengths) is still a relevant scale on which to consider the distribution.
We now turn to the final results, the distributions of dimensionless real and imaginary parts of the scattering length. These are displayed in Fig. 2, with \(\alpha\) in the upper panel and \(\beta\) in the lower, and are compared to the approximate analytical formulas
\[P(\alpha) =\frac{1}{\pi}\frac{1}{(\alpha-1)^{2}+1} \tag{23a}\] \[P(\beta) =\frac{1}{\pi^{2}}\frac{1}{\sqrt{\beta}}\left[\sinh^{-1}\left( \frac{1}{\sqrt{\beta}}\right)\right]^{2}. \tag{23b}\]
These two formulas are the main result of this note.
The real part, \(\alpha\), is well-described by the same Cauchy distribution (13) as in the single-channel case. The reason for this is clear from Eq. (20a): the main contribution to \(\alpha\) is given simply by \(1-\tilde{K}_{11}\), whose Cauchy distribution (21a) carries the result over trivially, just as in the one-channel case. The correction to this result, the second term of (20a), is proportional to \(\tilde{K}_{12}^{2}\), hence is heavily peaked around zero and changes the scattering length but little. In practice, this works out so that (23a) is an excellent approximation. We conclude that, away from resonances, the two-channel elastic scattering length is distributed the same as a single-channel scattering length.
As for the imaginary part \(\beta\), it is by its nature strictly non-negative, and is distributed sharply near zero, an expected behavior it inherits from \(\tilde{K}_{12}\). The analytical formula for the distribution is approximate, but seems to describe the peak at zero quite well. The inset in the lower panel of Fig. 2 is the same histogram, but plotted with counts on a logarithmic scale, to better emphasize the tail of the distribution. As can be seen, the formula somewhat underestimates the true distribution at large values of \(\beta\), but we will not concern ourselves with this detail.
Figure 1: Probability distributions of the diagonal (upper) and off-diagonal (lower) elements of the short-range \(K\)-matrix. In each case, the histogram is numerically sampled from the model in the text. The red curves are the analytical formulas for the distributions, given in (21). The analytical curves are re-normalized to give the same integral as the histogram over the range shown.
Figure 2: Probability distributions of the real (upper) and imaginary (lower) parts of the normalized scattering length \(a/\bar{a}=\alpha-i\beta\). In each case, the histogram is numerically sampled from the model in the text. The red curves are the analytical formulas for the distributions, given in (23). The inset in the lower panel represents the same data, but with the vertical axis on a logarithmic scale, to better show the tail of the distribution.
## IV Discussion
Within the model presented, a message stands out. In the case of scattering in a single potential, we know that the real part of the scattering length is Cauchy distributed as given above, and that the imaginary part is rigorously zero. The present results note that, if an additional channel is added into which scattering can occur, the real part of the scattering length remains Cauchy distributed, while the imaginary part still _tries very hard_ to remain close to zero.
It is not hard to imagine that the result for elastic scattering generalizes. Consider scattering in some asymptotic channel \(i\) in a multichannel system. Within the frame transformation approximation assumed in this model, the non-resonant \(K\)-matrix in this channel will be given by the weighted average of diagonal \(K\)-matrices in each of the short-range channels \(\lambda\):
\[\tilde{K}_{ii}=\sum_{\lambda}\langle i|\lambda\rangle\tan(\pi\mu_{\lambda}) \langle\lambda|i\rangle. \tag{24}\]
And since the sum of squares of the coefficients of transformation is unity, we again recover the Cauchy distribution from the single-channel case. Thus the Gribakin-Flambaum result is generalized to non-resonant multi-channel scattering.
Finally, let us put the result for the imaginary part of the scattering length into practical terms. The role of \(\beta\) is to track the flux that enters in channel 1 but departs in channel 2. Using the unitarity of the \(S\)-matrix,
\[|S_{12}|^{2}=1-|S_{11}|^{2}=1-|\exp(-2\bar{a}k\beta)|^{2}\approx 4\bar{a}k\beta, \tag{25}\]
giving the collision rate constant for inelastic collisions (regarded as bad),
\[\mathcal{K}_{\rm bad}=gv\frac{\pi}{k^{2}}|S_{12}|^{2}=g\left(\frac{\hbar k}{m _{r}}\right)\frac{\pi}{k^{2}}(4\bar{a}k\beta)=g\frac{4\pi\hbar}{m_{r}}\bar{a}\beta. \tag{26}\]
Here \(v=\hbar k/m_{r}\) is the collision velocity, and \(g\) is a factor that accounts for symmetrization: \(g=1\) unless the initial channel contains two identical atoms in identical internal states, in which case \(g=2\). In the event that these bad collisions lead to loss from the trap, their number density \(n\) diminishes in time according to
\[\frac{dn}{dt}=-\mathcal{K}_{\rm bad}n^{2}, \tag{27}\]
assuming that the loss is dominated by two-body scattering events.
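Eq. (27) integrates in closed form to \(n(t)=n_{0}/(1+\mathcal{K}_{\rm bad}n_{0}t)\). The snippet below illustrates this solution; the values of \(n_{0}\) and \(\mathcal{K}_{\rm bad}\) are placeholders for illustration only, not measured numbers.

```python
# Exact solution of the two-body loss law dn/dt = -K_bad * n^2, Eq. (27).
import numpy as np

n0 = 1e12        # initial density, cm^-3 (illustrative)
K_bad = 1e-12    # loss rate constant, cm^3/s (illustrative)
t = np.linspace(0.0, 5.0, 6)          # seconds
n = n0 / (1.0 + K_bad * n0 * t)       # n(t) = n0 / (1 + K_bad n0 t)
print(n)
```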
To put the result into perspective, consider the following. Suppose you are building a new laboratory to cool and trap an atomic or molecular species that has not been trapped before, so that nothing is known about its collision properties. (I do not think this is something Ravi is likely to do, but one never knows!) Suppose, further, that some kind of bad collision process is possible, and that it occurs at short range. This may include spin-exchange for atoms, chemical reactions for molecules, or perhaps light-assisted collisions for either. In the context of collisions, all you know are the reduced mass and the \(C_{6}\) coefficient, which can often be estimated in perturbation theory.
From this, you would like some sense of the size of the rate constant for bad collisions. You can construct a typical scale for this quantity by disregarding the influence of \(\beta\), thus defining a reference rate constant
\[\mathcal{K}_{\rm ref}=g\frac{4\pi\hbar}{m_{r}}\bar{a}. \tag{28}\]
In terms of this reference value, the true rate constant will be given by
\[\mathcal{K}_{\rm bad}=\mathcal{K}_{\rm ref}\;\beta. \tag{29}\]
That is, the values of \(\mathcal{K}_{\rm bad}\), in units of the reference value \(\mathcal{K}_{\rm ref}\), are distributed just as the value of \(\beta\) is in (23b).
In this spirit, we present in Fig. 3 the cumulative probability distribution for the normalized bad rate constant. From this figure we read that there is an approximately 80% probability that the actual rate constant is smaller than \(\mathcal{K}_{\rm ref}\); the easiest estimate is likely an over-estimate. Even better: the odds are about 34% that the actual rate constant is 100 times smaller than \(\mathcal{K}_{\rm ref}\), thus bad scattering has at least a fighting chance of not being as bad as feared. This is the ultimate consequence of the peaking of \(\beta\) around zero.

Figure 3: Cumulative probability distribution for the rate constant \(\mathcal{K}_{\rm bad}\) for “bad” collisions, normalized by the reference rate \(\mathcal{K}_{\rm ref}=g(4\pi\hbar/m_{r})\bar{a}\).
To return to the context of the mixed-BEC experiment in (Myatt _et al._, 1997), for rubidium we expect a reference rate constant of \(\mathcal{K}_{\rm ref,Rb}=7.7\times 10^{-11}\) cm\({}^{3}\)/s. The observed value, \(\mathcal{K}=2.2\times 10^{-14}\) cm\({}^{3}\)/s, is 3500 times smaller. From our simple theory, finding a rate this small or smaller is an event with probability \(\approx 12\%\).
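Both numbers can be checked directly from (26)-(28) and (23b). The following sketch assumes SciPy for constants and quadrature, and substitutes \(u=\sqrt{\beta}\) to tame the integrable endpoint of (23b):

```python
# Cross-check of the Rb reference rate constant and the ~12% probability.
import numpy as np
from scipy.constants import hbar, atomic_mass, physical_constants
from scipy.integrate import quad

a0 = physical_constants["Bohr radius"][0]
m_r = 87 * atomic_mass / 2                    # reduced mass of two 87Rb atoms
abar = 0.4780 * 165 * a0                      # abar = 0.478 r6, with r6 = 165 a0 for Rb
K_ref = 4 * np.pi * hbar / m_r * abar * 1e6   # g = 1 (distinct states); m^3/s -> cm^3/s
print(K_ref)                                  # ~7.7e-11 cm^3/s

# P(beta < 1/3500): integrate Eq. (23b) with the substitution u = sqrt(beta).
cdf, _ = quad(lambda u: (2 / np.pi**2) * np.arcsinh(1 / u) ** 2,
              0.0, np.sqrt(1 / 3500))
print(cdf)                                    # ~0.12
```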
The simple distributions presented here are of course subject to assumptions of the model. For example, they refer to scattering with only a single loss channel. More significantly, the result assumes that the rotation angle \(\theta\) is uniformly distributed. Nonetheless, the results are emblematic of future possibilities, where statistical understanding of ultracold collisions can be explored through the lens of MQDT.
This work is supported by the National Science Foundation under Grant No. PHY2110327. JLB gratefully acknowledges advice and encouragement from Ravi Rau over the years, particularly in graduate school when things may have gone off the rails.
## Appendix A Transformation of Probability Distributions
Given certain variables with defined probability distribution functions (pdfs), it is a standard matter to find the pdfs of combinations of these variables. The results used here are as follows. Suppose \(X\) is a random variable with probability distribution \(P_{X}(x)\). We now change variables to a new random variable \(Y=Y(X)\), given as a function of the original. Then the new pdf is
\[P_{Y}(y)=P_{X}(x)\Big{|}\frac{dy}{dx}\Big{|}^{-1}, \tag{10}\]
where on the right the inversion \(x=x(y)\) is implied.
Given two pdfs, \(P_{X}(x)\), \(P_{Y}(y)\), assumed to be independent, the pdf of their sum \(Z=X+Y\) is given by
\[P_{Z}(z) =\int dx\int dyP_{X}(x)P_{Y}(y)\delta(x+y-z) \tag{11}\] \[=\int dxP_{X}(x)P_{Y}(z-x), \tag{12}\]
while the pdf of their product \(W=XY\) is given by
\[P_{W}(w) =\int dx\int dyP_{X}(x)P_{Y}(y)\delta(w-xy) \tag{13}\] \[=\int dxP_{X}(x)P_{Y}\left(\frac{w}{x}\right)\frac{1}{|x|}. \tag{14}\]
In both cases the limits of integration are those appropriate to the ranges of the original pdfs. In practice, we evaluate these integrals in Mathematica, at least up to the point where the resulting expression, even if in principle analytic, is no longer useful to look at. In the following we will omit the subscript on \(P\), the random variable being assumed identified by the argument.
For example, if a quantum defect is distributed according to \(P(\mu)=1\) for \(\mu\in[-1/2,1/2]\), then the corresponding \(K\)-matrix \(\tilde{K}=\tan(\pi\mu)\) has pdf
\[P(\tilde{K}) =P\Big{(}\mu(\tilde{K})\Big{)}\left|\frac{d\tilde{K}}{d\mu} \right|^{-1}\] \[=\frac{1}{\pi}\left(\frac{1}{\sqrt{1+\tilde{K}^{2}}}\right)^{2}\] \[=\frac{1}{\pi}\frac{1}{\tilde{K}^{2}+1}. \tag{15}\]
Next we construct the pdf for the short-range \(K\)-matrix. For example,
\[\tilde{K}_{11}=\cos^{2}\theta\tan(\pi\mu_{1})+\sin^{2}\theta\tan(\pi\mu_{2}). \tag{16}\]
Each random variable \(t_{i}=\tan\pi\mu_{i}\) is Cauchy distributed. Scaling these variables to, for example, \(t_{a}=at\) yields the distribution
\[P(t_{a})=\frac{1}{\pi}\frac{|a|}{t_{a}^{2}+a^{2}}, \tag{17}\]
with FWHM \(|a|\). Thus, if \(t_{a}=a\tan(\pi\mu_{1})\) and \(t_{b}=b\tan(\pi\mu_{2})\) are two such scaled variables, their sum \(t_{ab}=t_{a}+t_{b}\) has distribution
\[P(t_{ab}) =\int_{-\infty}^{\infty}dt_{a}\frac{1}{\pi}\frac{|a|}{t_{a}^{2}+ a^{2}}\frac{1}{\pi}\frac{|b|}{(t_{ab}-t_{a})^{2}+b^{2}}\] \[=\frac{1}{\pi}\frac{|a|+|b|}{t_{ab}^{2}+(|a|+|b|)^{2}}. \tag{18}\]
From this it follows that, for our matrix element \(\tilde{K}_{11}\), with \(a=\cos^{2}\theta\), \(b=\sin^{2}\theta\), we have
\[P(\tilde{K}_{11})=\frac{1}{\pi}\frac{1}{\tilde{K}_{11}^{2}+1}. \tag{19}\]
The same is true for \(\tilde{K}_{22}\).
The off-diagonal element of the short-range \(K\)-matrix is distributed quite differently.
\[\tilde{K}_{12}=\frac{1}{2}\sin(2\theta)(t_{2}-t_{1}), \tag{20}\]
with \(\theta\) distributed uniformly through \(\theta\in[-\pi/2,\pi/2]\). The pdf for \(u=\sin(2\theta)/2\) (\(u\in[-1/2,1/2]\)) is given by (with a factor of 2, since two values of \(\theta\) map to each \(u\))

\[P(u) =\frac{2}{\pi}\Big{|}\cos 2\theta\Big{|}^{-1}\] \[=\frac{2}{\pi}\frac{1}{\sqrt{1-\sin^{2}2\theta}}\] \[=\frac{2}{\pi}\frac{1}{\sqrt{1-4u^{2}}}. \tag{21}\]
Meanwhile, the pdf of the difference \(t=t_{2}-t_{1}\) is
\[P(t) =\int_{-\infty}^{\infty}dt_{1}\frac{1}{\pi}\frac{1}{t_{1}^{2}+1} \frac{1}{\pi}\frac{1}{(t-t_{1})^{2}+1}\] \[=\frac{2}{\pi}\frac{1}{t^{2}+4}. \tag{113}\]
This is another Cauchy distribution, but one with twice the FWHM; this is a special case of the sum rule derived above. Finally, the product is composed to give
\[P(\tilde{K}_{12}) \propto\int_{-1/2}^{1/2}du\frac{1}{\sqrt{1-4u^{2}}}\frac{1}{( \tilde{K}_{12}/u)^{2}+4}\frac{1}{|u|}, \tag{114}\] \[P(\tilde{K}_{12}) =\frac{2}{\pi^{2}}\frac{1}{\sqrt{(\tilde{K}_{12})^{2}+1}}\sinh^{ -1}\left(\frac{1}{|\tilde{K}_{12}|}\right). \tag{115}\]
Here the argument of the inverse hyperbolic sine function makes the distribution divergent at \(\tilde{K}_{12}=0\), emphasizing small values of this parameter. Yet, the divergence is logarithmic, thus maintaining normalizability.
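This distribution is easy to verify by sampling; the following Monte Carlo check assumes plain NumPy and compares a histogram of \(\tilde{K}_{12}\) against the closed form (115):

```python
# Compare sampled K12 = sin(2*theta)/2 * (t2 - t1) against Eq. (115).
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
t = np.tan(np.pi * rng.uniform(-0.5, 0.5, (2, N)))
theta = rng.uniform(-np.pi / 2, np.pi / 2, N)
K12 = 0.5 * np.sin(2 * theta) * (t[1] - t[0])

hist, edges = np.histogram(K12, bins=np.linspace(-3, 3, 121), density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
exact = (2 / np.pi**2) * np.arcsinh(1 / np.abs(mid)) / np.sqrt(mid**2 + 1)
# Agreement is at the percent level away from zero; the bins straddling
# zero smooth out the logarithmic divergence by their finite width.
print(np.abs(hist - exact)[np.abs(mid) > 0.1].max())
```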
To get to the distribution for the imaginary part of the scattering length requires yet a few more steps. Given that \(x=\tilde{K}_{22}\) is Cauchy distributed as above, define \(v=1/(x^{2}+1)\). Well,

\[\frac{dv}{dx} =-\frac{2x}{(x^{2}+1)^{2}}=-2xv^{2}, \tag{116}\] \[P(v) \propto v\frac{1}{|x|v^{2}}\] (117) \[=\frac{1}{\pi}\frac{1}{\sqrt{v(1-v)}}, \tag{118}\]
where \(v\in[0,1]\). Similarly, setting \(y=\tilde{K}_{12}\) and \(w=y^{2}\), we have
\[P(w)=\frac{2}{\pi^{2}}\frac{1}{\sqrt{w(w+1)}}\sinh^{-1}\left(\frac{1}{\sqrt{w }}\right). \tag{119}\]
In this notation, we have
\[\beta=\frac{1}{\tilde{K}_{22}^{2}+1}\tilde{K}_{12}^{2}=vw. \tag{120}\]
Then the distribution of \(\beta\) is given formally by
\[P(\beta) =\int_{0}^{1}dv\frac{1}{\pi}\frac{1}{\sqrt{v(1-v)}}\frac{2}{\pi^ {2}}\frac{1}{\sqrt{(\beta/v)(\beta/v+1)}}\] \[\qquad\qquad\times\sinh^{-1}\left(\frac{1}{\sqrt{\beta/v}} \right)\frac{1}{v} \tag{121}\] \[=\frac{1}{\sqrt{\beta}}\frac{2}{\pi^{3}}\int_{0}^{1}dv\frac{1}{ \sqrt{v(1-v)}}\frac{1}{\sqrt{v+\beta}}\sinh^{-1}\left(\sqrt{\frac{v}{\beta}} \right).\]
This expression is somewhat intractable, or at least, Mathematica could not seem to tract it.
We therefore make an approximation. We regard \(\beta\) as fundamentally determined by the factor \(\tilde{K}_{12}^{2}\), as modified somewhat by \(v=1/(\tilde{K}_{22}^{2}+1)\). The probability distribution for \(v\) is seen to be strongly peaked around \(v=0\) and \(v=1\). For values of \(v\) near unity, the factor \(\tilde{K}_{12}^{2}\) is hardly changed, whereas when \(v\approx 0\), its contribution to \(\beta\) is dramatically reduced. The influence of \(v\) is therefore approximately accounted for by the simplified distribution
\[P^{\prime}(v)=\frac{1}{2}\frac{1}{\sqrt{v}},\quad v\in[0,1]. \tag{122}\]
With this approximation, the probability distribution for \(\beta\) becomes relatively simple:
\[P(\beta) \approx\frac{1}{\sqrt{\beta}}\frac{1}{\pi^{2}}\int_{0}^{1}dv\frac {1}{\sqrt{v}}\frac{1}{\sqrt{v+\beta}}\sinh^{-1}\left(\sqrt{\frac{v}{\beta}} \right).\] \[=\frac{1}{\pi^{2}}\frac{1}{\sqrt{\beta}}\left[\sinh^{-1}\left( \frac{1}{\sqrt{\beta}}\right)\right]^{2}. \tag{123}\]
This formula does a reasonable job of focusing the probability heavily toward \(\beta=0\).
2309.12559 | Invariant Learning via Probability of Sufficient and Necessary Causes | Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang | 2023-09-22T01:06:16Z | http://arxiv.org/abs/2309.12559v5

# Invariant Learning via Probability of Sufficient and Necessary Causes
###### Abstract
Out-of-distribution (OOD) generalization is indispensable for learning models in the wild, where the testing distribution is typically unknown and different from the training one. Recent methods derived from causality have shown great potential in achieving OOD generalization. However, existing methods mainly focus on the invariance property of causes, while largely overlooking the property of _sufficiency_ and _necessity_ conditions. Namely, a necessary but insufficient cause (feature) is invariant to distribution shift, yet it may not have the required accuracy. By contrast, a sufficient yet unnecessary cause (feature) tends to fit specific data well but may carry a risk when adapting to a new domain. To capture the information of sufficient and necessary causes, we employ a classical concept, the probability of sufficient and necessary causes (PNS), which indicates the probability that one is the necessary and sufficient cause. To associate PNS with OOD generalization, we propose the PNS risk and formulate an algorithm to learn a representation with a high PNS value. We theoretically analyze and prove the generalizability of the PNS risk. Experiments on both synthetic and real-world benchmarks demonstrate the effectiveness of the proposed method. The detailed implementation can be found at the GitHub repository: [https://github.com/ymy4323460/CaSN](https://github.com/ymy4323460/CaSN).
## 1 Introduction
The traditional supervised learning methods heavily depend on the in-distribution (ID) assumption, where the training data and test data are sampled from the same data distribution (Shen et al., 2021; Peters et al., 2016). However, the ID assumption may not be satisfied in some practical scenarios like distribution shift (Zhang et al., 2013; Sagawa et al., 2019), which leads to the failure of these traditional supervised learning methods. To relax the ID assumption, researchers have recently started to study a different learning setting called _out-of-distribution_ (OOD) _generalization_. OOD generalization aims to train a model using the ID data such that the model generalizes well in the unseen test data that share the same semantics with ID data (Li et al., 2018; Ahuja et al., 2021).
Recent works have proposed to solve the OOD generalization problem through the lens of causality (Peters et al., 2016; Pfister et al., 2019; Rothenhausler et al., 2018; Heinze-Deml et al., 2018; Gamella and Heinze-Deml, 2020; Oberst et al., 2021; Chen et al., 2022). These works focus on learning invariant representation, aiming to capture the cause of the labels. By learning this representation, one can bridge the gap between the ID training data and unknown OOD test data, and thus mitigate
the negative impact of the distribution shift between the ID and OOD distributions. Among these works, invariant risk minimization (IRM) (Arjovsky et al., 2019) is the most representative method, targeting the identification of an invariant representation and classifier using a bi-level optimization algorithm. In follow-up works, many efforts have been devoted to further extending the original invariant learning framework (Chen et al., 2022; Ahuja et al., 2020; Lu et al., 2021; Liu et al., 2021; Liu et al., 2021; Lin et al., 2022).
Noticeably, the aforementioned invariant learning methods mainly focus on learning the invariant causal representation, which may contain non-essential information that is neither necessary nor sufficient (Pearl, 2009). In image classification tasks, necessity means that the label no longer holds if the feature disappears, while sufficiency means that the presence of the feature determines the correctness of the label. If the feature extractor learns a representation that is invariant but fails to satisfy sufficiency or necessity, the model's generalization ability may deteriorate. As an illustrative example (see Figure 1(a)), suppose that the training data only contain images of cats with feet and that we are interested in learning a model for a cat prediction task. If the model captures the invariant feature "cat feet", then the learned model is likely to make mistakes on OOD data containing cats without the "cat feet" feature. The "cat feet" example demonstrates a representation that contains sufficient but unnecessary causal information: "cat feet" can predict the label "cat", but a cat image might not contain "cat feet". Analogously, there are also representations that are necessary but not sufficient (the feature "pointy ear" in Figure 1(a)). In Section 2.2, we present more examples to enhance the understanding of sufficiency and necessity.
This paper proposes achieving OOD generalization using _essential causal information_, which builds upon the probability of _necessity_ and _sufficiency_ (PNS) (Pearl, 2009). In this paper, we introduce the PNS risk. A low PNS risk implies that the representation contains both the necessary and sufficient causal information from the observational data with a high level of confidence. We provide theoretical analysis establishing that the risk on unseen test domains can be approximated by the risk on source data. Based on these theoretical results, we discuss the PNS risk in the context of a semantically separable representation space and propose an algorithm for learning a representation that contains the information of both sufficient and necessary causes from training data (ID data) under the different causal assumptions in Figure 1(b). The main contributions of this paper are as follows:
Firstly, we propose a new learning risk--the PNS risk--to estimate the sufficiency and necessity of the information contained in the learned representation. Secondly, we theoretically analyze the PNS risk under the OOD problem and bound the gap between the PNS risk on the test domain distribution and the risk on source data. Lastly, we propose an algorithm that captures sufficient and necessary causal representations with a low PNS risk on test domains. Experiments on synthetic and real-world benchmarks are conducted to show the effectiveness of the algorithm over state-of-the-art methods.
## 2 Preliminaries
### Learning Setups
**Domains.** Let \(\mathbf{X}\in\mathcal{X}\subset\mathbb{R}^{D}\) be the observable feature variable and \(Y\in\mathcal{Y}\) be the label. In this paper, we mainly focus on binary classification tasks, i.e., the label space \(\mathcal{Y}=\{0,1\}\). \(\mathcal{S}\) is a joint distribution \(P_{s}(\mathbf{X},Y)\) defined over \(\mathcal{X}\times\mathcal{Y}\) on the source domain. Similarly, the unseen test domain is
Figure 1: (a) Examples for causal sufficiency and necessity in the cat classification. (b) The causal graph for OOD generalization problem. The arrows denote the causal generative direction and the dashed line connects the spurious correlated variables. Notations are formally defined in Section 2.1.
\(\mathcal{T}:=P_{t}(\mathbf{X},Y)\). We also set \(\mathcal{T}_{\mathbf{X}}:=P_{t}(\mathbf{X})\) to be the marginal distribution over the variable \(\mathbf{X}\) on the test domain. Similarly, \(\mathcal{S}_{\mathbf{X}}:=P_{s}(\mathbf{X})\) is the marginal distribution over \(\mathbf{X}\) on the source domain.
**Assumption and model.** We summarize the causal graph for OOD generalization in Figure 1(b), inspired by the partition into content (invariant) features and style (variant) features (Zhang et al., 2022). There are the invariant feature \(\mathbf{C}\in\mathbb{R}^{d}\) and the domain-specific variable (i.e., domain indicator) \(V\in\{1,\cdots,n\}\). A common _assumption_ of OOD generalization is that there exists a latent causal variable \(\mathbf{C}\in\mathbb{R}^{d}\) that maintains the invariance property across domains (see Figure 1(b)), i.e., \(P_{s}(Y|\mathbf{C}=\mathbf{c})=P_{t}(Y|\mathbf{C}=\mathbf{c})\) (Arjovsky et al., 2019). Built upon this assumption, we define an invariant predictor by using a simple linear classifier \(\mathbf{w}:\mathbb{R}^{d}\rightarrow\mathcal{Y}\) on the causal features to get the label \(y=\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\). Since the causal variable \(\mathbf{C}\) cannot be directly observed, we infer \(\mathbf{C}\) from the observational data \(\mathbf{x}\sim\mathbf{X}\). Then, the invariant predictor with the invariant representation inference model is defined as below.
\[y=\mathrm{sign}[\mathbb{E}_{\mathbf{c}\sim P_{t}(\mathbf{C}|\mathbf{X}= \mathbf{x})}\mathbf{w}^{\top}\mathbf{c}]. \tag{1}\]
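As a concrete illustration (a schematic NumPy sketch, not the authors' released code), the predictor in Eq. (1) can be realized by sampling from a hypothetical Gaussian inference model and averaging the classifier scores:

```python
# Sketch of the invariant predictor in Eq. (1): draw samples c ~ P(C|X=x)
# from an assumed Gaussian inference model and average w^T c.
import numpy as np

rng = np.random.default_rng(0)

def predict(x, enc_mean, enc_logstd, w, n_samples=64):
    """Return sign(E_{c ~ P(C|X=x)}[w^T c]) for a single input x."""
    mean, std = enc_mean(x), np.exp(enc_logstd(x))
    c = mean + std * rng.standard_normal((n_samples, mean.size))  # sampled representations
    return np.sign(np.mean(c @ w))

# Toy usage with linear "encoders" (all parameters here are illustrative).
D, d = 8, 3
A = rng.standard_normal((d, D))
w = rng.standard_normal(d)
x = rng.standard_normal(D)
print(predict(x, lambda x: A @ x, lambda x: np.full(d, -1.0), w))
```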
### Probability of Sufficient and Necessary Cause
Existing invariant learning strategies (Rojas-Carulla et al., 2018; Pfister et al., 2019; Arjovsky et al., 2019) only consider the invariance property. However, invariant representations can be further divided into three types, each containing different sufficient and necessary causal information.
**(i) Sufficient but unnecessary causes \(A\):** The cause \(A\) leads to the effect \(B\), but when observing the effect \(B\), it is hard to confirm that \(A\) is the actual cause (see the example in Figure 1(a)). **(ii) Necessary but insufficient causes \(A\):** Knowing the effect \(B\), we confirm the cause is \(A\), but the cause \(A\) might not lead to the effect \(B\). The feature "pointy ear" in cat prediction is a typical example: when the ear shape is not pointy, we can confirm it is not a cat; however, a fox has a similar ear shape to a cat, so "pointy ear" is not a stable feature for predicting cats. **(iii) Necessary and sufficient causes \(A\):** Knowing the effect \(B\) confirms the cause \(A\), while observing \(A\) leads to \(B\). In the cat and fox classification task, "short mouth" could be a necessary and sufficient cause, because the feature "short mouth" allows us to distinguish a cat from a fox, and when we know there is a cat, "short mouth" must exist.
In order to learn invariant representations \(\mathbf{C}\) that contain both sufficient and necessary causal information, we refer to the concept of _Probability of Necessary and Sufficient_ (PNS) (Chapter 9 in Pearl (2009)), which is formally defined below.
**Definition 2.1** (Probability of Necessary and Sufficient (PNS) (Pearl, 2009)).: Let \(\mathbf{c}\) and \(\overline{\mathbf{c}}\), with \(\overline{\mathbf{c}}\neq\mathbf{c}\), be two specific values of the causal variable \(\mathbf{C}\). The probability that \(\mathbf{C}\) is the necessary and sufficient cause of \(Y\) on the test domain \(\mathcal{T}\) is

\[\begin{split}\text{PNS}(\mathbf{c},\overline{\mathbf{c}}):=& \underbrace{P_{t}(Y_{do(\mathbf{C}=\mathbf{c})}=y\mid\mathbf{C}=\overline{\mathbf{c}},Y\neq y)}_{\text{sufficiency}}P_{t}(\mathbf{C}=\overline{\mathbf{c}},Y\neq y)\\ +&\underbrace{P_{t}(Y_{do(\mathbf{C}=\overline{\mathbf{c}})}\neq y\mid\mathbf{C}=\mathbf{c},Y=y)}_{\text{necessity}}P_{t}(\mathbf{C}=\mathbf{c},Y=y).\end{split} \tag{2}\]
In the above definition, the notion \(P(Y_{do(\mathbf{C}=\overline{\mathbf{c}})}\neq y|\mathbf{C}=\mathbf{c},Y=y)\) means that we study the probability of \(Y\neq y\) when we force the manipulable variable \(\mathbf{C}\) to be a fixed value \(do(\mathbf{C}=\overline{\mathbf{c}})\) (do-operator) given a certain factual observation \(Y=y\) and \(\mathbf{C}=\mathbf{c}\). The first and second terms in PNS correspond to the probabilities of sufficiency and necessity, respectively. Variable \(\mathbf{C}\) has a high probability to be the sufficient and necessary cause of \(Y\) when the PNS value is large. Computing the counterfactual probability is a challenging problem since collecting the counterfactual data is difficult, or even impossible in real-world systems. Fortunately, PNS defined on counterfactual distribution can be directly estimated by the data under proper conditions, i.e., Exogeneity and Monotonicity.
**Definition 2.2** (Exogeneity (Pearl, 2009)).: Variable \(\mathbf{C}\) is exogenous relative to variable \(Y\) w.r.t. the source and test domains \(\mathcal{S}\) and \(\mathcal{T}\), if the intervention probability is identified by the conditional probability: \(P_{s}(Y_{do(\mathbf{C}=\mathbf{c})}=y)=P_{s}(Y=y|\mathbf{C}=\mathbf{c})\) and \(P_{t}(Y_{do(\mathbf{C}=\mathbf{c})}=y)=P_{t}(Y=y|\mathbf{C}=\mathbf{c})\).
**Definition 2.3** (Monotonicity (Pearl, 2009)).: \(Y\) is monotonic relative to \(\mathbf{C}\) if and only if either \(P(Y_{do(\mathbf{C}=\mathbf{c})}=y,\,Y_{do(\mathbf{C}=\overline{\mathbf{c}})}\neq y)=0\) or \(P(Y_{do(\mathbf{C}=\mathbf{c})}\neq y,\,Y_{do(\mathbf{C}=\overline{\mathbf{c}})}=y)=0\).
The definition of Exogeneity states that the gap between the intervention and conditional distributions vanishes when \(\mathbf{C}\) is exogenous relative to \(Y\), and the definition of Monotonicity describes the monotonic effect of the causal variable \(\mathbf{C}\) on \(Y\). Based on Definitions 2.2 and 2.3, the identifiability of PNS in Definition 2.1 is described by the following lemma.
**Lemma 2.4** (Pearl (2009)).: _If \(\mathbf{C}\) is exogenous relative to \(Y\), and \(Y\) is monotonic relative to \(\mathbf{C}\), then_
\[\text{PNS}(\mathbf{c},\mathbf{\bar{c}})=\underbrace{P_{t}(Y=y|\mathbf{C}= \mathbf{c})}_{\text{sufficiency}}-\underbrace{P_{t}(Y=y|\mathbf{C}=\mathbf{\bar{c} })}_{\text{necessity}}. \tag{3}\]
According to Lemma 2.4, the computation of PNS is feasible through the observation data under Exogeneity and Monotonicity. This allows us to quantify PNS when counterfactual data is unavailable. The proof of Lemma 2.4 is provided by Pearl (2009). Wang & Jordan (2021) further extend the proof by incorporating probabilistic computation, as opposed to the logical calculation used in Pearl (2009).
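As a toy illustration of Lemma 2.4 (a sketch with a hypothetical logistic conditional, not a model from this paper), the PNS value reduces to a difference of two conditional probabilities:

```python
# PNS(c, cbar) = P(Y=y | C=c) - P(Y=y | C=cbar) under Exogeneity and
# Monotonicity; here p is an illustrative one-dimensional logistic model.
import numpy as np

def pns(p_y_given_c, c, cbar, y=1):
    return p_y_given_c(c, y) - p_y_given_c(cbar, y)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
p = lambda c, y: sigmoid(c) if y == 1 else 1.0 - sigmoid(c)
print(pns(p, c=2.0, cbar=-2.0))   # ~0.76: c is close to a sufficient and necessary cause
```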
## 3 PNS Risk Modeling
This section presents the PNS-based risk for invariant learning in the OOD problem. The risk on test domains is a PNS-value evaluator, which is bounded by the tractable risk on the training domain.
### PNS Risk
In this section, we introduce the PNS risk, which is a PNS-value estimator. The risk estimates the PNS value of the representation distribution \(P_{t}(\mathbf{C}|\mathbf{X}=\mathbf{x})\) inferred from \(\mathbf{X}\) on an unseen test domain \(\mathcal{T}\). The risk increases when the representation contains less necessary and sufficient information, which can be caused by data distribution shifts. The PNS risk is based on the definition of \(\text{PNS}(\mathbf{c},\overline{\mathbf{c}})\). As \(\overline{\mathbf{c}}\) represents the intervention value, it is not necessary for it to be a sample from the same distribution as the causal variable \(\mathbf{C}\). Thus, we define an auxiliary variable \(\overline{\mathbf{C}}\in\mathbb{R}^{d}\) (with the same range as \(\mathbf{C}\)) and sample \(\overline{\mathbf{c}}\) from its distribution \(P_{t}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\). In the learning method, we use the notations \(P_{t}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})\) and \(P_{t}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\) to denote the estimated distributions, which are parameterized by \(\phi\) and \(\xi\), respectively. Let \(\mathrm{I}(A)\) be an indicator function, where \(\mathrm{I}(A)=1\) if \(A\) is true; otherwise, \(\mathrm{I}(A)=0\). The PNS risk based on Definition 2.1 and Lemma 2.4 is formally defined in Eq. (4) below.
\[\begin{split} R_{t}(\mathbf{w},\phi,\xi):=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{T}}\big{[}\mathbb{E}_{\mathbf{c}\sim P_{t}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\neq y]\\ +\mathbb{E}_{\overline{\mathbf{c}}\sim P_{t}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\overline{\mathbf{c}})=y]\big{]}.\end{split} \tag{4}\]
As the identifiability result in Lemma 2.4 is based on the Exogeneity 2.2 and Monotonicity 2.3, we modify the original risk equation, Eq. (4), to ensure compliance with these conditions. Below, we provide Monotonicity measurement and discuss the satisfaction of Exogeneity in Section 4.3.
**Satisfaction of monotonicity.** We naturally introduce the measurement of Monotonicity into PNS risk by deriving an upper bound of Eq. (4), which is given below.
**Proposition 3.1**.: _Given a test domain \(\mathcal{T}\), we define the sufficient and necessary risks as:_
\[\begin{split} SF_{t}(\mathbf{w},\phi):=\underbrace{\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{T}}\mathbb{E}_{\mathbf{c}\sim P_{t}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\neq y]}_{\text{sufficiency term}},\\ NC_{t}(\mathbf{w},\xi):=\underbrace{\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{T}}\mathbb{E}_{\overline{\mathbf{c}}\sim P_{t}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\overline{\mathbf{c}})=y]}_{\text{necessity term}},\end{split}\]
_and let the Monotonicity measurement be_
\[M_{t}^{\mathbf{w}}(\phi,\xi):=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{T}}\mathbb{E}_{\mathbf{c}\sim P_{t}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathbb{E}_{\overline{\mathbf{c}}\sim P_{t}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})=\mathrm{sign}(\mathbf{w}^{\top}\overline{\mathbf{c}})],\]
_then we have_
\[R_{t}(\mathbf{w},\phi,\xi)=M_{t}^{\mathbf{w}}(\phi,\xi)+2SF_{t}(\mathbf{w}, \phi)NC_{t}(\mathbf{w},\xi)\leq M_{t}^{\mathbf{w}}(\phi,\xi)+2SF_{t}(\mathbf{w},\phi). \tag{5}\]
The upper bound for the PNS risk in Eq. (5) consists of two terms: (i) the evaluator of sufficiency \(SF_{t}(\mathbf{w},\phi)\) and (ii) the Monotonicity measurement \(M_{t}^{\mathbf{w}}(\phi,\xi)\). In the upper bound, the necessity term \(NC_{t}(\mathbf{w},\xi)\) is absorbed into the Monotonicity measurement \(M_{t}^{\mathbf{w}}(\phi,\xi)\). Minimizing Eq. (4) through its upper bound (5) therefore accounts for the satisfaction of Monotonicity.
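To make these quantities concrete, the following Monte Carlo sketch (plain NumPy; the Gaussian inference distributions are an illustrative assumption, not the paper's model) estimates the sufficiency term, the necessity term, and the Monotonicity measurement for a single \((\mathbf{x},y)\) pair:

```python
# Estimate SF, NC, and M of Proposition 3.1 by sampling c ~ P(C|X=x)
# and cbar ~ P(Cbar|X=x), here taken as Gaussians for illustration.
import numpy as np

rng = np.random.default_rng(0)

def pns_terms(w, mu_c, mu_cbar, y, sigma=0.5, n=10_000):
    c = mu_c + sigma * rng.standard_normal((n, mu_c.size))
    cbar = mu_cbar + sigma * rng.standard_normal((n, mu_cbar.size))
    pred_c, pred_cbar = np.sign(c @ w), np.sign(cbar @ w)
    SF = np.mean(pred_c != y)            # sufficiency term
    NC = np.mean(pred_cbar == y)         # necessity term
    M = np.mean(pred_c == pred_cbar)     # Monotonicity measurement
    return SF, NC, M

w = np.array([1.0, -1.0])
SF, NC, M = pns_terms(w, np.array([1.0, -1.0]), np.array([-1.0, 1.0]), y=1)
print(SF, NC, M, "upper bound:", M + 2 * SF)   # R_t = M + 2*SF*NC <= M + 2*SF
```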
### OOD Generalization with PNS risk
In OOD generalization tasks, only source data collected from \(\mathcal{S}\) is provided, while the test domain \(\mathcal{T}\) is unavailable during the optimization process. As a result, it is not possible to directly evaluate the risk on the test domain, i.e., \(R_{t}(\mathbf{w},\phi,\xi)\). To estimate \(R_{t}(\mathbf{w},\phi,\xi)\), we follow a two-step process: (i) Firstly, since the test-domain distribution \(\mathcal{T}\) is not available during the training process, we establish a connection between the risk on the test domain \(R_{t}(\mathbf{w},\phi,\xi)\) and the risk on the source domain \(R_{s}(\mathbf{w},\phi,\xi)\) in Theorem 3.2. (ii) Furthermore, in practical scenarios where only a finite number of samples are available, we bound the gap between the expected risk on the domain distribution and the empirical risk on the source domain data in Theorem 3.3.
**Connecting the PNS risks, i.e., \(R_{t}(\mathbf{w},\phi,\xi)\) and \(R_{s}(\mathbf{w},\phi,\xi)\).** We introduce the \(\beta_{k}\) divergence (Ganin et al., 2016) as the divergence measurement and weight the \(R_{s}(\mathbf{w},\phi,\xi)\) term by variational approximation. The \(\beta_{k}\) divergence measures the distance between the domains \(\mathcal{T}\) and \(\mathcal{S}\) and is formally defined below.
\[\beta_{k}(\mathcal{T}\|\mathcal{S})=\left[\underset{(\mathbf{x},y)\sim \mathcal{S}}{\mathbb{E}}\left(\frac{\mathcal{T}(\mathbf{x},y)}{\mathcal{S}( \mathbf{x},y)}\right)^{k}\right]^{\frac{1}{k}}. \tag{6}\]
Based on \(\beta_{k}(\mathcal{T}\|\mathcal{S})\), we connect the risks on the source and test domains by Theorem 3.2.
**Theorem 3.2**.: _The risk on the test domain is bounded by the risk on the source domain, i.e.,_
\[R_{t}(\mathbf{w},\phi,\xi)\leq\lim_{k\rightarrow+\infty}\beta_{k}(\mathcal{T} \|\mathcal{S})([M_{s}^{\mathbf{w}}(\phi,\xi)]^{1-\frac{1}{k}}+2[SF_{s}( \mathbf{w},\phi)]^{1-\frac{1}{k}})+\eta_{t\setminus s}(\mathbf{X},Y),\]
_where_
\[\eta_{t\setminus s}(\mathbf{X},Y):=P_{t}(\mathbf{X}\times Y\notin\mathrm{supp }(\mathcal{S}))\cdot\sup R_{t\setminus s}(\mathbf{w},\phi,\xi).\]
_Here \(\mathrm{supp}(\mathcal{S})\) is the support set of source domain distribution \(P_{s}(\mathbf{X})\),_
\[R_{t\setminus s}(\mathbf{w},\phi,\xi):=\mathbb{E}_{(\mathbf{x},y)\sim P_{t}(\mathbf{X}\times Y\notin\mathrm{supp}(\mathcal{S}))}\big{[}\mathbb{E}_{\mathbf{c}\sim P_{t}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\neq y]\] \[+\mathbb{E}_{\overline{\mathbf{c}}\sim P_{t}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\overline{\mathbf{c}})=y]\big{]}.\]
In Theorem 3.2, \(\eta_{t\setminus s}(\mathbf{X},Y)\) describes the expected worst-case risk on the unknown region, i.e., on data samples \((\mathbf{x},y)\) that are not included in the source-domain support set \(\mathrm{supp}(\mathcal{S})\). Theorem 3.2 connects the source-domain risk and the test-domain risk. In the ideal case, where \(\mathbf{C}\) is the invariant representation, i.e., \(P_{s}(Y|\mathbf{C}=\mathbf{c})=P_{t}(Y|\mathbf{C}=\mathbf{c})\), the bound reduces to the form below.
\[R_{t}(\mathbf{w},\phi,\xi)\leq\lim_{k\rightarrow+\infty}\beta_{k}(\mathcal{T}_{\mathbf{X}}\|\mathcal{S}_{\mathbf{X}})([M_{s}^{\mathbf{w}}(\phi,\xi)]^{1-\frac{1}{k}}+2[SF_{s}(\mathbf{w},\phi)]^{1-\frac{1}{k}})+\eta_{t\setminus s}(\mathbf{X},Y). \tag{7}\]
When the observations \(\mathbf{X}\) in \(\mathcal{S}\) and \(\mathcal{T}\) share the same support set, the term \(\eta_{t\setminus s}(\mathbf{X},Y)\) approaches 0. In domain generalization tasks, the term \(\beta_{k}(\mathcal{T}_{\mathbf{X}}\|\mathcal{S}_{\mathbf{X}})\) is treated as a hyperparameter, as the test domain \(\mathcal{T}_{\mathbf{X}}\) is not available during training. However, in domain adaptation tasks where \(\mathcal{T}_{\mathbf{X}}\) is provided, \(\beta_{k}(\mathcal{T}_{\mathbf{X}}\|\mathcal{S}_{\mathbf{X}})\) and the test-domain Monotonicity measurement \(M_{t}^{\mathbf{w}}(\phi,\xi)\) can be directly estimated. Further details of the discussion on domain adaptation are provided in Appendix A.3.
**Connecting empirical risk to the expected risk.** In most real-world scenarios, where the distribution \(\mathcal{S}\) is not directly provided, we consider the relationship between the expected risk on the source domain distribution and the empirical risk on the source domain data \(\mathcal{S}^{n}:=\left\{(\mathbf{x}_{i},y_{i})\right\}_{i=1}^{n}\). We also define the empirical risks \(\widehat{SF}_{s}(\mathbf{w},\phi)\) and \(\widehat{M}_{s}^{\mathbf{w}}(\phi,\xi)\) as follows:
\[\widehat{SF}_{s}(\mathbf{w},\phi):=\mathbb{E}_{\mathcal{S}^{n}}\mathbb{E}_{\mathbf{c}\sim\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\neq y],\] \[\widehat{M}_{s}^{\mathbf{w}}(\phi,\xi):=\mathbb{E}_{\mathcal{S}^{n}}\mathbb{E}_{\mathbf{c}\sim\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})}\mathbb{E}_{\overline{\mathbf{c}}\sim\hat{P}_{s}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})=\mathrm{sign}(\mathbf{w}^{\top}\overline{\mathbf{c}})],\]
where \(\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})\) and \(\hat{P}_{s}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\) denote the distributions estimated on the dataset \(\mathcal{S}^{n}\).
Then, we use PAC-learning tools (Shalev-Shwartz & Ben-David, 2014) to formulate the upper bound of the gap between the empirical risk and the expected risk in the theorem below.
**Theorem 3.3**.: _Given parameters \(\phi\), \(\xi\), for any \(\mathbf{w}:\mathbb{R}^{d}\rightarrow\mathcal{Y}\) and prior distributions \(\pi_{\mathbf{C}}:=P_{s}(\mathbf{C})\) and \(\pi_{\overline{\mathbf{C}}}:=P_{s}(\overline{\mathbf{C}})\) which make \(\mathbb{E}_{\mathcal{S}}\mathrm{KL}(P_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})\|\pi_{\mathbf{C}})\) and \(\mathbb{E}_{\mathcal{S}}\mathrm{KL}(P_{s}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\|\pi_{\overline{\mathbf{C}}})\) both lower than a positive constant \(C\), then with probability at least \(1-\epsilon\) over the source domain data \(\mathcal{S}^{n}\),_
_(1) \(|SF_{s}(\mathbf{w},\phi)-\widehat{SF}_{s}(\mathbf{w},\phi)|\) is upper bounded by_
\[\mathbb{E}_{S^{n}}\mathrm{KL}(\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{ x})\|\pi_{\mathbf{C}})+\frac{\ln(n/\epsilon)}{2(n-1)}+C.\]
_(2) \(|M_{\mathbf{s}}^{\mathbf{w}}(\phi,\xi)-\widehat{M}_{\mathbf{s}}^{\mathbf{w}}( \phi,\xi)|\) is upper bounded by_
\[\mathbb{E}_{S^{n}}\mathrm{KL}(\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})\|\pi_{\mathbf{C}})+\mathbb{E}_{S^{n}}\mathrm{KL}(\hat{P}_{s}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\|\pi_{\overline{\mathbf{C}}})+\frac{\ln(n/\epsilon)}{2(n-1)}+2C.\]
Theorem 3.3 demonstrates that as the sample size increases and the terms with KL divergence decrease, the empirical risk on the source domain dataset becomes closer to the expected risk. Combining Theorems 3.2 and 3.3, we can evaluate the expected PNS risk on the test distribution using the empirical risk on the source dataset. In the next section, we present a representation learning objective based on the results of Theorems 3.2 and 3.3 and introduce the satisfaction of Exogeneity.
## 4 Learning to Minimizing PNS Risk
In this section, we propose a learning objective built upon the PNS risk that is used to capture the essential representation having a high PNS value from observational data.
### The Semantic Separability of PNS
In Section 3, we present PNS risk and Monotonicity measurement. Furthermore, to ensure that finding interpretable representations is feasible, we need to make certain assumptions that the representation of the data retains its semantic meaning under minor perturbations. Specifically, we define the variable \(\mathbf{C}\) as Semantic Separability relative to \(Y\) if and only if the following assumption is satisfied:
**Assumption 4.1** (\(\delta\)-Semantic Separability).: For any domain index \(d\in\{s,t\}\), the variable \(\mathbf{C}\) is \(\delta\)-semantic separable, if for any \(\mathbf{c}\sim P_{d}(\mathbf{C}|Y=y)\) and \(\mathbf{\bar{c}}\sim P_{d}(\mathbf{C}|Y\neq y)\), the following inequality holds almost surely: \(\|\mathbf{\bar{c}}-\mathbf{c}\|_{2}>\delta\).
\(\delta\)-Semantic Separability refers to the semantic meaning being distinguishable between \(\mathbf{c}\) and \(\overline{\mathbf{c}}\) when the distance between them is large enough, i.e., \(\|\overline{\mathbf{c}}-\mathbf{c}\|_{2}>\delta\). This assumption is widely accepted because, without it, nearly identical values would correspond to entirely different semantic information, leading to inherently unstable and chaotic data. If \(\mathbf{C}\) satisfies Assumption 4.1, then considering the PNS value under a small intervention, such as \(\|\mathbf{c}-\overline{\mathbf{c}}\|_{2}<\delta\), will lead to failure in representation learning. Therefore, during the learning process, we add a penalty enforcing \(\|\mathbf{c}-\overline{\mathbf{c}}\|_{2}>\delta\).
### Overall Objective
Depending on the diverse selections of \(P^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\), there are multiple potential PNS risks. In the learning process, we consider minimizing the risk in the worst-case scenario led by \(\overline{\mathbf{C}}\), i.e., the maximal PNS risk led by the selection of \(P^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\). Minimizing the upper bounds in Theorems 3.2 and 3.3 can be simulated by the following optimization process:
\[\min_{\phi,\mathbf{w}}\max_{\xi}\ \ \widehat{M}_{s}^{\mathbf{w}}(\phi,\xi)+\widehat{SF}_{s}(\mathbf{w},\phi)+\lambda L_{\mathrm{KL}},\ \ \ \text{subject to}\ \ \ \|\mathbf{c}-\overline{\mathbf{c}}\|_{2}>\delta, \tag{8}\]

where \(L_{\mathrm{KL}}:=\mathbb{E}_{S^{n}}\mathrm{KL}(\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{x})\|\pi_{\mathbf{C}})+\mathbb{E}_{S^{n}}\mathrm{KL}(\hat{P}_{s}^{\xi}(\overline{\mathbf{C}}|\mathbf{X}=\mathbf{x})\|\pi_{\overline{\mathbf{C}}})\). The constraint \(\|\mathbf{c}-\overline{\mathbf{c}}\|_{2}>\delta\) is imposed because of the Semantic Separability assumption. We name the algorithm optimizing Eq. (8) CaSN (Causal Representation of Sufficiency and Necessity).
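A schematic single training step for Eq. (8) might look as follows. This is a sketch assuming PyTorch, with the indicators replaced by differentiable logistic surrogates, closed-form Gaussian KL terms to a standard-normal prior, and a hinge penalty for the constraint; the released CaSN implementation may differ in these choices.

```python
# One min-max step: descend on (phi, w), ascend on xi, following Eq. (8)
# with surrogate losses in place of the 0-1 indicators.
import torch

D, d, delta, lam = 8, 3, 1.0, 0.1
enc_c = torch.nn.Linear(D, 2 * d)      # outputs (mean, log-std) for C
enc_cbar = torch.nn.Linear(D, 2 * d)   # outputs (mean, log-std) for Cbar
w = torch.nn.Parameter(torch.randn(d))

opt_min = torch.optim.Adam(list(enc_c.parameters()) + [w], lr=1e-3)
opt_max = torch.optim.Adam(enc_cbar.parameters(), lr=1e-3)

def sample(enc, x):
    mean, logstd = enc(x).chunk(2, dim=-1)
    kl = 0.5 * (mean**2 + torch.exp(2 * logstd) - 2 * logstd - 1).sum(-1).mean()
    return mean + torch.exp(logstd) * torch.randn_like(mean), kl

def objective(x, y):
    c, kl_c = sample(enc_c, x)
    cbar, kl_cbar = sample(enc_cbar, x)
    sf = torch.nn.functional.softplus(-(c @ w) * y).mean()        # surrogate SF
    mono = torch.sigmoid((c @ w) * (cbar @ w)).mean()             # surrogate M
    margin = torch.relu(delta - (c - cbar).norm(dim=-1)).mean()   # ||c - cbar|| > delta
    return mono + sf + lam * (kl_c + kl_cbar) + margin

x = torch.randn(32, D)
y = torch.randint(0, 2, (32,)).float() * 2 - 1
opt_min.zero_grad(); objective(x, y).backward(); opt_min.step()   # min over (phi, w)
opt_max.zero_grad(); (-objective(x, y)).backward(); opt_max.step()  # max over xi
```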
### Satisfaction of Exogeneity
In the previous sections, we introduced an objective that targets Monotonicity. However, identifying PNS values requires not only Monotonicity but also Exogeneity. In this part, we discuss the satisfaction of Exogeneity and provide solutions for finding the representation under the three causal assumptions below.
**Assumption 4.2**.: The Exogeneity of \(\mathbf{C}\) holds if and only if the following invariance conditions are satisfied separately under the three causal assumptions in Figure 3(b): (1) \(\mathbf{X}\perp Y|\mathbf{C}\); (2) \(\mathbf{C}\perp V\); (3) \(V\perp Y|\mathbf{C}\) (for the assumptions in Figure 3(b).1, 2, and 3, respectively).
The above three assumptions are commonly adopted in the OOD generalization literature (Lu et al., 2021; Liu et al., 2021; Ahuja et al., 2021). To satisfy Exogeneity, we use different objective functions to identify \(\mathbf{C}\) under the three invariant causal assumptions. For Assumption 4.2 (1), we provide the following theorem showing the equivalence between optimizing Eq. (8) and identifying an invariant representation.
**Theorem 4.3**.: _The optimal solution of learned \(\mathbf{C}\) is obtained by optimizing the following objective (the key part of the objective in Eq. (8))_
\[\min_{\phi,\mathbf{w}}\widehat{SP_{s}}(\mathbf{w},\phi)+\lambda\mathbb{E}_{ \mathcal{S}^{n}}\mathrm{KL}(\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}=\mathbf{ x})\|\pi_{\mathbf{C}})\]
_satisfies the conditional independence \(\mathbf{X}\perp Y|\mathbf{C}\)._
Theorem 4.3 (details of the proof are given in Appendix E) indicates that optimizing the overall objective in Eq. (8) implicitly makes \(\mathbf{C}\) satisfy the property of Exogeneity under the causal assumption \(\mathbf{X}\perp Y|\mathbf{C}\). For Assumption 4.2 (2), to enforce the invariance assumption \(\mathbf{C}\perp V\) (Li et al., 2018), we introduce the following Maximum Mean Discrepancy (MMD) penalty into the minimization process in Eq. (8),
\[L_{\text{mmd}}=\sum_{v_{i}}\sum_{v_{j}}\mathbb{E}_{\mathbf{x}_{i}\sim P( \mathbf{X}|V=v_{i})}\mathbb{E}_{\mathbf{c}_{i}\sim\hat{P}_{s}^{\phi}(\mathbf{ C}|\mathbf{X}=\mathbf{x}_{i})}\mathbb{E}_{\mathbf{x}_{j}\sim P(\mathbf{X}|V=v_{j})} \mathbb{E}_{\mathbf{c}_{j}\sim\hat{P}_{s}^{\phi}(\mathbf{C}|\mathbf{X}= \mathbf{x}_{j})}\left\|\mathbf{c}_{i}-\mathbf{c}_{j}\right\|_{2}.\]
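A Monte-Carlo estimate of this penalty amounts to the mean pairwise distance between representation samples drawn from different domains. The sketch below is one way to compute it (summing over unordered domain pairs, which matches the double sum above up to a constant factor); the function name and normalization are ours.

```python
import torch

def mmd_penalty(reps_by_domain):
    """Mean pairwise distance between representations of different domains.

    reps_by_domain : list of tensors; reps_by_domain[v] has shape (n_v, dim)
                     with rows c ~ P^phi(C | X = x), x ~ P(X | V = v).
    """
    total, pairs = 0.0, 0
    for i, ci in enumerate(reps_by_domain):
        for cj in reps_by_domain[i + 1:]:
            total = total + torch.cdist(ci, cj).mean()   # E ||c_i - c_j||_2
            pairs += 1
    return total / max(pairs, 1)
```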
For Assumption 4.2 (3), to specify the representation of \(\mathbf{C}\) and allow Exogeneity when the assumption \(V\perp Y|\mathbf{C}\) holds, we introduce the IRM-based (Arjovsky et al., 2019) penalty into Eq. (8).
\[L_{\text{irm}}=\sum_{v}E_{(\mathbf{x},y)\sim P_{s}(\mathbf{X},Y|V=v)}\left\| \nabla_{w|w=1.0}\mathbb{E}_{\mathbf{c}\sim\hat{P}_{s}^{\phi}(\mathbf{C}| \mathbf{X}=\mathbf{x})}\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c}) \neq y]\right\|^{2}\]
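In practice the 0-1 loss \(\mathrm{I}[\mathrm{sign}(\mathbf{w}^{\top}\mathbf{c})\neq y]\) is not differentiable, so an implementation would substitute a smooth surrogate. The sketch below follows the IRMv1 recipe of Arjovsky et al. (2019) with a logistic surrogate; this substitution is our assumption, not a detail stated in the text.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits_by_domain, labels_by_domain):
    """IRMv1-style penalty: squared gradient of the per-domain risk with
    respect to a dummy scale w = 1.0 multiplying the logits.

    logits_by_domain[v] : tensor (n_v,) of scores w^T c for domain v
    labels_by_domain[v] : tensor (n_v,) of 0/1 labels
    """
    penalty = 0.0
    for logits, y in zip(logits_by_domain, labels_by_domain):
        scale = torch.ones(1, requires_grad=True)       # dummy multiplier w = 1.0
        risk = F.binary_cross_entropy_with_logits(logits * scale, y.float())
        (grad,) = torch.autograd.grad(risk, scale, create_graph=True)
        penalty = penalty + grad.pow(2).sum()           # ||grad_{w=1} risk||^2
    return penalty
```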
Notably, addressing the invariant-learning problem and satisfying Exogeneity under Assumptions 4.2 (2) and (3) requires additional domain information, such as the domain index.
## 5 Related Work
In this section, we review the progress on OOD prediction tasks. One research perspective on OOD prediction comes from a causality viewpoint (Zhou et al., 2021; Shen et al., 2021). Based on the postulate that causal variables are invariant and less vulnerable to distribution shifts, a number of methods identify the invariant causal features behind the observed data by enforcing invariance in the learning process. Different works consider causality across multiple domains in different ways. One line of research, the causal-inference-based methods, models the invariance across domains through causal explanations, building a causal graph of the data-generating process (Pfister et al., 2019; Rothenhausler et al., 2018; Heinze-Deml et al., 2018; Gamella and Heinze-Deml, 2020; Oberst et al., 2021; Zhang et al., 2015). Another line of methods considers invariant learning from a causal perspective, formulating invariant causal mechanisms through representations rather than causal variables. Invariant risk minimization (IRM) methods (Arjovsky et al., 2019) provide a solution for learning invariant variables and functions. Under this viewpoint, pioneering works (Ahuja et al., 2020; Chen et al., 2022; Krueger et al., 2021; Lu et al., 2021; Ahuja et al., 2021; Lin et al., 2022) further extend the IRM framework by considering game theory, variance penalization, information theory, and nonlinear prediction functions, and some recent works apply the IRM framework to large neural networks (Jin et al., 2020; Gulrajani and Lopez-Paz, 2020). In this paper, unlike the aforementioned works, which aim to learn invariant information, we argue that invariance alone is not sufficient for the generalization task. We therefore focus on extracting the most essential information from observations, grounded in the theory of sufficient and necessary causation. In the main text, we only provide a review of OOD prediction; we further elaborate on the connections with other lines of work, such as domain adaptation, causal discovery, representation learning, causal disentanglement, and contrastive learning, in Appendix F.
## 6 Experiments
In this section, we verify the effectiveness of CaSN using synthetic and real-world OOD datasets.
### Setups
**Synthetic data.** The effectiveness of the proposed method is demonstrated by examining whether it can learn the essential information (i.e., sufficient and necessary causes) from source data. To this end, based on the causal graph in Figure 3(b).1, we design a synthetic data generator that produces a sample set \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) with corresponding labels \(\{y_{i}\}_{i=1}^{n}\). Four types of information are considered: (i) SN: sufficient and necessary cause \(\text{sn}_{i}\) of \(y_{i}\). The value of \(y_{i}\) is directly calculated as \(y_{i}=\text{sn}_{i}\bigoplus B(0.15)\), where \(\bigoplus\) represents the XOR operation and \(B(0.15)\) is a Bernoulli distribution with probability \(0.15\) of generating \(1\). (ii) SF: sufficient and unnecessary cause \(\text{sf}_{i}\) of \(y_{i}\). \(\text{sf}_{i}\) is a transformation of \(\text{sn}_{i}\): we set \(\text{sf}_{i}=B(0.1)\) when \(\text{sn}_{i}=0\), and \(\text{sf}_{i}=\text{sn}_{i}\) when \(\text{sn}_{i}=1\). SF is designed to decrease the probability of necessity (i.e., \(P(Y=0|\text{SN}=0)\)). (iii) NC: insufficient and necessary cause \(\text{nc}_{i}\) of \(y_{i}\). We set \(\text{nc}_{i}=\text{I}(\text{sn}_{i}=1)\cdot B(0.9)\). NC is designed to decrease the probability of sufficiency (i.e., \(P(Y=1|\text{SN}=1)\)). (iv) Spurious: spurious correlation information \(\text{sp}_{i}\), generated by \(s*\text{sn}_{i}*\mathbf{1}_{d}+(1-s)\mathcal{N}(0,1)\), where \(d\) denotes the dimension and \(s\) denotes the **spurious degree**; a larger \(s\) makes the spurious correlation in the data \(\mathbf{x}\) stronger. We select \(d=5\) and \(s\in\{0.1,0.7\}\) in the synthetic generative process and develop a non-linear function to generate \(\mathbf{x}\) from \([\text{sn}_{i},\text{sf}_{i},\text{nc}_{i},\text{sp}_{i}]\). We use Distance Correlation (Jones et al., 1995) as the evaluation metric to measure the correlation between the learned representation \(\mathbf{C}\) and the ground-truth information (i.e., SN, SF, NC, SP). We also provide an ablation of CaSN without the Monotonicity evaluator, CaSN(-m), in the comparison results, which evaluates the effectiveness of CaSN.
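The generator can be reproduced in a few lines. In the sketch below, the prior over SN and the exact non-linear map to \(\mathbf{x}\) are not specified in the text, so a Bernoulli(0.5) prior and a fixed random two-layer map are our assumptions.

```python
import numpy as np

def make_synthetic(n, d=5, s=0.7, seed=0):
    """Synthetic generator following the description above."""
    rng = np.random.default_rng(seed)
    sn = rng.binomial(1, 0.5, n)                          # sufficient & necessary (assumed prior)
    y = sn ^ rng.binomial(1, 0.15, n)                     # y = sn XOR B(0.15)
    sf = np.where(sn == 0, rng.binomial(1, 0.1, n), sn)   # sufficient, unnecessary
    nc = (sn == 1) * rng.binomial(1, 0.9, n)              # insufficient, necessary
    sp = s * sn[:, None] * np.ones(d) + (1 - s) * rng.standard_normal((n, d))
    feats = np.column_stack([sn, sf, nc, sp]).astype(float)
    W1 = rng.standard_normal((feats.shape[1], 16))        # assumed non-linear map to x
    W2 = rng.standard_normal((16, 8))
    x = np.tanh(feats @ W1) @ W2
    return x, y, dict(sn=sn, sf=sf, nc=nc, sp=sp)
```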
**Performance on OOD prediction task.** The proposed method CaSN is implemented on top of the DomainBed codebase (Gulrajani and Lopez-Paz, 2020). We provide three implementations of our method, namely CaSN, CaSN(irm), and CaSN(mmd), with the same architecture but using Eq. (8), Eq. (8) \(+L_{\text{irm}}\), and Eq. (8) \(+L_{\text{mmd}}\) as their final objectives, respectively. We compare CaSN with several common baselines, including **ERM** (Vapnik, 1999), **IRM** (Arjovsky et al., 2019), **GroupDRO** (Sagawa et al., 2019), **Mixup** (Xu et al., 2020), **MLDG** (Li et al., 2018a), **MMD** (Li et al., 2018b), **DANN** (Ganin et al., 2016), and **CDANN** (Li et al., 2018c), where the best accuracy scores are directly given by training-domain validation in Gulrajani and Lopez-Paz (2020). We test the performance on the commonly used ColoredMnist (Ahuja et al., 2020), PACS (Li et al., 2017), and VLCS (Fang et al., 2013) datasets. During the experiments, we adjust the hyperparameters provided by DomainBed and the extra hyperparameters \(\delta\) and \(\lambda\) in CaSN. The reported results are the mean and standard error of accuracy over 2 random repetitions of 40 randomly selected hyperparameter configurations. We also provide additional experiments on the large-scale spurious-correlation dataset SpuCo (Joshi et al., 2023). Owing to the page limit, more experimental setups and results are provided in Appendix B.
### Learning Sufficient and Necessary Causal Representations
We conducted experiments on synthetic data to verify the effectiveness of the learned representation. In the experiments, we use a single domain with different degrees of spurious correlation. The experiments aim to demonstrate the properties of the learned representation and answer the following question:
**Does CaSN capture the sufficient and necessary causes?** We present the results in Figure 2 (a) and (b), which show the distance correlation between the learned representation and the four ground truths (SN, SF, NC, and Spurious). A higher distance correlation indicates a better representation. From both Figure 2 (a) and (b), we find that CaSN achieves higher distance correlations with the ground truths (e.g., SN, SF, and NC) and lower correlations with spurious factors compared to other methods. As an example, consider Figure 2 (a) with \(\delta=1.1\): we obtain distance correlations of \(\{0.90,0.65,0.67,0.13\}\) for SN, SF, NC, and the spurious factors, respectively. When \(\delta\) is set to a large value such as \(1.1\), CaSN captures more of the essential information SN. However, the result of CaSN degrades when \(\delta=0.1\), which suggests that CaSN tends to capture the most essential information when \(\delta\) is set to a larger value. This phenomenon aligns with Semantic Separability. We then compare Figure 2 (a) and (b). As an example, when \(\delta=1.1\), CaSN achieves distance correlations of 0.90 and 0.91 for SN at \(s=0.1\) and \(s=0.7\), respectively, while the distance correlation with the spurious information is 0.13 and 0.37 for \(s=0.1\) and \(s=0.7\), respectively. The results show that when the data contain stronger spurious correlations, CaSN tends to pick up some of that spurious information, but the algorithm is still able to recover the sufficient and necessary causes.

Figure 2: The synthetic results validating the properties of the learned representation under different spurious degrees in the data, \(s=0.1\) for (a) and \(s=0.7\) for (b); the x-axis shows the different types of causal information and the y-axis the choice of \(\delta\). (c) The results of the feature identification when \(s=0.7\).
**Ablation study.** In Figure 2(c), we compare CaSN with CaSN(-m), the variant that removes the Monotonicity measurement, on synthetic data. The figure shows the distance correlations recorded over 5 experiments. The green bars indicate the distance correlation between the learned representation and the ground truth for CaSN, which captures the desired information SN better than the alternatives. As the blue bars show, CaSN(-m) captures causal information (e.g., SN, SF, and NC) better than spurious correlations, but it cannot stably single out SN relative to SF. CaSN(-m) can be regarded as a variant that only enforces Exogeneity. These results support the theoretical findings in Theorem 4.3 and show the effectiveness of introducing the Monotonicity term.
### Generalization to Unseen Domains
The results of the OOD generalization experiments on the PACS and VLCS datasets are presented in Table 1. Due to the page limit, we provide the results on ColoredMNIST in Table 2. The baseline method results are from Kilbertus et al. (2018). The proposed CaSN method exhibits good OOD generalization capability on both PACS and VLCS. In Table 1, CaSN achieves the best average performance over 4 domains, \(86.0\), on PACS. On VLCS, CaSN(irm) achieves a good average performance of \(78.2\), close to the best state-of-the-art performance achieved by DANN. For worst-domain test accuracies, the proposed method CaSN outperforms all the baseline methods. An intuitive explanation for the good performance of CaSN is that it aims to identify and extract the most essential information from observational data, excluding unnecessary or insufficient information from the optimal solution. This enables CaSN to generalize better on the worst domain.
## 7 Conclusion
In this paper, we consider the problem of learning causal representations from observational data for generalization on OOD prediction tasks. We propose a risk based on the probability of sufficient and necessary causes (Pearl, 2009), which is applicable to OOD generalization tasks. The learning principle leads to practical algorithms for causal representation learning. Theoretical results on the computability of PNS from the source data and on the generalization ability of the learned representation are presented. Experimental results demonstrate its effectiveness on OOD generalization.
## 8 Acknowledgement
We are thankful to Juzheng Miao, Xidong Feng, Kun Lin and Jingsen Zhang for their constructive suggestions and efforts on OOD generalization experiments and for offering computation resources. We also thank Pengfei Zheng for his helpful discussions and the anonymous reviewers for their constructive comments on an earlier version of this paper.
| **Algorithm** | **A** | **C** | **P** | **S** | **Avg** | **Min** | **C** | **L** | **S** | **V** | **Avg** | **Min** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ERM | 84.7 ± 0.4 | 80.8 ± 0.6 | 97.2 ± 0.3 | 79.3 ± 1.0 | 85.5 | 79.3 | 97.7 ± 0.4 | 64.3 ± 0.9 | 73.4 ± 0.5 | 74.6 ± 1.3 | 77.5 | 64.3 |
| IRM | 84.8 ± 1.3 | 76.4 ± 1.1 | 96.7 ± 0.6 | 76.1 ± 1.0 | 83.5 | 76.4 | 98.6 ± 0.1 | 64.9 ± 0.9 | **73.4 ± 0.6** | **73.3 ± 0.9** | 78.5 | 64.9 |
| GroupDRO | 83.5 ± 0.9 | 79.1 ± 0.6 | 96.7 ± 0.3 | 78.3 ± 2.0 | 84.4 | 79.1 | 97.3 ± 0.3 | 63.4 ± 0.9 | 69.5 ± 0.8 | 76.7 ± 0.7 | | 63.4 |
| Mixup | 86.1 ± 0.5 | 78.9 ± 0.8 | **97.6 ± 0.1** | 75.8 ± 1.8 | 84.6 | 78.9 | 98.3 ± 0.6 | 64.8 ± 1.0 | 72.1 ± 0.5 | 74.3 ± 0.8 | 77.4 | 64.8 |
| MLDG | 86.4 ± 0.8 | 77.4 ± 0.8 | 79.5 ± 0.4 | 73.5 ± 2.3 | 83.6 | 77.4 | 97.4 ± 0.2 | 65.2 ± 0.7 | 70.1 ± 0.4 | 75.3 ± 0.3 | 77.5 | 65.2 |
| MMD | 86.1 ± 1.4 | 79.4 ± 0.9 | 96.6 ± 0.2 | 76.5 ± 0.5 | 84.6 | 79.4 | 97.5 ± 0.1 | 64.0 ± 1.1 | 72.8 ± 0.2 | 75.3 ± 3.3 | 77.5 | 64.0 |
| DANN | 86.4 ± 0.8 | 77.4 ± 0.8 | 97.3 ± 0.4 | 75.3 ± 2.3 | 83.6 | 77.4 | **99.0** | 83.1 ± 1.6 | 73.1 ± 0.3 | 77.2 ± 0.6 | **78.6** | 65.1 |
| CDANN | 84.6 ± 1.5 | 75.5 ± 0.9 | 96.8 ± 0.3 | 73.5 ± 0.6 | 82.6 | 75.5 | 75.1 ± 0.3 | 65.1 ± 1.2 | 70.7 ± 1.1 | | 77.5 | 65.1 |
| **CaSN (base)** | **87.1 ± 0.6** | 80.2 ± 0.6 | 96.2 ± 0.8 | 80.4 ± 0.2 | **86.0** | 80.2 | 97.5 ± 0.6 | 64.8 ± 1.9 | 70.2 ± 0.5 | 76.4 ± 1.7 | 77.2 | 64.8 |
| **CaSN (irm)** | 82.1 ± 0.3 | 77.9 ± 1.8 | 93.3 ± 0.8 | **80.6 ± 1.0** | 83.5 | 77.9 | 97.8 ± 0.3 | 65.7 ± 0.8 | 72.3 ± 0.4 | 77.0 ± 1.4 | 78.2 | 65.7 |
| **CaSN (mmd)** | **84.7 ± 1.2** | | **95.7 ± 2.0** | 80.2 ± 0.6 | 83.5 | **81.4** | **89.2 ± 0.7** | **65.9 ± 0.6** | **71.2 ± 0.3** | 76.9 ± 0.7 | 78.1 ± 0.5 | **65.9** |

Table 1: Results on the PACS and VLCS datasets. The first six result columns are PACS (domains A, C, P, S, average, and worst domain); the last six are VLCS (domains C, L, S, V, average, and worst domain). |
2309.03454 | Dynamical phase transition and scaling in the chiral clock Potts chain | Based on time-dependent variational principle (TDVP) techniques, we
investigate the dynamical critical behavior of quantum three-state Potts chains
with chiral interactions. Using Loschmidt echo, order parameter, and
entanglement entropy as indicators, we show that as the chiral interaction
$\theta$ increases, the first critical time $t_{1}^{*}$ shifts towards lower
values, indicating a chirality-enhanced dynamical phase transition. Moreover,
we perform dynamical scaling for the Loschmidt echo and obtain the critical
exponent $\nu$ at the non-conformal critical point. The results show that as
the chiral interaction $\theta$ increases, the correlation length exponent
$\nu$ decreases, which is similar to the long-range interaction case. Finally,
we give a simple physical argument to understand the above numerical results.
This work provides a useful reference for further research on many-body physics
out of equilibrium with chiral interaction. | Xue-Jia Yu | 2023-09-07T02:28:16Z | http://arxiv.org/abs/2309.03454v2 | # Dynamical phase transition and scaling in the chiral clock Potts chain
###### Abstract
Based on time-dependent variational principle (TDVP) techniques, we investigate the dynamical critical behavior of quantum three-state Potts chains with chiral interactions. Using the Loschmidt echo, order parameter, and entanglement entropy as indicators, we show that as the chiral interaction \(\theta\) increases, the first critical time \(t_{1}^{*}\) shifts towards lower values, indicating a chirality-enhanced dynamical phase transition. Moreover, we perform dynamical scaling for the Loschmidt echo and obtain the critical exponent \(\nu\) at the non-conformal critical point. The results show that as the chiral interaction \(\theta\) increases, the correlation length exponent \(\nu\) decreases, which is similar to the long-range interaction case. Finally, we give a simple physical argument to understand the above numerical results. This work provides a useful reference for further research on many-body physics out of equilibrium with chiral interaction.
## I Introduction
Understanding exotic phases and phase transitions in many-body systems is a fundamental challenge in the field of condensed matter and statistical physics [1; 2; 3; 4]. While extensive studies have been focused on equilibrium phase transitions [5; 6; 7; 8; 9], less attention has been paid to the behavior of quantum many-body systems out of equilibrium [10; 11]. Dynamical quantum phase transition (DQPT) [12; 13; 14; 15; 16; 17; 18] is a type of non-equilibrium phase transition that occurs at critical times \(t^{*}\) during real-time evolution, characterized by non-analyticities of the rate function after a sudden quench of the system [17]. Analogous to equilibrium phase transitions that arise from singularities in parameter space, DQPT originates from singularities in time [12; 16; 17]. Recently, there has been a surge of interest in the study of DQPT, including investigations into critical behavior [19; 20; 21; 22], order parameters [23; 24; 25; 26; 27], spontaneously broken symmetries [14], and experimental realizations across a variety of platforms [28; 29; 30; 31; 32; 33; 34; 35].
On the one hand, quantum phase transitions have traditionally been understood to possess relativistic and conformal invariance, allowing for significant analytical progress [36; 37; 38]. However, there has been recent debate surrounding the commensurate-incommensurate phase transition [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. This debate has been revisited owing to the potential of neutral long-range interacting Rydberg atom arrays confined in optical tweezers to serve as a tunable platform for observing a variety of quantum phenomena [50; 51]. Similarly, the \(\mathbb{Z}_{3}\) clock model with chiral interactions also exhibits such a non-conformal chiral transition [39; 52; 53; 54; 55; 56; 57]. An intriguing question within the chiral clock model therefore arises: what is the relationship between the effects of chiral and long-range interactions on quantum critical behavior? [39; 42; 58].
On the other hand, previous studies of DQPT across different quantum critical points have been carried out in many systems, such as symmetry-breaking critical points [12; 14], topological phase transitions [59; 60; 22], an exotic deconfined quantum critical point [27], and even non-Hermitian critical points [61; 62; 21]. Although the link between DQPT and many physical observables has been established [16; 17; 18], a thorough understanding of this transition still calls for more studies. To the best of our knowledge, whether DQPT can occur after a quench across a non-conformal critical point, and what its dynamical scaling behavior would be, has so far received little attention; it is therefore worthwhile to study and demonstrate the possible existence of DQPT in systems with a non-conformal critical point.
To answer the above two questions, in this work we explore the dynamical behavior of a \(\mathbb{Z}_{3}\) symmetric quantum spin chain with chiral interactions. Using TDVP simulations [63; 64; 65; 66] to examine the effect of chiral interactions on the system, we show that introducing chiral interactions can enhance the dynamical phase transition. Furthermore, our analysis of the Loschmidt echo reveals that the correlation length critical exponent decreases as the chiral interaction increases, in agreement with previous studies of long-range interacting Rydberg atom arrays. The results imply that chiral and long-range interactions have similar effects on quantum critical behavior.
The paper is organized as follows: Section II presents the lattice model of the quantum Potts chain with chiral interaction, the numerical method used, and the physical quantities that display DQPT. In Section III, we provide benchmark results for DQPT in nearest neighbor quantum Potts chains and chirality-enhanced dynamical phase transitions. Section IV presents the dynamical scaling for the Loschmidt echo to obtain critical behavior in the chiral transition, as well as a simple physical explanation for our numerical observations. Finally, our conclusion is presented in Section V. Appendixes provide additional data for our numerical calculations.
## II Model and method
The system of our study is a quantum chiral clock Potts chain of \(L\) spins (see Fig. 1), described by the following Hamiltonian [39; 52; 67; 68; 69]
\[H_{CCM}=-J\sum_{j=1}^{N}\sigma_{j}^{\dagger}\sigma_{j+1}e^{-i\theta}-f\sum_{j=1}^{N}\tau_{j}^{\dagger}e^{-i\phi}+\mathrm{H.c.}, \tag{1}\]
where \(\phi\) and \(\theta\) define two types of chiral interaction (temporal and spatial, respectively). The main text focuses on the \(\phi=0\) case, where time-reversal and spatial parity symmetry are both preserved but the chirality is still present as a purely spatial one (for the temporal case, see Appendix C). \(J\) is the interaction strength, and \(f\) represents the external transverse field. The Hilbert space is \((\mathbb{C}^{3})^{\otimes N}\). \(\sigma\) dictates the direction of the clock hand, and \(\tau\) rotates the clock hand clockwise through a discrete angle \(2\pi/3\), as shown in Fig. 1(a). \(\sigma\) and \(\tau\) satisfy \(\sigma_{i}^{3}=I\), \(\tau_{i}^{3}=I\), and \(\sigma_{i}\tau_{j}=\omega^{\delta_{ij}}\tau_{j}\sigma_{i}\), where \(\omega=e^{2\pi i/3}\). A global \(\mathbb{Z}_{3}\) transformation, represented by \(G=\prod_{i}\tau_{i}\), leaves the Hamiltonian invariant. The operators are defined by
\[\sigma=\begin{pmatrix}1&0&0\\ 0&\omega&0\\ 0&0&\omega^{2}\end{pmatrix},\quad\tau=\begin{pmatrix}0&1&0\\ 0&0&1\\ 1&0&0\end{pmatrix}. \tag{2}\]
The introduction of chiral interactions in the Hamiltonian has a significant impact on the phase diagram (as shown in Fig. 1(b)) and has been extensively studied in the literature. Specifically, in the absence of chiral interactions (\(\theta=\phi=0\)), the model reduces to the standard nearest-neighbor quantum three-state Potts chain. In this case, for \(f<<J\), the system is in an ordered phase that breaks the \(\mathbb{Z}_{3}\) symmetry, while for \(f>>J\), it is in a disordered paramagnetic phase. The Fradkin-Kadanoff transformation demonstrates that the system exhibits a continuous phase transition from the Potts-ordered topological phase to a trivial disordered phase, with a correlation length exponent of \(\nu=5/6\) [52; 53; 56; 57]. In the presence of non-zero chiral interactions, the effective interaction can induce floating phases that are incommensurate with the lattice periodicity, and the transition between gapped states belongs to an unconventional non-conformal chiral universality class. Furthermore, the model is known to be integrable for a two-parameter family of couplings along the line \(f\text{cos}(3\phi)=J\text{cos}(3\theta)\) and is exactly solvable there.
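For concreteness, the clock operators and the Hamiltonian of Eq. (1) can be assembled for a small chain as in the following numpy sketch (our own illustration, for cross-checks at small \(N\) only). Note that with the matrices exactly as printed above, the algebra reads \(\sigma\tau^{\dagger}=\omega\tau^{\dagger}\sigma\), equivalently \(\sigma\tau=\bar{\omega}\tau\sigma\), which matches the stated relation up to the convention choice \(\tau\leftrightarrow\tau^{\dagger}\) (\(\omega\leftrightarrow\bar{\omega}\)).

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
sigma = np.diag([1, w, w**2])
tau = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]], dtype=complex)

# Clock algebra: sigma^3 = tau^3 = 1, plus the commutation relation.
assert np.allclose(np.linalg.matrix_power(sigma, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(tau, 3), np.eye(3))
assert np.allclose(sigma @ tau.conj().T, w * tau.conj().T @ sigma)

def site_op(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == j else np.eye(3))
    return out

def h_ccm(N, J=1.0, f=1.0, theta=0.0, phi=0.0):
    """Dense chiral clock Hamiltonian of Eq. (1), periodic boundary conditions."""
    H = np.zeros((3**N, 3**N), dtype=complex)
    for j in range(N):
        bond = site_op(sigma.conj().T, j, N) @ site_op(sigma, (j + 1) % N, N)
        H += -J * np.exp(-1j * theta) * bond
        H += -f * np.exp(-1j * phi) * site_op(tau.conj().T, j, N)
    return H + H.conj().T   # add the Hermitian conjugate (the "+H.c." of Eq. (1))
```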
To exhibit DQPT, we first consider the Loschmidt amplitude (return amplitude) introduced by Heyl et al. [12]
\[\mathcal{L}(t)=\langle\psi_{i}|\psi(t)\rangle=\langle\psi_{i}|e^{-iH_{f}t}|\psi_{i}\rangle, \tag{3}\]
where \(|\psi_{i}\rangle\) denotes the ground state of the pre-quench Hamiltonian \(H_{i}\) (or, more generally, an arbitrary initial state), which evolves in time under the quenched Hamiltonian \(H_{f}\). The structure of the Loschmidt amplitude resembles the boundary partition function of statistical mechanics, except that the time-evolution operator makes it a complex quantity instead of a real one. This analogy suggests introducing the effective free energy (return rate)
\[r(t)=-\lim_{L\rightarrow\infty}\frac{1}{L}\ln\left|\mathcal{L}(t)\right|^{2}, \tag{4}\]
Similar to equilibrium statistical physics, where phase transitions are identified by singularities in the free energy at certain values of the control parameter, DQPT is characterized by non-analytic cusps in the return rate \(r(t)\) at critical times \(t^{*}\). In practice, a specific non-equilibrium protocol, such as a quantum quench, can be employed to observe DQPT by driving the system out of equilibrium. In the case of a quantum quench, the initial state is prepared as the ground state of an initial Hamiltonian (\(H_{0}\)), and then the control parameter of the Hamiltonian is suddenly switched to a different value, yielding the final Hamiltonian (\(H_{f}\)).
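For small chains the return rate can be evaluated exactly by full diagonalization, which is a useful cross-check on the TDVP data; the sketch below (our own, reusing `h_ccm` from the snippet above) implements the quench protocol literally.

```python
import numpy as np
from scipy.linalg import eigh

def return_rate(H0, H1, times):
    """Return rate r(t) of Eq. (4) for a quench H0 -> H1 (exact, small N)."""
    psi0 = eigh(H0)[1][:, 0]                  # ground state of H0
    evals, U = eigh(H1)
    amp = U.conj().T @ psi0                   # psi0 in the eigenbasis of H1
    N = round(np.log(H0.shape[0]) / np.log(3))
    r = []
    for t in times:
        L = amp.conj() @ (np.exp(-1j * evals * t) * amp)  # <psi0|e^{-iH1 t}|psi0>
        r.append(-np.log(abs(L) ** 2) / N)
    return np.array(r)

# Example: PM -> Potts-FM quench on N = 6 sites (cusps are rounded at finite N).
H0 = h_ccm(6, J=0.0, f=1.0)
H1 = h_ccm(6, J=1.0, f=0.0)
r = return_rate(H0, H1, np.linspace(0.0, 2.0, 400))
```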
The relationship between the order parameter and DQPT is an intriguing topic that has recently attracted considerable interest. To explore this relation at the non-conformal quantum critical point, we introduce an order parameter defined as \(Q(t)=\frac{1}{L}\left\langle\psi(t)\right|\sum_{j}(\tau_{j}+\tau_{j}^{\dagger})\left|\psi(t)\right\rangle\).
Additionally, the relationship between DQPT and entanglement structures has also been investigated: DQPT may correspond to regions of rapid growth or peaks in the entanglement entropy [30; 69]. To further probe DQPT, we define the entanglement entropy as \(S(t)=-\text{Tr}(\rho_{A}\text{log}\rho_{A})\), where \(\rho_{A}=\text{Tr}_{B}\left|\psi(t)\right\rangle\left\langle\psi(t)\right|\) is the reduced density matrix of the half chain A: \(1,2,...,L/2\) (B: \(L/2+1,...,L\)).
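Both diagnostics are straightforward to evaluate on an exact statevector; the sketch below (our own, using `tau` and `site_op` from the operator snippet above) computes them for small-\(N\) cross-checks.

```python
import numpy as np

def half_chain_entropy(psi, N):
    """Von Neumann entropy S(t) of sites 1..N/2 from a 3^N statevector."""
    M = psi.reshape(3 ** (N // 2), -1)        # bipartition A|B of the chain
    s = np.linalg.svd(M, compute_uv=False)    # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def order_parameter(psi, N):
    """Q(t) = (1/N) <psi| sum_j (tau_j + tau_j^dagger) |psi>."""
    tpt = tau + tau.conj().T
    val = sum(psi.conj() @ (site_op(tpt, j, N) @ psi) for j in range(N))
    return float(val.real) / N
```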
Except along specific integrable lines, the quantum chiral clock Potts chain does not have an exact solution. In the parameter region of interest, we therefore employ a state-of-the-art, numerically exact time-dependent density matrix renormalization group (tDMRG) method (more precisely, the TDVP method) [63; 64; 65; 66] based on matrix product states (MPS), a powerful numerical approach for one-dimensional strongly correlated many-body systems. We set the MPS bond dimension to 800, ensuring good convergence of the physical quantities by requiring relative energy errors of less than \(10^{-5}\). The time step during the evolution is set to \(\delta t=0.02\). To minimize edge effects, we impose periodic boundary conditions and use \(J=1\) as the energy unit.
Figure 1: (Color online) (a) Schematic of the chiral interactions \(\theta\) and \(\phi\); (b) ground-state phase diagram of the quantum chiral clock Potts chain with \(\phi=0\) as a function of the chiral interaction \(\theta\) and the external transverse field \(f\) [39; 57]. With a nonzero chiral interaction phase (\(\theta\neq 0\)), the effective interaction can induce floating phases that are incommensurate with the periodicity of the underlying lattice, and the transition between the \(\mathbb{Z}_{3}\)-ordered phase and the disordered phase belongs to an unconventional non-conformal chiral universality class.
## III Dynamical phase transition
We first study DQPT of the nearest neighbor quantum three-state Potts chain (\(\theta=\phi=0\)) [70; 71; 72]. More precisely, we study the time evolution of the return rate after sudden quenches between the paramagnetic (PM) and Potts ferromagnetic phase (Potts FM).
We first consider a special limit in which the return rate can be obtained analytically: starting from the perfect PM phase (\(f_{0}=\infty\)) and quenching to the classical Potts FM phase (\(f_{1}=0\)). The initial state is given by
\[|\psi_{0}\rangle=\frac{1}{3^{N/2}}\prod_{i}\{|A\rangle_{i}+|B\rangle_{i}+|C \rangle_{i}\}, \tag{5}\]
where \(|A\rangle_{i}\), \(|B\rangle_{i}\), and \(|C\rangle_{i}\) denote the three clock states of site \(i\), from which the three degenerate Potts FM ground states are built. Since the final Hamiltonian is purely classical, the return amplitude takes the simple form
\[\mathcal{L}(t)=\mathrm{tr}\,M^{N},\quad M=\begin{pmatrix}e^{2iJt}/3&e^{-iJt}/3&e^{-iJt}/3\\ e^{-iJt}/3&e^{2iJt}/3&e^{-iJt}/3\\ e^{-iJt}/3&e^{-iJt}/3&e^{2iJt}/3\end{pmatrix}, \tag{6}\]
where periodic boundary conditions on a chain with \(N\) lattice sites have been considered. The eigenvalues of the transfer matrix \(M\) are given by
\[\lambda_{1}=\frac{e^{-iJt}}{3}\,(e^{3iJt}+2),\qquad\lambda_{2}=\lambda_{3}=\frac{e^{-iJt}}{3}\,(e^{3iJt}-1), \tag{7}\]
and we obtain the return amplitude \(\mathcal{L}(t)=\lambda_{1}(t)^{N}+2\lambda_{2}(t)^{N}\), which yields the return rate
\[\begin{split} l(t)=&-\frac{1}{N}\mathrm{ln}\big{|}(9\mathrm{cos}^{2}\tilde{t}+\mathrm{sin}^{2}\tilde{t})^{N}+4^{N+1}\mathrm{sin}^{2N}\tilde{t}\\ &+2(2i)^{N}(3\mathrm{cos}\tilde{t}+i\mathrm{sin}\tilde{t})^{N}\mathrm{sin}^{N}\tilde{t}\\ &+2(2i)^{N}(-3\mathrm{cos}\tilde{t}+i\mathrm{sin}\tilde{t})^{N}\mathrm{sin}^{N}\tilde{t}\big{|}+2\mathrm{ln}3,\end{split} \tag{8}\]
where \(\tilde{t}=3Jt/2\). The return rate is periodic \(l(t)=l(t+2\pi/3J),l(0)=0\), and shows non-analytic behavior at the critical times \(Jt^{*}=2\pi/9+2\pi n/3,n\in\mathbb{N}_{0}\), as shown in Fig. 2.
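The closed-form result is easy to verify numerically: \(M\) has eigenvalues \(\lambda_{1}\) (once) and \(\lambda_{2}\) (twice), and the first cusp sits exactly where the two eigenvalue branches cross in modulus. A minimal sketch (our own):

```python
import numpy as np

J, N = 1.0, 200
t = np.linspace(0.01, 1.0, 4000)
a, b = np.exp(2j * J * t) / 3, np.exp(-1j * J * t) / 3
lam1, lam2 = a + 2 * b, a - b                       # eigenvalues of M in Eq. (6)
r = -np.log(np.abs(lam1**N + 2 * lam2**N) ** 2) / N   # return rate, cf. Eq. (8)
# Cusp condition: |lam1| = |lam2|  <=>  cos(3Jt) = -1/2  <=>  Jt* = 2*pi/9.
t_star = t[np.argmin(np.abs(np.abs(lam1) - np.abs(lam2)))]
print(t_star, 2 * np.pi / 9)                        # both ~ 0.698
```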
### Warm up: quantum Potts chains
In this section, we numerically investigate DQPT after quenches from the PM to the Potts FM phase. The quench protocol is implemented by suddenly switching the ratio between the transverse field \(f\) and the exchange interaction \(J\) from its initial value \(f_{0}/J\) to its final value \(f_{1}/J\). We start from the PM state, which is obtained from the initial Hamiltonian \(H_{0}\) with \(f_{0}=1.0,J=0.0,f_{0}/J=\infty\), and quench to a final state with parameters \(f_{1}=0.0,J=1.0,f_{1}/J=0.0\), located in the Potts FM ordered phase. The general relation between DQPT and the underlying equilibrium quantum critical point is unclear, but as shown in previous works [16; 17], it is argued that DQPT usually occurs when the quench crosses an equilibrium critical point. Indeed, as shown in Fig. 2 (a), the return rate exhibits non-analytic behavior as a function of time, implying that a DQPT occurs (for the finite-size scaling analysis, see Appendix A), consistent with Ref. [70]. Moreover, in order to explore the relationship between the return rate and the zeros of an order parameter, we also calculate the order parameter of the model. During the time evolution, we see that the valleys of the order parameter correspond to the peaks of the return rate, as shown in Fig. 2 (b). Our numerical observations are fully consistent with previous studies [70].
Now, let us explore the entanglement structures and their possible connections to the above observations. The entanglement entropy is an efficient physical quantity for uncovering the entanglement structure of the model. More precisely, the half-chain entanglement entropy, obtained from the singular values of the Schmidt decomposition across a bond, is easily accessed through the finite-size DMRG calculation. In Fig. 2 (c), sudden changes in the entanglement entropy are seen in the vicinity of the several critical times, signaling that DQPT occurs. Furthermore, we clearly see that the behavior of the entanglement entropy is similar to that of the return rate. The reason for this remains unclear at present but suggests some deeper relationship between the entanglement structure and DQPT. In summary, starting from the initial PM state and quenching across the equilibrium three-state Potts quantum critical point, the return rate, the order parameter, and the entanglement entropy all exhibit DQPT signatures.

Figure 2: (Color online) Time evolution of the return rate (a), order parameter (b), and entanglement entropy (c); purple/cyan dashed lines represent the exact analytic results from the transfer matrix. All figures correspond to quenches with \(f_{0}/J=\infty\) (PM) \(\to f_{1}/J=0.0\) (Potts FM), \(\theta=\phi=0,N=24\). \(t_{c}=2\pi/9\) is the first critical time for the three-state Potts chain as the system size \(N\) tends to \(\infty\).

### Chirality-enhanced dynamical phase transition

We note that the asymmetry in the Hamiltonian has important ramifications: the spatial chirality (\(\theta\neq 0\)) induces floating phases that are incommensurate with the periodicity of the underlying lattice. To study whether chirality affects the DQPT of the system, we numerically investigate the quantum chiral clock chain with \(\theta=0.12\pi\), quenched from the PM to the Potts FM phase. On the one hand, we start from the PM state, which is obtained from the initial Hamiltonian \(H_{0}\) with \(f_{0}=1.0,J=0.0,f_{0}/J=\infty\), and quench to a final state with parameters \(f_{1}=0.0,J=1.0,f_{1}/J=0.0\), located in the Potts FM ordered phase. As shown in Fig. 3(a), we find that the return rate exhibits a series of non-analytic behaviors as a function of time, implying that DQPT survives the introduction of chirality. On the other hand, we also calculate the order parameter and the entanglement entropy of the system, as shown in Fig. 3(b) and (c), and find that the valleys (peaks) of the order parameter (entanglement entropy) correspond to the peaks of the return rate, which again exhibits the signatures of DQPT. Moreover, as shown in Fig. 3(d), we find that the first critical time \(t_{1}^{*}\) for the DQPT decreases as the chiral interaction phase increases (see Appendix B for the calculation of the DQPT for another chiral interaction phase). This means that increasing the chiral interaction makes DQPT easier to occur, which we call a "chirality-enhanced dynamical phase transition". In Sec. IV C, we give a simple physical argument to understand this phenomenon. Finally, we also calculate the DQPT from the Potts FM to the PM phase with the temporal chiral interaction \(\phi\); the results are shown in Appendix C.

Figure 3: (Color online) Time evolution of the return rate (a), order parameter (b), and entanglement entropy (c); (d) the first critical time as a function of the spatial chiral interaction \(\theta\). The figures correspond to quenches with \(f_{0}/J=\infty\) (PM) \(\rightarrow\)\(f_{1}/J=0.0\) (Potts FM), \(\theta=0.12\pi,\phi=0,N=24\).
## IV Dynamical scaling for chiral clock Potts chain
### Fidelity susceptibility and quantum critical point
The system undergoes a continuous phase transition from an ordered to a disordered phase upon tuning the external field \(f\), at which point the structure of the ground-state wave function changes significantly. The quantum ground-state fidelity \(F(f,f+\delta f)\) is defined as the wave-function overlap of two neighboring ground states with respect to the external field \(f\), and it nearly vanishes near the quantum critical point \(f_{c}^{*}\). In practice, a more convenient quantity for characterizing quantum phase transitions is the fidelity susceptibility, defined by the leading term of the fidelity [73; 74; 75; 76; 77],
\[\chi_{F}(f)=\lim_{\delta f\to 0}\frac{2(1-F(f,f+\delta f))}{(\delta f)^{2}}. \tag{9}\]
The fidelity susceptibility is a geometric property of quantum states in the realm of quantum information; it offers the distinct advantage of requiring no a priori knowledge of order parameters or symmetry breaking. It has been applied to detect a wide range of quantum phase transitions [78; 79; 71; 77; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81] induced by a sudden change in the structure of the wave function. Experimental detection of quantum phase transitions using the fidelity susceptibility can be achieved via neutron scattering or angle-resolved photoemission spectroscopy (ARPES) techniques [73]. Here, we employ the fidelity susceptibility [82] to identify the critical point in the quantum Potts chain with chiral interactions and perform dynamical scaling at the obtained quantum critical point.
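At small sizes, Eq. (9) can be evaluated directly from exact ground states; the sketch below (our own, reusing `h_ccm` from the earlier snippet) is sufficient to locate the peak of \(\chi_{L}\) on short chains.

```python
import numpy as np
from scipy.linalg import eigh

def chi_per_site(N, f, df=1e-3, J=1.0, theta=0.12 * np.pi):
    """Fidelity susceptibility per site, Eq. (9), from exact ground states."""
    g0 = eigh(h_ccm(N, J=J, f=f, theta=theta))[1][:, 0]
    g1 = eigh(h_ccm(N, J=J, f=f + df, theta=theta))[1][:, 0]
    F = abs(g0.conj() @ g1)          # |overlap| removes the arbitrary phases
    return 2.0 * (1.0 - F) / df**2 / N

# Scanning f and locating the maximum of chi_per_site gives the finite-size
# estimate of f_c^*, cf. Fig. 4(a).
```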
Figure 4(a) illustrates the fidelity susceptibility per site, \(\chi_{L}=\chi_{F}/N\), as a function of the external field \(f\) for several system sizes and \(\theta=0.12\pi\) in the quantum Potts chain with chiral interactions. As the system size increases from \(N=8\) to \(24\), we observe that the peak position of the fidelity susceptibility curve gradually approaches the exact critical point \(f_{c}^{*}\). Our results indicate that the peak of the fidelity susceptibility converges at \(N=24\) (see Appendix A for the finite-size effect on DQPT), providing an effective means of obtaining the quantum critical point. (See Appendix E for details on the calculation of the fidelity susceptibility for another chiral interaction.)
### Dynamical scaling law for Loschmidt echo
As a next step, we examine the dynamical properties of the system following a small quench and carry out dynamical scaling of the Loschmidt echo to obtain critical exponents in the non-conformal chiral universality class. Recent studies [20] have demonstrated that the decay of the Loschmidt echo can be enhanced by equilibrium quantum criticality. The first minimum of the Loschmidt echo at \(t_{1}^{*}\) scales as:
\[1-L_{min}(N,g)\propto g^{2}N^{2/\nu}, \tag{10}\]
at the equilibrium chiral transition point. Here \(\nu\) is the correlation length exponent, and \(g\) is the small constant step defined as \(g=f_{1}-f_{0}\), where \(f_{0}\) and \(f_{1}\) are the external fields before and after the quench. The dynamical scaling law in Eq. 10, which governs the critically enhanced decay of the Loschmidt echo with respect to \(N\), can be utilized to extract the correlation length exponent \(\nu\). In order to apply the scaling law in Eq. 10 to the Loschmidt echo \(L_{min}(N,g)\), it should be computed at or close to the equilibrium critical point \(f_{c}^{*}\), which is obtained from the fidelity susceptibility.
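Extracting \(\nu\) then reduces to a linear fit in log-log coordinates, since at fixed \(g\) Eq. (10) gives \(\ln[1-L_{min}]=(2/\nu)\ln N+\mathrm{const}\). A minimal sketch (our own; the usage line is a placeholder, not data from the paper):

```python
import numpy as np

def fit_nu(Ns, L_min):
    """Fit 1 - L_min(N, g) ~ g^2 N^{2/nu} at fixed g; the log-log slope is 2/nu."""
    slope, _ = np.polyfit(np.log(Ns), np.log(1.0 - np.asarray(L_min)), 1)
    return 2.0 / slope

# Usage with the first Loschmidt-echo minima read off for N = 8, ..., 24:
# nu = fit_nu([8, 12, 16, 20, 24], [Lmin_8, Lmin_12, Lmin_16, Lmin_20, Lmin_24])
```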
To this end, we first obtain the ground state \(\psi_{0}\) of Eq. 1 at the external field \(f_{0}\), and then compute the Loschmidt echo from Eq. 3 by quenching the chiral clock Potts chain from the initial \(f_{0}\) to a final \(f_{1}\) with a small constant step \(g=0.01\). The time-evolved wave function \(\psi\left(t\right)\) is obtained from the TDVP with the time step \(\delta t=0.02\) under periodic boundary conditions, where we set \(J=1\) during the numerical simulations. We perform numerical simulations upon a quench with the TDVP from this critical point \(f_{0}\) to \(f_{1}=f_{0}+g\) for \(N=8,12,16,20,24\) sites. The results for the Loschmidt echo \(L(N,f,g,t)\) at \(\theta=0.12\pi\), shown in Fig. 4(b), exhibit decay-and-revival dynamics. The first minima of the Loschmidt echoes, \(L_{min}(N,g)\), are plotted in Fig. 4(c) as a function of the lattice size \(N\). According to the scaling law in Eq. 10, we obtain the correlation length exponent \(\nu=0.780\pm 0.001\) for the non-conformal chiral transition. Details on the dynamical scaling of the Loschmidt echo for another chiral interaction can be found in Appendix D, and the results for all \(\theta\) are summarized in Table 1. The results show that the correlation length exponent \(\nu\) decreases with increasing chiral interaction \(\theta\), consistent with the previous literature [39; 41; 42].
### Discussion
In this section, we present a brief discussion of the physical reason for the emergence of the chirality-enhanced dynamical phase transition. Firstly, the introduction of the spatial chiral interaction \(\theta\) can be regarded as roughly equivalent to introducing power-law long-range interactions. More precisely, when the spatial chiral interaction is increased (i.e., larger \(\theta\)), the interaction effectively becomes more long-ranged. Recent studies [39] on Rydberg atom arrays have demonstrated that the van der Waals long-range interaction has a similar effect on the critical behavior as the chiral interaction. In particular, the longer-ranged the power-law interaction, the smaller the correlation length critical exponent \(\nu\), which is consistent with the results obtained by dynamical scaling (see Table 1). This is because the long-range interactions enhance the inequivalence between the two types of domain walls, leading to a faster deviation from the Potts exponent [41].
On the other hand, previous reports [83] have studied dynamical phase transitions in the transverse-field Ising model with power-law long-range interactions \(1/r^{\alpha}\). Within a certain range of interactions, it has been shown that the critical transverse field at which the Loschmidt echo exhibits kinks decreases with decreasing \(\alpha\) (i.e., longer interaction range), indicating a higher likelihood for DQPT to occur.
By combining the above two aspects, we arrive at a simple physical explanation for the chirality-enhanced dynamical phase transition: the effect of the spatial chiral interaction is akin to that of a long-range interaction, and the introduction of long-range interactions enhances the propensity for DQPT in the system. Therefore, the spatial chiral interaction can effectively enhance the dynamical phase transition. Finally, we observe that the inclusion of the temporal chiral interaction leads to a chirality-suppressed dynamical phase transition (see Appendix C), which merits further detailed investigation in the future.
| \(\theta\) | \(\nu\) |
| --- | --- |
| \(0.0\pi\) | \(0.838(4)\) |
| \(0.02\pi\) | \(0.832(4)\) |
| \(0.04\pi\) | \(0.829(3)\) |
| \(0.06\pi\) | \(0.824(3)\) |
| \(0.08\pi\) | \(0.813(1)\) |
| \(0.10\pi\) | \(0.7978(5)\) |
| \(0.12\pi\) | \(0.780(1)\) |
| \(0.14\pi\) | \(0.754(2)\) |
| \(0.16\pi\) | \(0.724(3)\) |

Table 1: Correlation length critical exponents \(\nu\) of the Potts chain for different chiral interactions \(\theta\), obtained by dynamical scaling of the Loschmidt echoes.

Figure 4: (Color online) (a) Fidelity susceptibility per site \(\chi_{L}\) of the quantum Potts chain with chiral interaction for \(\theta=0.12\pi\) and \(N=8,12,16,20,24\) sites as a function of the external transverse field \(f\); symbols denote finite-size DMRG results. (b) The Loschmidt echo \(L(N,f,g,t)\) at the peak position \(f\) of \(\chi_{L}\) in (a) with \(g=0.01\) as a function of time \(t\) for lattice sizes \(N=8\) (red), 12 (blue), 16 (yellow), 20 (green), 24 (purple) (from top to bottom along the first minima). (c) Finite-size scaling of \(1-L_{min}(N,g)\) obtained from (b) as a function of the lattice size \(N\), where the black square symbols are numerical values and the black solid line denotes the fitting curve. The correlation length critical exponent \(\nu=0.780\pm 0.001\) is obtained from the fitting curve.

## V Conclusion and outlook

To summarize, we investigate the quench dynamics of the \(\mathbb{Z}_{3}\) symmetric spin chain with chiral interactions. To establish a baseline, we first consider the standard nearest-neighbor quantum three-state Potts chain and derive analytical results for the Loschmidt echo for the quench from the PM to the Potts FM phase. Our results show that the Loschmidt echo, the order parameter, and the entanglement entropy all exhibit DQPT signatures. We then investigate more general cases with a chiral interaction \(\theta\) using TDVP. The results reveal that the introduction of the chiral interaction advances the critical time of the DQPT, which we refer to as a "chirality-enhanced dynamical phase transition". Additionally, we perform dynamical scaling for the Loschmidt echo and obtain the correlation length critical exponent \(\nu\) under different chiral interactions. The numerical results indicate that as the chiral interaction increases, the correlation length critical exponent \(\nu\) decreases, similar to the effect of long-range interactions. Finally, we provide a simple physical argument to understand the "chirality-enhanced dynamical phase transition."
Future work may explore the physical reason for the suppression of dynamical phase transition due to temporal chiral interaction and investigate the fate of quench dynamics in two-dimensional systems with different types of quantum critical points. Our work may provide new insights into many-body physics out of equilibrium with chiral interaction.
###### Acknowledgements.
X.-J. Yu thanks G. Sun, S. Yang, and C.-X. Li for helpful discussions. Numerical simulations were carried out with the ITensor package [84].
|
2309.13121 | Dynamical defects in a two-dimensional Wigner crystal: self-doping and
kinetic magnetism | We study the quantum dynamics of interstitials and vacancies in a
two-dimensional Wigner crystal (WC) using a semi-classical instanton method
that is asymptotically exact at low density, i.e., in the $r_s\to \infty$
limit. The dynamics of these point defects mediates magnetism with much higher
energy scales than the exchange energies of the pure WC. Via exact
diagonalization of the derived effective Hamiltonians in the single-defect
sectors, we find the dynamical corrections to the defect energies. The
resulting expression for the interstitial (vacancy) energy extrapolates to 0 at
$r_s = r_{\rm mit} \approx 70$ ($r_s \approx 30$), suggestive of a self-doping
instability to a partially melted WC for some range of $r_s$ below $r_{\rm
mit}$. We thus propose a "metallic electron crystal'' phase of the
two-dimensional electron gas at intermediate densities between a low density
insulating WC and a high density Fermi fluid. | Kyung-Su Kim, Ilya Esterlis, Chaitanya Murthy, Steven A. Kivelson | 2023-09-22T18:01:06Z | http://arxiv.org/abs/2309.13121v2 | # Dynamical defects in a two-dimensional Wigner crystal: self-doping and kinetic magnetism
###### Abstract
We study the quantum dynamics of interstitials and vacancies in a two-dimensional Wigner crystal (WC) using a semi-classical instanton method that is asymptotically exact at low density, i.e., in the \(r_{s}\to\infty\) limit. The dynamics of these point defects mediates magnetism with much higher energy scales than the exchange energies of the pure WC. Via exact diagonalization of the derived effective Hamiltonians in the single-defect sectors, we find the dynamical corrections to the defect energies. The resulting expression for the interstitial (vacancy) energy extrapolates to \(0\) at \(r_{s}=r_{\rm mir}\approx 70\) (\(r_{s}\approx 30\)), suggestive of a self-doping instability to a partially melted WC for some range of \(r_{s}\) below \(r_{\rm mir}\). We thus propose a "metallic electron crystal" phase of the two-dimensional electron gas at intermediate densities between a low density insulating WC and a high density Fermi fluid.
## I Introduction
Despite its prime importance in the field of condensed matter physics, some basic aspects remain unsettled concerning the physics of the two-dimensional electron gas (2DEG) at intermediate densities where various forms of "strongly correlated electron fluids" can arise. The ideal 2DEG is governed by the simple Hamiltonian
\[H=\sum_{i}\frac{{\bf p}_{i}^{2}}{2m}+\sum_{i<j}\frac{e^{2}}{4\pi\epsilon}\frac {1}{|{\bf r}_{i}-{\bf r}_{j}|}, \tag{1}\]
with a single dimensionless parameter, \(r_{s}=a_{0}/a_{\rm B}\), characterizing the ratio of the typical interaction strength to the kinetic energy. Here, \(a_{0}=1/\sqrt{\pi n}\) is the average interparticle distance, \(n\) is the electron density, and \(a_{\rm B}=4\pi\epsilon\hbar^{2}/me^{2}\) is the effective Bohr radius. The phases of the 2DEG in the weak and strong coupling limits are well-understood: it forms a paramagnetic Fermi liquid (FL) for small \(r_{s}\) (weak coupling) and a Wigner crystal (WC) for large \(r_{s}\) (strong coupling) [1]. The present study addresses the intermediate coupling regime near the quantum metal-insulator transition (MIT). Landmark numerical studies suggested that the MIT occurs as a direct transition from a Fermi liquid to an insulating WC at \(r_{s}=r_{\rm melt}^{*}\approx 31\) [2; 3; 4]. However, recent experiments [5; 6; 7; 8] suggest that the actual transition may be more complex.
Apart from the charge ordering, there is another subtle issue regarding the magnetism. In the FL regime, the paramagnetic state seems to be most favored [4]. Deep within the WC phase, the magnetism is determined by various ring-exchange processes. The exchange coefficients can be calculated using the semi-classical instanton approximation [11; 12; 13; 14; 15], the validity of which is well-tested by a numerically exact path integral Monte Carlo calculation [9]. These calculations show that the WC is a ferromagnet for large enough \(r_{s}>r_{\rm F}^{\rm WC}\approx 175\)[9] and a (highly frustrated) antiferromagnet [13] below \(r_{\rm F}^{\rm WC}\) (Fig. 1). However, the predicted energy scale for the ring-exchange processes within the WC phase is too small to account for the typical magnetic energy scale of the insulating phase observed in the large \(r_{s}\) regime of various 2DEG systems [5; 7; 8]. This prompted some of the present authors to propose a kinetic mechanism that accounts for higher-temperature magnetism in such a phase mediated by interstitial hopping processes [16][17].
In this paper, using the semi-classical instanton approximation, we carry out a comprehensive study of the quantum dynamics of an interstitial and a vacancy defect (Fig. 2), the two point defects of a WC with the smallest classical creation energies [18; 19][20]. We first review the formulation of the standard instanton technique and apply it to derive effective Hamiltonians describing the various exchange and defect hopping processes illustrated in Fig. 4 (Sec. II). In Sec. III, we calculate the energy of an interstitial and a vacancy via finite-size exact diagonalization of the derived effective Hamiltonians. Interestingly, the resulting semi-classical expression for the interstitial energy, when extrapolated to large but finite \(r_{s}\), vanishes around \(r_{s}=r_{\rm mit}\approx 70\), signaling a possible self-doping instability to a partially melted WC below \(r_{\rm mit}\). From this, we propose the existence of a metallic electron crystal (MeC) phase as an intermediate phase of the 2DEG (Sec. IV). In Sec. V, we discuss the magnetic correlations induced by interstitial and vacancy hopping processes. Such kinetic processes induce magnetism with much higher energy scales than the ring-exchange processes of the pure WC; this could be experimentally probed by controlled doping of a WC that is commensurately locked to a weak periodic substrate potential. Our principal results are summarized in Figure 1. We conclude with a remark on the fate of the phase diagram in the presence of weak quenched disorder in Sec. VI.

Figure 1: Conjectured \(T=0\) phases of a clean 2DEG as a function of \(1/r_{s}\propto\sqrt{n}\): WC (Ferro) = fully polarized ferromagnetic WC; WC (Antiferro) = WC with some form of antiferromagnetism (or a spin liquid phase); Metallic electron crystal (MeC) = metallic electron crystal characterized by more than one electron per unit cell; FL (Para) = paramagnetic Fermi liquid. The phase transition at \(r_{\rm F}^{\rm WC}\approx 175\) [9] is due to the change of dominant exchange interactions from ferromagnetic to antiferromagnetic and is likely to be first order. \(r_{\rm mit}\approx 70\) indicates the “true” metal-insulator transition due to interstitial self-doping proposed in this paper, and is distinct from \(r_{\rm melt}\), below which the crystalline order vanishes. \(r_{\rm melt}\) is expected to be smaller than the value for a direct FL–WC transition from quantum Monte Carlo calculations, \(r_{\rm melt}^{*}\approx 31\) [4], due to the existence of the intermediate MeC phase. (Additional microemulsion phases may be expected [10] as well, especially for \(r_{s}\sim r_{\rm melt}^{*}\).) See Sec. IV for a detailed discussion of the conjectured phase diagram.
## II The semi-classical approximation
We first review the standard semi-classical instanton method as applied to the ideal 2DEG (1) in the large \(r_{s}\) limit. The exact partition function of the (fermionic) 2DEG is
\[Z=\int d^{2N}{\bf r}_{0}\sum_{P\in S_{N}}\frac{(-1)^{P}}{N!}\sum_{\mathbf{\sigma}}\left\langle P{\bf r}_{0},P\mathbf{\sigma}\right|e^{-\beta H}\left|{\bf r}_{0},\mathbf{\sigma}\right\rangle, \tag{2}\] \[\left\langle P{\bf r}_{0},P\mathbf{\sigma}\right|e^{-\beta H}\left|{\bf r}_{0},\mathbf{\sigma}\right\rangle=\delta_{\mathbf{\sigma},P\mathbf{\sigma}}\left\langle P{\bf r}_{0}\right|e^{-\beta H}\left|{\bf r}_{0}\right\rangle, \tag{3}\] \[\left\langle{\bf r}_{0}^{\prime}\right|e^{-\beta H}\left|{\bf r}_{0}\right\rangle=\int_{\tilde{\bf r}(0)=\tilde{\bf r}_{0}}^{\tilde{\bf r}(\tilde{\beta})=\tilde{\bf r}_{0}^{\prime}}D\tilde{\bf r}(\tau)\,e^{-\sqrt{r_{s}}\,S}, \tag{4}\] \[S=\int_{0}^{\tilde{\beta}}d\tau\left[\frac{1}{2}\left(\frac{d\tilde{\bf r}}{d\tau}\right)^{2}+V(\tilde{\bf r})-V_{0}\right], \tag{5}\] \[V({\bf r})\equiv\sum_{i<j}\frac{1}{|{\bf r}_{i}-{\bf r}_{j}|}, \tag{6}\]
where \({\bf r}(\tau)\equiv\{{\bf r}_{i}(\tau)\}\) are the positions of \(N\) electrons in imaginary time, \({\bf r}_{0}\equiv\{{\bf r}_{i}(\tau=0)\}\) are their initial positions, \(\mathbf{\sigma}\equiv\{\sigma_{i}=\uparrow,\downarrow\}\) are their respective spin indices, and \(\beta=1/k_{B}T\) is the inverse temperature. The sum over \(N!\) permutations, \(P\), of the coordinates and the sign factor \((-1)^{P}\) encode the fermionic exchange statistics. For bosonic particles, one should merely substitute \((-1)^{P}\rightarrow+1\). The 2DEG Hamiltonian (1) does not act on the electron spins, hence the \(\delta_{\mathbf{\sigma},P\mathbf{\sigma}}\) factor in the second line above. The third and fourth lines are the path integral representation of the \(N\)-electron propagator. The action is rescaled to make the \(r_{s}\) dependence manifest by introducing dimensionless coordinates, \(\tilde{\bf r}\equiv{\bf r}/a_{0}\), and dimensionless imaginary time, \(\tau\). Correspondingly, \(\tilde{\beta}\equiv\beta E^{*}\) is a dimensionless inverse temperature, where \(E^{*}\equiv e^{2}/(4\pi\epsilon a_{0}r_{s}^{3/2})\). The path integral measure is also defined as an integration over the dimensionless coordinate \(\tilde{\bf r}(\tau)\). The minimum potential energy \(V_{0}=\min_{\tilde{\bf r}}V(\tilde{\bf r})\) is subtracted for later convenience [21]. The Coulomb interaction (last line) is computed numerically using the standard Ewald method. As usual, the presence of a uniform neutralizing positively-charged background is assumed. Henceforth, we will drop tildes from the rescaled coordinates to simplify notation: \(\tilde{\bf r}\rightarrow{\bf r}\). We focus on the zero temperature phase of the problem, and hence will always take \(\beta\rightarrow\infty\) in the end.
We approach this problem using a semi-classical instanton approximation, which is asymptotically exact in the \(r_{s}\rightarrow\infty\) (strong coupling) limit. In Sec. II.1, we briefly review the semi-classical derivation of ring-exchange processes in the WC. In Sec. II.2 and II.3, we consider tunneling processes involving a single interstitial and vacancy, respectively, and derive the corresponding effective Hamiltonians describing their dynamics. The application of the semi-classics to a bosonic system is addressed in Sec. II.
Figure 2: (a) A classical centered interstitial and (b) a vacancy configuration. Small black arrows are drawn to indicate the positions of the interstitial (panel a) and the vacancy (left panel of b). The vacancy configuration has \(D_{2}\) symmetry, and not the full \(D_{6}\) symmetry of the underlying WC; therefore, a vacancy has three possible orientations \(\alpha\). We introduce a pictorial notation for the vacancy for later convenience.
### Wigner crystal ring-exchange processes
In the \(r_{s}\to\infty\) limit, the classical ground state manifold consists of a triangular lattice WC with \(2^{N}\)-fold degeneracy in spin states. The lifting of this degeneracy and the nature of the resulting magnetic order is determined for \(1\ll r_{s}<\infty\) by WC ring-exchange processes. Various ring-exchange processes correspond to distinct instanton solutions of the action and can be calculated via the dilute instanton approximation [11; 12; 13; 14; 15; 16; 22], which we briefly review below. (See Refs. [14; 15] for more details.) The result is an effective spin Hamiltonian expressed as a sum over all ring-exchange processes:
\[H_{\rm eff}^{\rm wc}=-\sum_{a}(-1)^{P_{a}}\,J_{a}\,\big{(}\hat{\cal P}_{a}+\hat {\cal P}_{a}^{-1}\big{)}, \tag{7}\]
where the semi-classical calculation gives a leading-order large \(r_{s}\) asymptotic expression for \(J_{a}\). Here, \(\hat{\cal P}_{a}\) is the permutation operator corresponding to the permutation \(P_{a}\), and can be decomposed as a product of two-particle exchange operators. The two-particle exchange operators, in turn, can be written in terms of spin operators as \(\hat{\cal P}_{(i,j)}=2(\vec{S}_{i}\cdot\vec{S}_{j}+\frac{1}{4})\).
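As a concrete check of the operator identities above, the following minimal Python/NumPy sketch (our illustration, not part of the original derivation) constructs \(\hat{\cal P}_{(i,j)}=2(\vec{S}_{i}\cdot\vec{S}_{j}+\frac{1}{4})\), verifies that it equals the two-spin SWAP operator, and confirms that a three-particle ring exchange (an even permutation, so \((-1)^{P_{a}}=+1\) in Eq. (7)) acts ferromagnetically: the fully polarized state saturates the ground-state energy of \(-J(\hat{\cal P}+\hat{\cal P}^{-1})\).

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators S^x, S^y, S^z
S = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     np.array([[0, -1j], [1j, 0]]) / 2,
     np.array([[1, 0], [0, -1]], dtype=complex) / 2]

def embed(op, site, n):
    """Embed a single-site operator at position `site` among n spins."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    return reduce(np.kron, mats)

def P(i, j, n):
    """Two-particle exchange operator P_(i,j) = 2 (S_i . S_j + 1/4)."""
    SdotS = sum(embed(S[a], i, n) @ embed(S[a], j, n) for a in range(3))
    return 2 * (SdotS + 0.25 * np.eye(2**n))

# On two spins, P_(1,2) is the SWAP matrix in the basis {uu, ud, du, dd}
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
assert np.allclose(P(0, 1, 2), SWAP)

# A 3-cycle is a product of two transpositions, i.e. an even permutation,
# so the term -J (P + P^{-1}) of Eq. (7) enters with (-1)^P = +1.
n = 3
P123 = P(0, 1, n) @ P(1, 2, n)          # three-particle ring exchange
H = -(P123 + P123.conj().T)              # J = 1
E = np.linalg.eigvalsh(H)
up = np.zeros(2**n); up[0] = 1.0         # fully polarized |uuu>
print("E_min =", E[0], "  <uuu|H|uuu> =", np.real(up @ H @ up))  # both -2
```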
To illustrate how this works, recall the familiar problem of the semi-classical calculation of the tunnel splitting in a symmetric double-well potential [23; 24; 25]. For large enough \(\beta\) such that \(\beta\hbar\omega_{0}\gg 1\), the excited states in each well can be neglected. (Here, \(\omega_{0}\) is the oscillation frequency in either well.) In this limit, we obtain asymptotic relations
\[\langle{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0}\rangle\sim|\psi({ \bf r}_{0})|^{2}e^{-\beta E_{0}}\cosh(\beta\Delta), \tag{8}\] \[\langle-{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0}\rangle\sim|\psi( {\bf r}_{0})|^{2}e^{-\beta E_{0}}\sinh(\beta\Delta), \tag{9}\]
where the minima of the two wells are at \(\pm{\bf r}_{0}\), \(|\psi({\bf r}_{0})|^{2}=|\psi(-{\bf r}_{0})|^{2}\) is the probability density of the wave function at these positions, and \(E_{0}\) and \(2\Delta\) are, respectively, the mean energy and the splitting between the even and odd parity ground states. The right-hand side of each expression is obtained by inserting the resolution of the identity on the left-hand side.
In viewing this same problem from the path integral perspective in the semi-classical limit, one first solves for the instanton path--the smallest action path that begins at the bottom of one well and ends at the bottom of the other. The net duration (in imaginary time) of this tunneling event is of order \(\omega_{0}^{-1}\). We then sum over multiple such instanton events to obtain an expression of the same form as above, where the diagonal (off-diagonal) propagator in Eq. 8 (Eq. 9) contains all the terms with an even (odd) number of events. Expanding these expressions in power series, one sees that the typical number of tunneling events is \(\sim\beta\Delta\) and the mean imaginary time interval between them is of order \(\hbar/\Delta\). Note that in the semi-classical limit \(\hbar/\Delta\gg\omega_{0}^{-1}\), the instantons are dilute and hence effectively non-interacting (see Fig. 3). Looked at another way, for a range of temperature such that \(\hbar\omega_{0}\gg T\gg\Delta\), where multiple instanton events can be neglected, we can compute \(\Delta\) as
\[\Delta=\beta^{-1}\,\frac{\langle-{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0} \rangle\,|_{\text{1-inst}}}{\langle{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0} \rangle\,|_{\text{0-inst}}}, \tag{10}\]
where the subscripts designate the number of instanton events.
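The textbook double well can also be worked through numerically as a warm-up. The short sketch below (an illustration we add here, for the stated quartic form of the well) evaluates the instanton action \(S_{0}=\int dx\,\sqrt{2V(x)}\) between the minima of \(V(x)=\frac{1}{2}(x^{2}-1)^{2}\), for which the exact answer is \(4/3\); in the rescaled 2DEG units of the text, \(\sqrt{r_{s}}\) plays the role of \(1/\hbar\), so the splitting is suppressed as \(e^{-\sqrt{r_{s}}S_{0}}\).

```python
import numpy as np
from scipy.integrate import quad

# Quartic double well with degenerate minima at x = -1 and x = +1
V = lambda x: 0.5 * (x**2 - 1.0)**2

# Semi-classical instanton action between the minima
S0, err = quad(lambda x: np.sqrt(2.0 * V(x)), -1.0, 1.0)
print(f"S0 = {S0:.6f}  (exact 4/3 = {4/3:.6f})")

# Suppression factor at several r_s, with sqrt(r_s) standing in for 1/hbar
for rs in (30, 50, 70):
    print(f"rs = {rs}:  exp(-sqrt(rs) S0) = {np.exp(-np.sqrt(rs) * S0):.2e}")
```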
The analysis is somewhat more complicated but structurally similar for the present problem. Consider the propagator \(\langle P_{a}{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0}\rangle\) where \({\bf r}_{0}\) is an initial WC configuration and \(P_{a}\) is the permutation corresponding to a particular ring-exchange process [see Fig. 4(a)]. In the semi-classical (large \(r_{s}\)) limit, this propagator is again expressible as a weighted sum over multi-instanton contributions. For temperatures such that \(\hbar\Omega\gg T\gg J_{a}\), where \(J_{a}\) is the tunnel splitting corresponding to the process \(P_{a}\), the propagator is dominated (up to symmetry) by a single \(a\)-instanton contribution associated with the path \({\bf r}^{(a)}(\tau)\) with the smallest action subject to the boundary conditions \({\bf r}^{(a)}(0)={\bf r}_{0}\) and \({\bf r}^{(a)}(\tilde{\beta})=P_{a}{\bf r}_{0}\). Here, \(\hbar\Omega/2\sim r_{s}^{-3/2}\) is the zero-point energy of the WC, while \(J_{a}\) is exponentially small in \(\sqrt{r_{s}}\) at large \(r_{s}\). The single-\(a\)-instanton contribution to the propagator can be expressed as
\[\langle P_{a}{\bf r}_{0}|\,e^{-\beta H}\,|{\bf r}_{0}\rangle\,|_{a,\text{1-inst}}\] \[\approx e^{-\sqrt{r_{s}}S_{a}}\int_{\delta{\bf r}(0)={\bf 0}}^{\delta{\bf r}(\tilde{\beta})={\bf 0}}D\delta{\bf r}(\tau)\,e^{-\frac{1}{2}\sqrt{r_{s}}\int_{0}^{\tilde{\beta}}d\tau\,\delta{\bf r}(\tau)^{T}\hat{\mathsf{M}}^{(a)}(\tau)\delta{\bf r}(\tau)}\] \[=e^{-\sqrt{r_{s}}S_{a}}\,\Big{(}\det\big{[}\sqrt{r_{s}}\;\hat{\mathsf{M}}^{(a)}(\tau)\big{]}\Big{)}^{-1/2}\,, \tag{11}\] \[\hat{M}^{(a)}_{ij}(\tau)\equiv\frac{\delta^{2}S}{\delta r_{i}^{(a)}(\tau)\,\delta r_{j}^{(a)}(\tau)}=-\delta_{ij}\frac{\partial^{2}}{\partial\tau^{2}}+\partial_{i}\partial_{j}V\big{[}{\bf r}^{(a)}(\tau)\big{]}, \tag{12}\]
where \(S_{a}\equiv S\big{[}{\bf r}^{(a)}(\tau)\big{]}\) with the trajectory \({\bf r}^{(a)}(\tau)\) satisfying \(\delta S\big{[}{\bf r}^{(a)}(\tau)\big{]}=0\), and \(\delta{\bf r}(\tau)\equiv{\bf r}(\tau)-{\bf r}^{(a)}(\tau)\) is the fluctuation coordinate. Fluctuations are treated within a harmonic approximation around the semi-classical path.
Figure 3: An example of a multi-instanton configuration for the double well potential shown in the inset. The “size” of each instanton in imaginary time is \(\sim 1/\hbar\omega_{0}\) and the “distance” between them is \(\sim 1/\Delta\).
In Eq. (12), the derivative \(\partial_{i}\) is with respect to the normalized coordinates. Note that \(\mathbf{\hat{M}}^{(a)}\) has a zero eigenvalue solution \(\mathbf{\dot{r}}^{(a)}(\tau)\) corresponding to the translation in imaginary time, which has to be treated with care [23, 24, 25, 14]. Separating the zero mode contribution from the full determinant, one obtains
\[\left\langle P_{a}\mathbf{r}_{0}\right|e^{-\beta H}\left|\mathbf{r}_{0}\right\rangle\left|{}_{a,\text{1-inst}}\right. \tag{13}\] \[=\beta\,\frac{e^{2}}{4\pi\epsilon a_{\text{B}}r_{s}^{3/2}}\sqrt{\frac{S_{a}}{2\pi}}\cdot e^{-\sqrt{r_{s}}S_{a}}\left(\det^{\prime}\bigl{[}\sqrt{r_{s}}\,\mathbf{\hat{M}}^{(a)}(\tau)\bigr{]}\right)^{-\frac{1}{2}},\]
where the prime denotes that the zero eigenvalue must be omitted in the calculation of the determinant. Note that since an instanton is a localized object with a characteristic size \(\Delta\tau=\Omega^{-1}\), one can neglect the exponentially small correction from its tail provided \(\beta\hbar\Omega\gg 1\).
On the other hand, the diagonal propagator in the zero instanton sector \(\left\langle\mathbf{r}_{0}\right|e^{-\beta H}\left|\mathbf{r}_{0}\right\rangle \left|{}_{0\text{-inst}}\right.\) can be obtained by making a harmonic approximation of \(V\) around \(\mathbf{r}_{0}\):
\[\left\langle\mathbf{r}_{0}\right|e^{-\beta H}\left|\mathbf{r}_{0}\right\rangle\left|{}_{0\text{-inst}}\approx\left(\det\left[\sqrt{r_{s}}\,\mathbf{\hat{M}}^{(0)}(\tau)\right]\right)^{-\frac{1}{2}}, \tag{14}\] \[\hat{M}^{(0)}_{ij}(\tau)\equiv-\delta_{ij}\frac{\partial^{2}}{\partial\tau^{2}}+\partial_{i}\partial_{j}V(\mathbf{r}_{0}). \tag{15}\]
Normalizing the propagator in the one instanton sector by that in the zero instanton sector, as in Eq. (10), one obtains
\[J_{a} =\beta^{-1}\,\frac{\left\langle P_{a}\mathbf{r}_{0}\right|e^{- \beta H}\left|\mathbf{r}_{0}\right\rangle\left|{}_{a,\text{1-inst}}\right.}{ \left\langle\mathbf{r}_{0}\right|e^{-\beta H}\left|\mathbf{r}_{0}\right\rangle \left|{}_{0\text{-inst}}\right.}\] \[=\frac{e^{2}}{4\pi\epsilon a_{\text{B}}}\cdot\frac{A_{a}}{r_{s}^ {5/4}}\sqrt{\frac{S_{a}}{2\pi}}\ e^{-\sqrt{r_{s}}S_{a}}>0, \tag{16}\] \[A_{a} =\left[\frac{\det^{\prime}\bigl{(}-\partial_{\tau}^{2}+V^{\prime \prime}\bigl{[}\mathbf{r}^{(a)}(\tau)\bigr{]}\bigr{)}}{\det\big{(}-\partial_{ \tau}^{2}+V^{\prime\prime}(\mathbf{r}_{0})\bigr{)}}\right]^{-\frac{1}{2}}, \tag{17}\]
where \(A_{a}\) is called a "fluctuation determinant," calculated in the normalized coordinates with \(r_{s}=1\), and the \(\beta\to\infty\) limit is implicitly taken in the end. In the second line, the extra factor of \(r_{s}^{1/4}\) comes from the normalization of the determinant
\[\left(\frac{\det^{\prime}\bigl{[}\sqrt{r_{s}}\,\mathbf{\hat{M}}^{(a)}(\tau) \bigr{]}}{\det\bigl{[}\sqrt{r_{s}}\,\mathbf{\hat{M}}^{(0)}(\tau)\bigr{]}} \right)^{-\frac{1}{2}}=r_{s}^{1/4}\left(\frac{\det^{\prime}\bigl{[}\mathbf{ \hat{M}}^{(a)}(\tau)\bigr{]}}{\det\bigl{[}\mathbf{\hat{M}}^{(0)}(\tau)\bigr{]} }\right)^{-\frac{1}{2}}.\]
Hence, \(A_{a}\) (17) (and also \(S_{a}\)) are dimensionless numbers with no \(r_{s}\) dependence. In Eq. (17), \(V^{\prime\prime}\) denotes the Hessian matrix of \(V\). We refer readers to Appendix A for the
Figure 4: Tunneling processes considered in this paper. (a) WC exchange processes. (b) Exchange processes involving an interstitial. (c) Interstitial hopping processes. (d) Exchange processes involving a vacancy. (e) Vacancy hopping processes. In (b,c), black arrows indicate the positions of interstitials. In (e), a black (cyan) oval denotes an initial (final) vacancy configuration corresponding to each vacancy hopping process. \(t_{11},t_{12},t_{22},t_{23}\) exhaust all the nearest-neighbor vacancy hopping processes; others are related to one of these by symmetry. Panels (a-e) are adapted from Ref. [16].
details of the numerical calculation of \(S_{a}\) and \(A_{a}\). For the ring-exchange processes illustrated in Fig. 4(a), we quote the results for \(S_{a}\) and \(A_{a}\) from Ref. [15]: \(S_{2}=1.64\), \(A_{2}=1.30\); \(S_{3}=1.53\), \(A_{3}=1.10\); \(S_{4}=1.66\), \(A_{4}=1.24\); \(S_{5}=1.91\), \(A_{5}=1.57\); \(S_{6}=1.78\), \(A_{6}=1.45\). Our calculations, and those of Ref. [14], agree with these values. The resulting exchange coefficients calculated from the semi-classical expression (16) are shown in Fig. 5.
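These constants plug directly into Eq. (16). The short script below (our illustration, in units of \(e^{2}/4\pi\epsilon a_{B}\)) evaluates the ring-exchange coefficients at a few couplings; the three-particle exchange \(J_{3}\), which has the smallest action, dominates at large \(r_{s}\), consistent with Fig. 5.

```python
import numpy as np

# (S_a, A_a) for the WC ring exchanges of Fig. 4(a), quoted from Ref. [15]
params = {"J2": (1.64, 1.30), "J3": (1.53, 1.10), "J4": (1.66, 1.24),
          "J5": (1.91, 1.57), "J6": (1.78, 1.45)}

def J(S, A, rs):
    """Semi-classical exchange coefficient, Eq. (16)."""
    return (A / rs**1.25) * np.sqrt(S / (2.0 * np.pi)) * np.exp(-np.sqrt(rs) * S)

for rs in (30.0, 50.0, 70.0):
    row = "  ".join(f"{name}={J(S, A, rs):.2e}" for name, (S, A) in params.items())
    print(f"rs = {rs:>4.0f}:  {row}")
```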
The remaining issue concerns the sign factor \((-1)^{P_{a}}\) that enters \(H_{\rm eff}^{\rm wc}\) in Eq. (7), which is due to the antisymmetry of the many-body electronic wave function (see chapter V of Ref. [12] for an explanation). As recognized by Thouless [22], this implies that a ring-exchange process involving an even (odd) number of electrons mediates an antiferromagnetic (ferromagnetic) interaction.
### Processes involving a single interstitial
Tunneling processes involving a single centered interstitial (CI) defect [Fig. 2(a)] were first considered in Ref. [16]. We correct and refine the results obtained there: (1) The sign error in the correlated hopping terms \(t_{2}\) and \(t_{2}^{\prime}\) in Eq. (4) of Ref. [16] is corrected in Eq. (18); (2) We improve the estimate of the classical action (which is done by solving the classical equations of motion for a finite sized system with periodic boundary conditions) using a hexagonal, instead of a rectangular, supercell with \(12\times 12+1\) electrons; and (3) We explicitly calculate the fluctuation determinants \(A_{a}\) rather than simply making dimensional estimates. Fig. 4(b-c) show the tunneling processes considered in this paper with the corresponding \(S_{a}\) and \(A_{a}\) listed in Table 1. The hopping matrix elements \(t_{a}>0\) are again expressed in terms of \(S_{a}\) and \(A_{a}\) as in Eq. 16. Note that four hopping processes have smaller actions than those of exchange processes and hence are more important when \(r_{s}\gg 1\). The effective Hamiltonian in the presence of a dilute concentration of interstitials is (corrected from Ref. [16])
\[H_{\rm eff}^{\rm i}= -t_{1}\sum_{\langle n,n^{\prime}\rangle}\sum_{\sigma}c_{n,\sigma}^{\dagger}c_{n^{\prime},\sigma}\] \[-t_{2}\sum_{\begin{subarray}{c}(n,j,n^{\prime})\\ \in(t_{2}\ {\rm path})\end{subarray}}\sum_{\sigma,\sigma^{\prime}}f_{j,\sigma}^{\dagger}c_{n,\sigma^{\prime}}^{\dagger}f_{j,\sigma^{\prime}}c_{n^{\prime},\sigma}\] \[-t_{2}^{\prime}\sum_{\begin{subarray}{c}(n,j,n^{\prime})\\ \in(t_{2}^{\prime}\ {\rm path})\end{subarray}}\sum_{\sigma,\sigma^{\prime}}f_{j,\sigma}^{\dagger}c_{n,\sigma^{\prime}}^{\dagger}f_{j,\sigma^{\prime}}c_{n^{\prime},\sigma}\] \[-t_{2}^{\prime\prime}\sum_{\begin{subarray}{c}(n,j,n^{\prime})\\ \in(t_{2}^{\prime\prime}\ {\rm path})\end{subarray}}\sum_{\sigma,\sigma^{\prime}}f_{j,\sigma}^{\dagger}c_{n,\sigma^{\prime}}^{\dagger}f_{j,\sigma^{\prime}}c_{n^{\prime},\sigma}\] \[-\sum_{a\in({\rm CI\ ex.})}(-1)^{P_{a,\rm i}}J_{a,\rm i}\left(\hat{\mathcal{P}}_{a,\rm i}+\hat{\mathcal{P}}_{a,\rm i}^{-1}\right)\] \[+\ \cdots\ +\ \left[U=\infty\right], \tag{18}\]
where \(f_{j,\sigma}^{\dagger}\) (\(c_{n,\sigma}^{\dagger}\)) is the creation operator of electrons that live on the WC sites \(j\) (triangular plaquette centers \(n\)),
Figure 5: Exchange coefficients of the pure WC (in units of the Hartree energy \(e^{2}/4\pi\epsilon a_{B}\)) as a function of \(r_{s}\), calculated from the semi-classical expression (16). The processes corresponding to \(J_{2},\ldots,J_{6}\) are schematically illustrated in Fig. 4(a). Within the WC phase (\(r_{s}\gtrsim 30\)), the instanton approximation is well-justified for the calculation of these ring-exchange processes since \(\sqrt{r_{s}}S_{a}\gg 1\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Interstitial & \(S_{a}\) & \(A_{a}\) & Vacancy & \(S_{a}\) & \(A_{a}\) \\ \hline \(t_{1}\) (50,14) & 0.098 & 0.23 & \(\Delta\) (50,12) & 0.16 & 0.19 \\ \hline \(t_{2}\) (50,14) & 0.022 & 0.088 & \(t_{11}\) (50,12) & 0.011 & 0.050 \\ \hline \(t_{2}^{\prime}\) (50,12) & 0.10 & 0.22 & \(t_{22}\) (40,10) & 0.31 & 0.19 \\ \hline \(t_{2}^{\prime\prime}\) (50,12) & 0.23 & 0.41 & \(t_{12}\) (50,10) & 0.13 & 0.091 \\ \hline \(J_{2,\rm i}\) (50,14) & 0.37 & 0.26 & \(t_{23}\) (60,12) & 0.27 & 0.056 \\ \hline \(J_{3,\rm i}\) (50,12) & 0.56 & 0.29 & \(J_{2,\rm v}\) (50,16) & 0.68 & 0.23 \\ \hline \(J_{3,\rm i}^{\prime}\) (40,14) & 0.69 & 0.32 & \(J_{2,\rm v}^{\prime}\) (30,12) & 0.76 & 0.49 \\ \hline \(J_{3,\rm i}^{\prime\prime}\) (40,14) & 0.82 & 0.26 & \(J_{2,\rm v}^{\prime\prime}\) (50,16) & 2.02 & 2.65 \\ \hline \(J_{4,\rm i}\) (40,14) & 0.91 & 0.23 & \(J_{3,\rm v}\) (50,16) & 0.68 & 0.72 \\ \hline \(J_{4,\rm i}^{\prime}\) (40,16) & 0.54 & 0.69 & \(J_{3,\rm v}^{\prime}\) (30,16) & 1.90 & 2.72 \\ \hline & & & \(J_{4,\rm v}\) (50,16) & 0.66 & 0.58 \\ \hline & & & \(J_{6,\rm v}\) (50,16) & 1.23 & 1.27 \\ \hline \end{tabular}
\end{table}
Table 1: Dimensionless actions \(S_{a}\) and fluctuation determinants \(A_{a}\) for the tunneling processes illustrated in Fig. 4(b-e), calculated in this paper. The parentheses in the first and fourth columns denote \((N_{\rm move},M)\), where \(N_{\rm move}\) is the number of electrons that are allowed to adjust their positions during minimization and \(M\) is the number of time slices for the discretized tunneling paths (i.e., there are \(M-1\) intermediate configurations). Processes for a centered interstitial (vacancy) are calculated in a hexagonal supercell with \(12\times 12+1\) (\(10\times 10-1\)) electrons, starting and ending at fully relaxed defect configurations.
and the \(U=\infty\) condition precludes any double occupancy. \(\sigma,\sigma^{\prime}=\uparrow,\downarrow\) are the spin indices that are summed over. \(a\in\) (CI ex.) denotes one of the exchange processes involving an interstitial shown in Fig. 4(b). The omitted terms correspond to hopping and exchange processes other than those shown in Fig. 4(b-c) and direct (elastic) interactions between interstitials [26]. Figure 6(a-b) shows the hopping matrix elements (\(t\)) and exchange coefficients (\(J\)) for processes involving an interstitial calculated from the semi-classical expression (16).
### Processes involving a single vacancy
The classical vacancy defect has \(D_{2}\) symmetry instead of the full \(D_{6}\) symmetry of the underlying triangular lattice [19] [see Fig. 2(b)]. Therefore, associated with each location of a vacancy, there are 3 inequivalent orientations related by \(C_{6}\) rotations. We will denote these by an index \(\alpha=1,2,3\); \(\alpha=2\) and 3 are related to \(\alpha=1\) by \(\mathcal{C}_{6}\) and \(\mathcal{C}_{6}^{2}\) respectively.
We considered tunneling processes involving a single vacancy defect as illustrated in Fig. 4(d-e), with their corresponding values of \(S_{a}\) and \(A_{a}\) listed in Table 1. The calculation is done in a hexagonal supercell containing \(10\times 10-1\) electrons. Again, matrix elements \(\Delta,t_{a},J_{\mathrm{a,v}}>0\) are given by Eq. (16) in terms of \(S_{a}\) and \(A_{a}\). Note that, as in the interstitial case, the tunnel barriers (determined by \(S_{a}\)) for hopping processes are smaller than those for exchange processes [27].
The resulting effective Hamiltonian describing the dynamics of vacancies can be written straightforwardly as follows. First, corresponding to each orientation \(\alpha\) of a vacancy, we introduce a (hard-core) bosonic operator \(b_{i,\alpha}^{\dagger}\) that suitably relaxes the positions of the WC electrons near the vacancy site \(i\) to the associated configuration that minimizes the (classical) Coulomb energy. The operator that creates a vacancy in the WC at site \(i\) with orientation \(\alpha\) is thus \(f_{i,\alpha}b_{i,\alpha}^{\dagger}\). Then, with the definitions \(\mathbf{b}_{i}^{\dagger}\equiv[b_{i,1}^{\dagger},b_{i,2}^{\dagger},b_{i,3}^{ \dagger}]\) and
\[\mathfrak{D}\equiv\begin{bmatrix}0&\Delta&\Delta\\ \Delta&0&\Delta\\ \Delta&\Delta&0\end{bmatrix},\ \Upsilon\equiv\begin{bmatrix}t_{11}&t_{12}&t_{12}\\ t_{12}&t_{22}&t_{23}\\ t_{12}&t_{23}&t_{22}\end{bmatrix},\ \mathcal{C}_{6}=\begin{bmatrix}0&1&0\\ 0&0&1\\ 1&0&0\end{bmatrix}, \tag{19}\]
the effective Hamiltonian in the presence of a dilute
Figure 6: Hopping matrix elements and exchange coefficients involving a defect (in units of \(e^{2}/4\pi\epsilon a_{B}\)) within the semi-classical approximation. Note that the y-axis scale here is a factor of 100 larger than in Fig. 5. Hence, the dynamical processes involving an interstitial or a vacancy have much larger energy scales than the exchange processes in the pure WC.
concentration of vacancies is
\[H_{\rm eff}^{\rm v}=-\sum_{i,\sigma}\Bigg{[} \mathbf{b}_{i}^{\dagger}\mathfrak{D}\mathbf{b}_{i}+\sum_{\delta=\pm\mathbf{e}_{1}}f_{i,\sigma}^{\dagger}f_{i+\delta,\sigma}\mathbf{b}_{i+\delta}^{\dagger}\Upsilon\,\mathbf{b}_{i}\] \[+\sum_{\delta=\pm\mathbf{e}_{2}}f_{i,\sigma}^{\dagger}f_{i+\delta,\sigma}\mathbf{b}_{i+\delta}^{\dagger}\ \mathcal{C}_{6}^{-1}\Upsilon\mathcal{C}_{6}\ \mathbf{b}_{i}\] \[+\sum_{\delta=\pm\mathbf{e}_{3}}f_{i,\sigma}^{\dagger}f_{i+\delta,\sigma}\mathbf{b}_{i+\delta}^{\dagger}\ \mathcal{C}_{6}^{-2}\Upsilon\mathcal{C}_{6}^{2}\ \mathbf{b}_{i}\Bigg{]}\] \[-\sum_{a\in({\rm V}\ {\rm ex.})}(-1)^{P_{a,\rm v}}J_{a,\rm v}\left(\hat{\mathcal{P}}_{a,\rm v}+\hat{\mathcal{P}}_{a,\rm v}^{-1}\right)\] \[+\ \cdots\ +\ [U=\infty]\,, \tag{20}\]
where \(\mathbf{e}_{1}=[1,0]\), \(\mathbf{e}_{2}=[1/2,\sqrt{3}/2]\), \(\mathbf{e}_{3}=[-1/2,\sqrt{3}/2]\), and \(f_{i,\sigma}^{\dagger}\) is again the creation operator of an electron living at the WC site \(i\). The first term describes on-site orientation-mixing processes corresponding to \(\Delta\); the second term describes vacancy hopping processes in the \(\pm\mathbf{e}_{1}\) directions; and the third (fourth) term describes vacancy hopping processes in the \(\pm\mathbf{e}_{2}\) (\(\pm\mathbf{e}_{3}\)) directions, which can be related to the second term by \(\mathcal{C}_{6}\) (\(\mathcal{C}_{6}^{2}\)) rotation [see Fig. 4(e)]. In the fifth term, \(a\in({\rm V}\ {\rm ex.})\) denotes one of the exchange processes around a vacancy shown in Fig. 4(d). The omitted terms correspond to hopping and exchange processes other than those shown in Fig. 4(d-e) and direct (elastic) interactions between vacancies [26]. Figure 6(c-d) shows the hopping matrix elements (\(t\)) and exchange coefficients (\(J\)) for processes involving a vacancy calculated from the semi-classical expression (16).
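The orientation structure of Eqs. (19)-(20) can be checked explicitly. In the sketch below (our addition), we assemble \(\mathfrak{D}\), \(\Upsilon\), and \(\mathcal{C}_{6}\) from the Table 1 entries via Eq. (16), verify that the isotropic combination \((1,1,1)/\sqrt{3}\) is an eigenvector of \(\mathfrak{D}\) with eigenvalue \(2\Delta\), and, in the simplest reading of the projection used later in Sec. III, read off an isotropic-sector hopping \(t_{\Delta}^{\rm eff}=\langle{\rm iso}|\Upsilon|{\rm iso}\rangle\), which is the same for all three bond directions because \(\mathcal{C}_{6}|{\rm iso}\rangle=|{\rm iso}\rangle\).

```python
import numpy as np

def amp(S, A, rs):
    """Matrix element from the semi-classical expression (16)."""
    return (A / rs**1.25) * np.sqrt(S / (2.0 * np.pi)) * np.exp(-np.sqrt(rs) * S)

rs = 50.0
# (S_a, A_a) for the single-vacancy processes, from Table 1
Delta = amp(0.16, 0.19, rs)
t11, t22 = amp(0.011, 0.050, rs), amp(0.31, 0.19, rs)
t12, t23 = amp(0.13, 0.091, rs), amp(0.27, 0.056, rs)

D = Delta * (np.ones((3, 3)) - np.eye(3))            # matrix D of Eq. (19)
Y = np.array([[t11, t12, t12],
              [t12, t22, t23],
              [t12, t23, t22]])                      # Upsilon of Eq. (19)
C6 = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])

iso = np.ones(3) / np.sqrt(3)                        # isotropic superposition
assert np.allclose(D @ iso, 2.0 * Delta * iso)       # eigenvalue 2*Delta

# Hopping along +-e_2 is C6^{-1} Upsilon C6; its isotropic matrix element
# coincides with that of Upsilon since C6 leaves `iso` invariant.
Y2 = np.linalg.inv(C6) @ Y @ C6
t_eff = iso @ Y @ iso
assert np.isclose(iso @ Y2 @ iso, t_eff)
print(f"2*Delta = {2*Delta:.3e},   t_Delta_eff = {t_eff:.3e}")
```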
### Two-Dimensional Bose Gas
For Coulomb-interacting bosonic particles, one merely needs to substitute \((-1)^{P_{a}}\to+1\) in \(H_{\rm eff}^{\rm wc}\) (7) without changing the forms of \(H_{\rm eff}^{\rm i}\) and \(H_{\rm eff}^{\rm v}\) [(18) and (20)]. The consequence is that all ring-exchange processes and interstitial and vacancy hopping processes mediate ferromagnetism. This is a special case of a more general result that the ground state of an interacting multi-component bosonic system is a fully polarized ferromagnet [28; 29].
## III A single defect: exact diagonalization study
In this section, we present the results of a finite-size exact diagonalization study (up to \(3\times 6\pm 1\) electrons) of the derived effective Hamiltonians, Eqs. (18) and (20), in the single-defect sector [30]. Figure 7 summarizes the result of the exact diagonalization calculation.
The maximum kinetic energy gain for an interstitial is calculated by obtaining the ground state of \(H_{\rm eff}^{\rm i}\) (18) in the single-interstitial sector. We retained all the terms shown in Fig. 4(b-c) except for \(J_{3,\rm i}^{\prime\prime}\) and \(J_{4,\rm i}\). (In the range of \(20\leq r_{s}\leq 100\) considered, they are more than an order of magnitude smaller than the dominant terms in the Hamiltonian.) The resulting kinetic energy gain, \(E_{\rm i}^{\rm kin}(r_{s})<0\), is calculated for a system of \(3\times 6\) WC sites with an additional interstitial (i.e., a total of \(3\times 6+1\) electrons) with periodic boundary conditions. Including the classical Coulomb energy and the zero-point vibrational energy, the minimum interstitial energy (in units of \(e^{2}/4\pi\epsilon a_{\rm B}\)) is
\[E_{\rm i}(r_{s})=\frac{C_{1,\rm i}}{r_{s}}+\frac{C_{3/2,\rm i}}{r_{s}^{3/2}}+E _{\rm i}^{\rm kin}(r_{s})+\cdots\,. \tag{21}\]
Here, we have neglected terms corresponding to higher order perturbative corrections (i.e., higher powers of \(r_{s}^{-1/2}\)) from phonon anharmonicity as well as higher order corrections to the semi-classical instanton approximation. We calculated \(C_{1,\rm i}\) and \(C_{3/2,\rm i}\) for supercells up to size \(28\times 28+1\); extrapolation to an infinite supercell size gives \(C_{1,\rm i}=0.0769\) and \(C_{3/2,\rm i}=-0.295\) [31]. The semi-classical expression for the interstitial energy (21)
Figure 7: Exact diagonalization results for the effective Hamiltonians (18, 23) on a \(3\times 6\) triangular lattice WC with periodic boundary conditions in the presence of a single defect. The semi-classical expressions (Fig. 6) are used as an input for the various matrix elements. (a) The ground state energy in the presence of a single interstitial (\(E_{\rm i}\)) and a vacancy (\(E_{\rm v}\)) as a function of \(r_{s}\) from the resulting semi-classical expressions (21, 25). \(E_{\rm i}(r_{s})\) [\(E_{\rm v}(r_{s})\)] crosses zero around \(r_{s}=r_{\rm mit}\approx 70\) [\(r_{s}\approx 30\)]. (b) The relative spin polarization (\(0\leq 2S_{\rm tot}^{z}/N_{e}\leq 1\)) induced by a single interstitial and vacancy.
is plotted as a function of \(r_{s}\) in Fig. 7(a).
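To see how the competition in Eq. (21) plays out, the toy script below (our addition) evaluates the classical-plus-zero-point part with the extrapolated coefficients and locates the zero of \(E_{\rm i}\) after subtracting a crude model kinetic term \(E_{\rm i}^{\rm kin}\approx-z\,t_{1}(r_{s})\), where \(z\) is a hypothetical band-bottom prefactor; in the paper itself, \(E_{\rm i}^{\rm kin}\) comes from exact diagonalization of the full Hamiltonian (18), not from this single-hopping estimate.

```python
import numpy as np
from scipy.optimize import brentq

C1, C32 = 0.0769, -0.295               # extrapolated coefficients of Eq. (21)

def t1(rs, S=0.098, A=0.23):           # t_1 from Table 1 via Eq. (16)
    return (A / rs**1.25) * np.sqrt(S / (2.0 * np.pi)) * np.exp(-np.sqrt(rs) * S)

def E_i(rs, z):
    """Toy interstitial energy: classical + zero-point - z * t_1."""
    return C1 / rs + C32 / rs**1.5 - z * t1(rs)

for z in (3, 6, 12):                   # hypothetical band-bottom prefactors
    root = brentq(lambda rs: E_i(rs, z), 20.0, 200.0)
    print(f"z = {z:2d}:  E_i crosses zero near rs ~ {root:.0f}")
```

Even this crude estimate places the crossing at \(r_{s}\sim 30\)-\(90\) depending on \(z\), in the same ballpark as the exact-diagonalization value \(r_{\rm mit}\approx 70\).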
For the vacancy, the on-site orientation-mixing term \(\Delta\) is the largest energy scale in the range of \(20\leq r_{s}\leq 100\), as shown in Fig. 6(c-d). Therefore, we simplify the vacancy problem by projecting it into the "isotropic single vacancy sector," whose basis states are equal superpositions of all the vacancy orientations \(\alpha\) at a site \(i\):
\[\ket{i;\{\sigma\}_{\rm wc}} \equiv\frac{1}{\sqrt{3}}\sum_{\alpha=1}^{3}\ket{i,\alpha;\{\sigma\}_{\rm wc}} \tag{22}\] \[=\frac{1}{\sqrt{3}}\sum_{\alpha=1}^{3}b_{i,\alpha}^{\dagger}\,f_{1,\sigma_{1}}^{\dagger}\cdots\not{f}_{i,\sigma_{i}}^{\dagger}\cdots f_{N,\sigma_{N}}^{\dagger}\ket{\emptyset}.\]
Here \(\{\sigma\}_{\rm wc}\) are the spins of the WC electrons, and the slash in \(\not{f}_{i,\sigma_{i}}^{\dagger}\) denotes that the corresponding operator is omitted from the product. The projection of \(H_{\rm eff}^{\rm v}\) (20) to the isotropic single vacancy sector is straightforward, and yields
\[H_{\rm eff}^{\rm v}|_{\Delta} =-2\Delta-t_{\Delta}^{\rm eff}\sum_{\langle i,j\rangle,\sigma}\big{(}f_{i,\sigma}^{\dagger}f_{j,\sigma}+{\rm H.c.}\big{)}\] \[\quad+\sum_{i}(1-n_{i})\bigg{[}J_{2}^{\rm eff}\sum_{\langle j,k\rangle}\hat{\mathcal{P}}_{j,k}-J_{3}^{\rm eff}\sum_{\langle j,k,l\rangle}\big{(}\hat{\mathcal{P}}_{j,k,l}+{\rm H.c.}\big{)}\] \[\quad+J_{4}^{\rm eff}\sum_{\langle j,k,l,m\rangle}\big{(}\hat{\mathcal{P}}_{j,k,l,m}+{\rm H.c.}\big{)}+\cdots\bigg{]}, \tag{23}\]
be stable against quantum melting at somewhat higher densities (smaller \(r_{s}\)). Thus, this carries with it the likely implication that \(r_{\rm melt}<r_{\rm melt}^{*}\approx 31\).
## V Kinetic magnetism
Here, we discuss the magnetic correlations induced by defect hopping processes [40].
Distinct interstitial hopping terms induce different magnetic correlations in the underlying WC. The character of the dominant magnetic correlations induced by each hopping process is determined by the parity of the smallest spin permutation it induces [22]. For example, by applying \(t_{2}^{\prime\prime}\) terms twice on the interstitial, one recovers the same charge configuration but with 3 electrons (spins) permuted. This is an even permutation and mediates ferromagnetism as discussed in Sec. II.1. Similarly, the smallest permutation that the \(t_{2}\) terms induce involves 7 electrons (even permutation) and also mediates ferromagnetism. On the other hand, the smallest spin permutation induced by the \(t_{2}^{\prime}\) process involves 4 electrons (odd permutation) and mediates antiferromagnetism. The \(t_{1}\) hopping term does not couple with the underlying WC and hence does not induce magnetism by itself. Taken together, the various hopping terms, in combination with exchange processes \(J_{a,\rm i}\), lead to a complicated problem with competing magnetic tendencies.
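The parity bookkeeping is easy to automate: a cyclic permutation of \(k\) electrons decomposes into \(k-1\) transpositions, so odd cycles are even permutations (ferromagnetic) and even cycles are odd permutations (antiferromagnetic). A minimal sketch (ours) for the processes just mentioned:

```python
def cycle_sign(k):
    """Sign of a k-cycle: it decomposes into k - 1 transpositions."""
    return (-1) ** (k - 1)

# Smallest spin permutations induced by the interstitial hopping processes,
# with electron counts as quoted in the text
processes = {"t2'' applied twice": 3,
             "t2 smallest loop": 7,
             "t2' smallest loop": 4}
for name, k in processes.items():
    tendency = "ferromagnetic" if cycle_sign(k) == +1 else "antiferromagnetic"
    print(f"{name}: {k}-cycle, sign {cycle_sign(k):+d} -> {tendency}")
```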
Interestingly, the interstitial dynamics induces nontrivial spin polarization \(2S_{\rm tot}^{z}/N_{e}\), as shown in Fig. 7(b), where \(S_{\rm tot}^{z}\) is the total \(S^{z}\) quantum number and \(N_{e}\) is the number of electrons in the system. For \(20\leq r_{s}<70\), the interstitial seems to always favor a single spin-flip in a fully polarized background (this is also true for smaller systems of \(3\times 4+1\) or \(3\times 5+1\) electrons) [41].
In the presence of small antiferromagnetic WC exchange interactions, a single interstitial can only delocalize in a finite region, forming a large magnetic polaron of size \(\sim a_{0}^{2}\sqrt{t/J}\)[16; 35; 42], where \(t\) and \(J\) are characteristic values of interstitial hopping matrix elements and WC exchange coefficients, respectively. At \(r_{s}\approx 45\), we estimate that a single interstitial induces a magnetic polaron involving \(\sim 40\) WC spins.
On the other hand, it is known that the dynamics of a single hole in the \(U=\infty\) Hubbard model on a non-bipartite lattice leads to some form of antiferromagnetism [43; 44; 45; 46; 47]; therefore, assuming that the isotropic vacancy is energetically favored, its hopping processes mediate antiferromagnetic correlations around it. In the presence of competing exchange interactions of the underlying WC, a vacancy similarly forms a finite-sized antiferromagnetic polaron.
By controlled doping of a WC in the presence of a smoothly varying weak external periodic potential, one can obtain the defect-doped commensurate WC phase as a stable ground state, as the following reasoning shows. Consider a weak commensurate potential that has minima \(-W<0\) at the triangular lattice WC sites. When the density is tuned away (but not too far away) from the commensurate value, the defect-doped commensurate WC has an energy per electron \(\Delta E_{\rm comm}/N\approx-W+E_{\rm def}|\delta|+O(W|\delta|,\delta^{2})\) as compared to the pure incommensurate WC, where \(\delta\) is the ratio of defect electrons to the total number of electrons, and \(E_{\rm def}=E_{\rm i}\) (\(E_{\rm v}\)) is the energy of an interstitial (vacancy) defect in the absence of the external potential. Therefore, for a range of doping \(-W/E_{\rm v}<\delta<W/E_{\rm i}\), the system will form a defect-doped metallic WC phase that is commensurately locked to the external potential. Such a phase, in turn, is characterized by defect-induced magnetic correlations with much higher energy scales than the exchange processes of the pure WC. Therefore, one expects that the magnetic energy scale increases as one moves away from the commensurate filling. Such a proposal may be experimentally tested in certain moiré systems that support a commensurately locked WC phase [48; 49; 50; 51; 52].
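The resulting stability window is elementary to evaluate; a tiny sketch (ours, with purely illustrative values of \(W\) and of the defect energies, all in units of \(e^{2}/4\pi\epsilon a_{B}\)):

```python
# Illustrative inputs: W is the hypothetical depth of the weak commensurate
# potential; E_i and E_v are the interstitial and vacancy energies.
W, E_i, E_v = 2e-4, 5e-4, 1.5e-3

delta_min, delta_max = -W / E_v, W / E_i   # window -W/E_v < delta < W/E_i
print(f"defect-doped commensurate WC stable for "
      f"{delta_min:+.3f} < delta < {delta_max:+.3f}")
```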
## VI Effects of weak disorder
Before concluding, we remark on the effect of small quenched disorder on the phase diagram (Fig. 1). Firstly, even weak disorder is expected to destroy any long-range crystalline order; hence all the electronic crystalline states we have discussed are defined only in an approximate sense as short-range ordered states. Also, the MeC phase is characterized by the reduced density of mobile electrons and their increased effective mass; hence, even weak disorder is likely to result in strong localization and destroy the metallic character of the phase. The resulting disorder-induced intermediate insulating phase is characterized by large magnetic energy scales, associated with the dynamical processes of defects. This may be an explanation for the recently observed insulating phases with much higher magnetic energy than the exchange scales of the pure WC [5; 7; 8]. Note that such a proposal predicts an exponential reduction of magnetic energy scales with increasing \(r_{s}\) for \(r_{s}>r_{\rm mit}\)[53].
## Acknowledgement
We thank Boris Spivak for initial insights which led to this investigation and Akshat Pandey for collaboration on a previous work. We appreciate Veit Elser, Brian Skinner and Shafayat Hossain for interesting comments on the draft. K-S.K. acknowledges the hospitality of the Massachusetts Institute of Technology, where this work was completed, and thanks Aidan Reddy, Seth Musser and Yubo Paul Yang for helpful discussions. I.E. acknowledges Eugene Demler, Hongkun Park, Jibo Sung, Pavel Volkov, Jue Wang, and Yubo Yang for helpful discussions on related work. K-S.K. and SAK were supported in part by the Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under contract DE-AC02-76SF00515
at Stanford. I.E. was supported by AFOSR Grant No. FA9550-21-1-0216 and the University of Wisconsin-Madison. C.M. was supported in part by the Gordon and Betty Moore Foundation's EPiQS Initiative through GBMF8686, and in part by the National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135. Parts of the computing for this project were performed on the Sherlock computing cluster at Stanford University.
## Appendix A Numerical calculations of \(S\) and \(A\)
In this section, we review a numerical method for calculating \(S_{a}\) (5) and \(A_{a}\) (17), closely following Ref. [15]. Although we applied the semi-classical instanton calculation to the 2DEG specifically, the method outlined here applies to any system with a general potential \(V(\mathbf{r})\) with degenerate minima in the semi-classical limit. We first calculate the instanton action \(S_{a}\) (5) by discretizing a tunneling path:
\[S_{a} =\int_{\mathbf{r}_{0}}^{\mathbf{r}_{0}^{\prime}}d\mathbf{r}\,\sqrt{2\Delta V(\mathbf{r})} \tag{18}\] \[\approx\sum_{k=1}^{M}\frac{1}{2}|\mathbf{r}_{k}-\mathbf{r}_{k-1}|\cdot\big{[}\sqrt{2\Delta V(\mathbf{r}_{k})}+\sqrt{2\Delta V(\mathbf{r}_{k-1})}\big{]},\]
where we defined \(\Delta V(\mathbf{r})\equiv V(\mathbf{r})-V(\mathbf{r}_{0})\) and used the semi-classical equation of motion \(\ddot{\mathbf{r}}=\nabla V(\mathbf{r})\). \(\mathbf{r}_{0}\) and \(\mathbf{r}_{0}^{\prime}\) are initial and final minimum configurations of \(V\), respectively, \(\mathbf{r}_{k}\equiv\mathbf{r}(\tau_{k})\) is the collective coordinate of particles at time \(\tau_{k}\), where \(0\equiv\tau_{0}<\tau_{1}<\tau_{2}<\cdots<\tau_{M}\equiv\tilde{\beta}\), and \(\mathbf{r}_{M}\equiv\mathbf{r}_{0}^{\prime}\). In order to make the distances \(|\mathbf{r}_{k}-\mathbf{r}_{k-1}|\) approximately equal, each \(\mathbf{r}_{k}\) is taken to be constrained in the hyperplane defined by \((\mathbf{r}_{k}-\mathbf{r}_{0})\cdot(\mathbf{r}_{0}^{\prime}-\mathbf{r}_{0})=\frac{k}{M}|\mathbf{r}_{0}^{\prime}-\mathbf{r}_{0}|^{2}\). Numerical minimization of the discretized action (18) is performed with a standard optimization package [54]. We will henceforth denote by \(\mathbf{r}_{k}\) (\(k=0,1,\ldots,M\)) the optimized tunneling path for the \(a\)-instanton process.
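As a self-contained illustration of this procedure (ours, for a toy two-dimensional potential rather than the 2DEG), the script below minimizes the discretized action (18) over slice coordinates constrained to the hyperplanes described above; with the tunneling direction along \(x\), the constraint amounts to fixing \(x_{k}\) on each slice and optimizing only the transverse coordinate.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2D potential with degenerate minima at (-1, 0) and (+1, 0); the second
# term bends the least-action path away from the straight line y = 0.
def V(x, y):
    return 0.5 * (x**2 - 1.0)**2 + 4.0 * (y - 0.3 * (1.0 - x**2))**2

M = 60
xs = np.linspace(-1.0, 1.0, M + 1)   # hyperplane constraint: x_k is fixed

def action(ys_inner):
    ys = np.concatenate(([0.0], ys_inner, [0.0]))    # endpoints are minima
    f = np.sqrt(2.0 * np.maximum(V(xs, ys), 0.0))    # V vanishes at the minima
    seg = np.hypot(np.diff(xs), np.diff(ys))
    return np.sum(0.5 * seg * (f[1:] + f[:-1]))      # trapezoid rule, Eq. (18)

straight = action(np.zeros(M - 1))
res = minimize(action, np.zeros(M - 1), method="L-BFGS-B")
print(f"S(straight path) = {straight:.4f},  S(optimized path) = {res.fun:.4f}")
```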
The fluctuation determinant \(A_{a}\) captures the Gaussian fluctuations around the semi-classical path
\[A_{a}=\frac{F^{\prime}\big{[}\mathbf{r}^{(a)}(\tau)\big{]}}{F\big{[}\mathbf{r}_{0}\big{]}}=\left[\frac{\det^{\prime}\big{(}-\partial_{\tau}^{2}+V^{\prime\prime}\big{[}\mathbf{r}^{(a)}(\tau)\big{]}\big{)}}{\det\big{(}-\partial_{\tau}^{2}+V^{\prime\prime}(\mathbf{r}_{0})\big{)}}\right]^{-\frac{1}{2}}, \tag{19}\] \[F\big{[}\mathbf{r}(\tau)\big{]}\equiv\int_{\delta\mathbf{r}(0)=0}^{\delta\mathbf{r}(\tilde{\beta})=0}D\delta\mathbf{r}(\tau)\exp\left[-\frac{1}{2}\int_{0}^{\tilde{\beta}}d\tau\,\Big{(}\delta\mathbf{\dot{r}}(\tau)^{2}+\delta\mathbf{r}(\tau)^{\mathrm{T}}V^{\prime\prime}\big{[}\mathbf{r}^{(a)}(\tau)\big{]}\delta\mathbf{r}(\tau)\Big{)}\right]=\bra{\mathbf{0}}\mathcal{T}\exp\Big{(}-\int_{0}^{\tilde{\beta}}d\tau\,h[\mathbf{r}^{(a)}(\tau)]\Big{)}\ket{\mathbf{0}},\] (20) \[h[\mathbf{r}(\tau)]\equiv-\frac{1}{2}\nabla^{2}+\frac{1}{2}\delta\mathbf{r}(\tau)^{\mathrm{T}}V^{\prime\prime}\big{[}\mathbf{r}(\tau)\big{]}\delta\mathbf{r}(\tau), \tag{21}\]
where \(\mathcal{T}\exp(\cdots)\) denotes the imaginary-time-ordered exponential, \(\delta\mathbf{r}(\tau)\equiv\mathbf{r}(\tau)-\mathbf{r}^{(a)}(\tau)\) is the fluctuation coordinate, and the primed determinant in the first line is again computed with the zero mode omitted. \(\tilde{\beta}\rightarrow\infty\) is implicitly taken in the end in calculating \(A_{a}\). As discussed below, the calculation of \(A_{a}\) can be done numerically by first computing \(F\big{[}\mathbf{r}^{(a)}(\tau)\big{]}/F\big{[}\mathbf{r}_{0}\big{]}\) that includes the zero mode contribution, and then multiplying by the square root of the smallest eigenvalue (which is exponentially small in \(\tilde{\beta}\)) of the operator \(-\partial_{\tau}^{2}+V^{\prime\prime}\big{[}\mathbf{r}^{(a)}(\tau)\big{]}\).
\(F\big{[}\mathbf{r}^{(a)}(\tau)\big{]}\) can be calculated by discretizing the path integral expression (20). First, we further define the time slices intermediate to those defined above
\[0<\tau_{1/2}<\tau_{1}<\tau_{3/2}<\cdots<\tau_{M-1/2}<\tau_{M}\equiv\tilde{\beta}, \tag{22}\]
where each interval, \(\Delta\tau_{k}\equiv\tau_{k+\frac{1}{2}}-\tau_{k-\frac{1}{2}}\) (\(k=1,\cdots,M-1\)), is calculated by inverting the semi-classical equation of motion
\[\Delta\tau_{k} \equiv\int_{\mathbf{r}_{k-\frac{1}{2}}}^{\mathbf{r}_{k+\frac{1}{ 2}}}\frac{d\mathbf{r}}{\sqrt{2\Delta V[\mathbf{r}^{(a)}(\tau)]}}\] \[\approx\frac{1}{\sqrt{2\Delta V(\mathbf{r}_{k})}}\cdot\frac{1}{ 2}\left(|\mathbf{r}_{k+1}-\mathbf{r}_{k}|+|\mathbf{r}_{k}-\mathbf{r}_{k-1}| \right), \tag{23}\]
and analogously for the end intervals, \(\Delta\tau_{0}\equiv\tau_{1}\approx\frac{1}{\sqrt{2\Delta V(\mathbf{r}_{0})}} \cdot\frac{1}{2}|\mathbf{r}_{1}-\mathbf{r}_{0}|\) and \(\Delta\tau_{M}\equiv\tau_{M}-\tau_{M-\frac{1}{2}}\approx\frac{1}{\sqrt{2 \Delta V(\mathbf{r}_{M})}}\cdot\frac{1}{2}|\mathbf{r}_{M}-\mathbf{r}_{M-1}|\). (Note that the end intervals formally diverge, \(\Delta\tau_{0,M}\rightarrow\infty\), as \(\tilde{\beta}\rightarrow\infty\).) Then, the propagator at each interval can be approximated by that of the quantum harmonic oscillator (Mehler kernel) of \(h[\mathbf{r}^{(a)}(\tau)]\approx h_{k}\equiv h[\mathbf{r}_{k}]\)
\[\left\langle\delta{\bf r}_{k+\frac{1}{2}}\right|e^{-\Delta\tau_{k}h_{k} }\left|\delta{\bf r}_{k-\frac{1}{2}}\right\rangle=\prod_{n=1}^{2N}\left(\sqrt{B _{n,k}^{(a)}}\exp[-S_{n,k}^{(a)}]\right), \tag{100}\] \[S_{n,k}^{(a)}=\frac{A_{n,k}^{(a)}}{2}\left[\left\langle{\bf v}_{ n,k}\left|\delta{\bf r}_{k-\frac{1}{2}}\right\rangle^{2}+\left\langle{\bf v}_{n,k} \left|\delta{\bf r}_{k+\frac{1}{2}}\right\rangle^{2}\right]-B_{n,k}^{(a)} \left\langle{\bf v}_{n,k}\left|\delta{\bf r}_{k-\frac{1}{2}}\right\rangle \left\langle{\bf v}_{n,k}\left|\delta{\bf r}_{k+\frac{1}{2}}\right\rangle,\right.\right. \tag{101}\]
where
\[V^{\prime\prime}({\bf r}_{k}){\bf v}_{n,k}\equiv(\omega_{n,k})^{ 2}{\bf v}_{n,k},\quad(n=1,\cdots,2N), \tag{102}\] \[A_{n,k}^{(a)}\equiv\frac{\omega_{n,k}}{\tanh(\omega_{n,k}\Delta \tau_{k})},\ B_{n,k}^{(a)}\equiv\frac{\omega_{n,k}}{\sinh(\omega_{n,k}\Delta \tau_{k})}. \tag{103}\]
Eq. (102) defines normal mode frequencies and eigenmodes at each time slice \(k\). Note that at intermediate times \(k\neq 0,M\), \(\omega_{n,k}\) is in general complex. At the end intervals \(k=0,M\), one substitutes \(\delta{\bf r}_{-\frac{1}{2}}\rightarrow\delta{\bf r}(0)={\bf 0}\) and \(\delta{\bf r}_{M+\frac{1}{2}}\rightarrow\delta{\bf r}(\tilde{\beta})={\bf 0}\) in the above expressions. (Note that as \(\tilde{\beta}\rightarrow\infty\), the propagators at the end intervals approach zero exponentially. However, as we will see below, such contributions cancel when calculating \(A_{a}\) as we are calculating the ratio between two \(F\)s.)
\(F\big{[}{\bf r}^{(a)}(\tau)\big{]}\) can finally be computed by integrating over the intermediate fluctuation coordinates \(\delta{\bf r}_{k-\frac{1}{2}}\)
\[F\big{[}{\bf r}^{(a)}(\tau)\big{]} =\int\left(\prod_{k=1}^{M}d^{2N}\delta{\bf r}_{k-\frac{1}{2}} \right)\left\langle{\bf 0}\middle|e^{-\Delta\tau_{M}h_{M}}\middle|\delta{\bf r}_{M -\frac{1}{2}}\right\rangle\left\langle\delta{\bf r}_{M-\frac{1}{2}}\middle|e^ {-\Delta\tau_{M-1}h_{M-1}}\middle|\delta{\bf r}_{M-\frac{3}{2}}\right\rangle \cdots\left\langle\delta{\bf r}_{\frac{1}{2}}\middle|e^{-\Delta\tau_{0}h_{0}} \middle|{\bf 0}\right\rangle\] \[=\left(\prod_{k=0}^{M}\prod_{n=1}^{2N}\sqrt{B_{n,k}^{(a)}}\right) \det({\cal M}^{(a)})^{-\frac{1}{2}}, \tag{104}\] \[{\cal M}^{(a)} \equiv\sum_{k=1}^{M}e_{k,k}\otimes({\cal A}_{k-1}^{(a)}+{\cal A} _{k}^{(a)})-\sum_{k=1}^{M-1}(e_{k,k+1}+e_{k+1,k})\otimes{\cal B}_{k}^{(a)}\] \[=\begin{bmatrix}{\cal A}_{0}^{(a)}+{\cal A}_{1}^{(a)}&-{\cal B}_{ 1}^{(a)}&{\bf 0}&\cdots&{\bf 0}\\ -{\cal B}_{1}^{(a)}&{\cal A}_{1}^{(a)}+{\cal A}_{2}^{(a)}&-{\cal B}_{2}^{(a)}& \cdots&{\bf 0}\\ {\bf 0}&-{\cal B}_{2}^{(a)}&{\cal A}_{2}^{(a)}+{\cal A}_{3}^{(a)}&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&-{\cal B}_{M-1}^{(a)}\\ {\bf 0}&{\bf 0}&\cdots&-{\cal B}_{M-1}^{(a)}&{\cal A}_{M-1}^{(a)}+{\cal A}_{M}^{( a)}\end{bmatrix},\] (105) \[{\cal A}_{k}^{(a)} \equiv\sum_{n=1}^{2N}A_{n,k}^{(a)}{\bf v}_{n,k}{\bf v}_{n,k}^{ \rm T},\ {\cal B}_{k}^{(a)}\equiv\sum_{n=1}^{2N}B_{n,k}^{(a)}{\bf v}_{n,k}{\bf v}_{n,k}^ {\rm T}. \tag{106}\]
Here, \({\cal M}^{(a)}\) is a real symmetric block tridiagonal matrix, \(e_{i,j}\) is the \(M\times M\) matrix with 1 at the \((i,j)\)-th entry with all other entries 0, \(\otimes\) is the Kronecker product of two matrices and \({\cal A}_{k}^{(a)}\) and \({\cal B}_{k}^{(a)}\) are \(2N\times 2N\) matrices. [Note that in the present WC problem, one needs to project out two zero eigen-modes \({\bf v}_{n,k}\) (for each \(k\)) corresponding to uniform translations in the \(x\) and \(y\) directions; hence \({\cal A}_{k}^{(a)}\) and \({\cal B}_{k}^{(a)}\) become \((2N-2)\times(2N-2)\) matrices.]
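Determinants of matrices with the block tridiagonal structure of \(\mathcal{M}^{(a)}\) are conveniently evaluated by a sequential Schur-complement recursion instead of dense factorization. A self-contained sketch (ours, on random symmetric positive-definite test blocks rather than actual instanton data) that validates the recursion against a dense computation:

```python
import numpy as np

def block_tridiag_logdet(A, B):
    """log(det) of the symmetric block tridiagonal matrix with diagonal
    blocks A[k] and off-diagonal blocks -B[k] coupling slices k and k+1,
    via Schur complements: S_k = A_k - B_{k-1}^T S_{k-1}^{-1} B_{k-1}."""
    Sk = A[0]
    logdet = np.linalg.slogdet(Sk)[1]
    for k in range(1, len(A)):
        Sk = A[k] - B[k - 1].T @ np.linalg.solve(Sk, B[k - 1])
        logdet += np.linalg.slogdet(Sk)[1]
    return logdet

rng = np.random.default_rng(0)
M, d = 8, 4                          # number of slices, block dimension
A = [x @ x.T + 3.0 * np.eye(d) for x in rng.standard_normal((M, d, d))]
B = [0.2 * x for x in rng.standard_normal((M - 1, d, d))]

dense = np.zeros((M * d, M * d))     # assemble the full matrix for comparison
for k in range(M):
    dense[k*d:(k+1)*d, k*d:(k+1)*d] = A[k]
for k in range(M - 1):
    dense[k*d:(k+1)*d, (k+1)*d:(k+2)*d] = -B[k]
    dense[(k+1)*d:(k+2)*d, k*d:(k+1)*d] = -B[k].T
assert np.isclose(block_tridiag_logdet(A, B), np.linalg.slogdet(dense)[1])
print("block tridiagonal log-det matches the dense evaluation")
```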
In calculating \(F[{\bf r}_{0}]\)--which essentially is the propagator of a quantum harmonic oscillator--with the same procedure, one merely substitutes \(h_{k}\to h_{0}\) in Eqs. (100)-(105):
\[F[{\bf r}_{0}]=\left(\prod_{k=0}^{M}\prod_{n=1}^{2N}\sqrt{B_{n,k}^ {(0)}}\right)\det({\cal M}^{(0)})^{-\frac{1}{2}} \tag{107}\] \[A_{n,k}^{(0)}\equiv\frac{\omega_{n,0}}{\tanh(\omega_{n,0}\Delta \tau_{k})},\ \ B_{n,k}^{(0)}\equiv\frac{\omega_{n,0}}{\sinh(\omega_{n,0}\Delta \tau_{k})},\] (108) \[{\cal A}_{k}^{(0)}\equiv\sum_{n=1}^{2N}A_{n,k}^{(0)}{\bf v}_{n,0}{ \bf v}_{n,0}^{\rm T},\ {\cal B}_{k}^{(0)}\equiv\sum_{n=1}^{2N}B_{n,k}^{(0)}{\bf v}_{n,0}{\bf v}_{n,0}^ {\rm T},\] \[{\cal M}^{(0)}\equiv\sum_{k=1}^{M}e_{k,k}\otimes({\cal A}_{k-1}^{(0 )}+{\cal A}_{k}^{(0)})\]
\[-\sum_{k=1}^{M-1}\left(e_{k,k+1}+e_{k+1,k}\right)\otimes\mathcal{B}_{k}^{(0)}. \tag{16}\]
Therefore,
\[\frac{F\big{[}\mathbf{r}^{(a)}(\tau)\big{]}}{F\big{[}\mathbf{r}_{0} \big{]}}=\left(\prod_{k=1}^{M-1}\prod_{n=1}^{2N}\frac{B_{n,k}^{(a)}}{B_{n,k}^{(0 )}}\right)^{\frac{1}{2}}\left[\frac{\det(\mathcal{M}^{(a)})}{\det(\mathcal{M}^ {(0)})}\right]^{-\frac{1}{2}}. \tag{17}\]
Here the product over \(k\) runs only from \(1\) to \(M-1\) because the end interval contributions (\(k=0,M\)) of \(B_{n,k}^{(a)}\) and \(B_{n,k}^{(0)}\) are identical although they formally approach \(0\) as \(\tilde{\beta}\to\infty\) [since \(\Delta\tau_{0,M}\to\infty\)]. Similarly, one takes \(A_{n,0}^{(a)}=A_{n,0}^{(0)}=A_{n,M}^{(a)}=A_{n,M}^{(0)}=\omega_{n,0}\) in calculating \(\det\mathcal{M}\), as \(\Delta\tau\to\infty\) [since \(\tanh(\omega_{n,0}\Delta\tau)\to 1\)].
Finally, one needs to divide Eq. 17 by the (formally diverging) zero mode contribution to \(F[\mathbf{r}^{(a)}(\tau)]=\det\big{(}-\partial_{\tau}^{2}+V^{\prime\prime} \big{[}\mathbf{r}^{(a)}(\tau)\big{]}\big{)}^{-1/2}\). For this, we numerically find the smallest \(\lambda\) such that
\[\frac{1}{F_{\lambda}[\mathbf{r}^{(a)}(\tau)]}\equiv\det(-\partial_{\tau}^{2}+ V^{\prime\prime}\big{[}\mathbf{r}^{(a)}(\tau)\big{]}-\lambda)^{\frac{1}{2}}=0, \tag{18}\]
where the left hand side is calculated similarly as in Eqs. (14-15) with the substitution \(V^{\prime\prime}(\mathbf{r}_{k})\to V^{\prime\prime}(\mathbf{r}_{k})-\lambda\). The fluctuation determinant is then obtained as
\[A_{a}=\sqrt{\lambda}\cdot\frac{F\big{[}\mathbf{r}^{(a)}(\tau)\big{]}}{F\big{[} \mathbf{r}_{0}\big{]}}. \tag{19}\]
|
2309.15550 | Bohr's power series theorem in the Minkowski space | The main aim of this paper is to study the $n$-dimensional Bohr radius for
holomorphic functions defined on Reinhardt domain in $\mathbb{C}^n$ with
positive real part. The present investigation is motivated by the work of Lev
Aizenberg [Proc. Amer. Math. Soc. 128 (2000), 2611--2619]. A part of our
investigation in the present paper includes a connection between the classical
Bohr radius and the arithmetic Bohr radius of unit ball in the Minkowski space
$\ell^n_{q}\, , 1\leq q\leq \infty$. Further, we determine the exact value of
Bohr radius in terms of arithmetic Bohr radius. | Vasudevarao Allu, Himadri Halder, Subhadip Pal | 2023-09-27T10:18:22Z | http://arxiv.org/abs/2309.15550v2 | # Bohr's power series theorem in the Minkowski space
###### Abstract.
The main aim of this paper is to study the \(n\)-dimensional Bohr radius for holomorphic functions defined on Reinhardt domain in \(\mathbb{C}^{n}\) with positive real part. The present investigation is motivated by the work of Lev Aizenberg [Proc. Amer. Math. Soc. 128 (2000), 2611-2619]. A part of our investigation in the present paper includes a connection between the classical Bohr radius and the arithmetic Bohr radius of unit ball in the Minkowski space \(\ell_{q}^{n}\,,1\leq q\leq\infty\). Further, we determine the exact value of Bohr radius in terms of arithmetic Bohr radius.
Key words and phrases: Bohr radius, Arithmetic Bohr radius, Holomorphic functions, Reinhardt domain. 2020 Mathematics Subject Classification: Primary 32A05, 32A10, 32A17; Secondary 30B10.
## 1. Introduction
A domain \(\Omega\) centered at the origin in \(\mathbb{C}^{n}\) is said to be a complete Reinhardt domain if \(z=(z_{1},\ldots,z_{n})\in\Omega\) implies \((\xi_{1}z_{1},\ldots,\xi_{n}z_{n})\in\Omega\) for all \(\xi_{i}\in\overline{\mathbb{D}}\), \(i=1,\ldots,n\). Let \(\mathcal{F}(\Omega)\) be the space of all holomorphic mappings \(f\) from \(\Omega\) into \(\mathbb{C}\). We write \(\ell_{p}^{n}\) for the Banach space defined by \(\mathbb{C}^{n}\) endowed with the \(p\)-norm \(\left\|z\right\|_{p}:=\left(\sum_{i=1}^{n}\left|z_{i}\right|^{p}\right)^{1/p}\), \(1\leq p<\infty\), and \(\left\|z\right\|_{\infty}:=\sup_{1\leq i\leq n}\left|z_{i}\right|\). For \(q\in[1,\infty]\), consider the unit balls in the Minkowski space \(\ell_{q}^{n}\) as
\[B_{\ell_{q}^{n}}=\left\{z\in\mathbb{C}^{n}:\left\|z\right\|_{q}=\left(\sum_{i =1}^{n}\left|z_{i}\right|^{q}\right)^{1/q}<1\right\}\ \text{ for }\ 1\leq q<\infty\]
and \(B_{\ell_{\infty}^{n}}=\{z\in\mathbb{C}^{n}:\left\|z\right\|_{\infty}=\sup_{1 \leq i\leq n}\left|z_{i}\right|<1\}\) which are Reinhardt domains of special interest in our context. For each Reinhardt domain \(\Omega\), denote the Bohr radius by \(K^{n}(\Omega)\) with respect to \(\mathcal{F}(\Omega)\) as the supremum of all \(r\in[0,1]\) such that
\[\sup_{z\in r\Omega}\sum_{\alpha\in\mathbb{N}_{0}^{n}}\left|x_{\alpha}(f)z^{\alpha}\right|\leq\left\|f\right\|_{\Omega} \tag{1.1}\]
for all \(f\in\mathcal{F}(\Omega)\) with \(f(z)=\sum_{\alpha\in\mathbb{N}_{0}^{n}}x_{\alpha}(f)z^{\alpha}\), where \(\left\|f\right\|_{\Omega}=\sup\{\left|f(z)\right|:z\in\Omega\}\) and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). We write \(K^{n}(\Omega)=K(\Omega)\) for \(n=1\). The celebrated theorem of Bohr [10] states that \(K(\mathbb{D})=1/3\). The inequality (1.1) is usually called a Bohr inequality, and the occurrence of this type of inequality for all functions in \(\mathcal{F}(\Omega)\) is known as the Bohr phenomenon. When \(\Omega=\mathbb{D}\), (1.1) is the classical Bohr inequality and \(K(\mathbb{D})=1/3\) is the classical Bohr radius. Surprisingly, the exact value of the constant \(K^{n}(\Omega)\) is not known for any other domain. The pioneering results of Boas and Khavinson [8] and Boas [9] provide partially successful estimates for the Bohr radius \(K^{n}(\Omega)\) for \(\Omega=B_{\ell_{q}^{n}}\), \(q\in[1,\infty]\). Their approach towards these estimates shows how difficult it is to obtain the exact value of \(K^{n}(B_{\ell_{q}^{n}})\). Therefore, it is always challenging to find estimates of \(K^{n}(\Omega)\) for an arbitrary Reinhardt domain.
In recent years, there has been great progress in finding the exact value of the multidimensional Bohr radius. The Bohr phenomenon has been studied in many different areas of mathematics: for instance, for Banach algebras and uniform algebras (see [26, 27]), for complex manifolds (see [2, 3]), for ordinary and vector valued Dirichlet series (see [6, 17]), for elliptic equations (see [4]), for the Faber-Green condenser (see [24]), for free holomorphic functions (see [29]), for vector-valued holomorphic functions (see [19]), for local Banach space theory (see [13]), for the domain of monomial convergence (see [18]), for harmonic and pluriharmonic mappings (see [11]), for Hardy spaces (see [7]), and also in multidimensional settings (see [1, 8, 9, 14, 15, 25]). The classical Bohr inequality was overlooked and did not receive much attention for many years until it was used by Dixon [20] to answer a long-standing open question related to Banach algebras satisfying the von Neumann inequality. In 1989, Dineen and Timoney [22] initiated the study of the constant \(K^{n}(B_{\ell^{n}_{\infty}})\), and their result was clarified by Boas and Khavinson in [8]. In 1997, Boas and Khavinson [8] obtained the following lower and upper bounds of \(K^{n}(B_{\ell^{n}_{\infty}})\) for each \(n\in\mathbb{N}\) with \(n\geq 2\):
\[\frac{1}{3\sqrt{n}}<K^{n}(B_{\ell^{n}_{\infty}})<2\sqrt{\frac{\log n}{n}}. \tag{1.2}\]
The exact value of \(K^{n}(B_{\ell^{n}_{\infty}})\) is still an open problem, and the paper of Boas and Khavinson [8] has aroused new interest in the multidimensional Bohr radius problem; it has been a source of inspiration for many researchers to work further on this problem. Later, Aizenberg [1] obtained the following estimates of the constant \(K^{n}(B_{\ell^{n}_{1}})\):
\[\frac{1}{3e^{1/3}}<K^{n}(B_{\ell^{n}_{1}})\leq\frac{1}{3}. \tag{1.3}\]
In 2000, Boas [9] extended the estimates (1.2) and (1.3) to \(K^{n}(B_{\ell^{n}_{q}})\) for \(1<q<\infty\). For fixed \(n>1\), Boas [9] has shown that, if \(1\leq q<2\), then
\[\frac{1}{3\sqrt[3]{e}}\left(\frac{1}{n}\right)^{1-\frac{1}{q}}\leq K^{n}(B_{\ell^{n}_{q}})<3\left(\frac{\log\,n}{n}\right)^{1-\frac{1}{q}} \tag{1.4}\]
and if \(2\leq q\leq\infty\), then
\[\frac{1}{3}\sqrt{\frac{1}{n}}\leq K^{n}(B_{\ell^{n}_{q}})<2\sqrt{\frac{\log\, n}{n}}. \tag{1.5}\]
In view of (1.4) and (1.5), we see that the upper bounds contain a logarithmic factor but the lower bounds do not. For almost nine years, it was believed that the lower bounds in (1.2), (1.4), and (1.5) could not be improved. Later, in 2006, Defant and Frerick [15] obtained a logarithmic lower bound which gives an almost correct asymptotic estimate for the Bohr radius \(K^{n}(B_{\ell^{n}_{q}})\) with \(1\leq q\leq\infty\). In particular, Defant and Frerick proved that, if \(1\leq q\leq\infty\), then there is a constant \(c>0\) such that
\[\frac{1}{c}\left(\frac{\log\,n/\log\,\log\,n}{n}\right)^{1-\frac{1}{\min(q,2) }}\leq K^{n}(B_{\ell^{n}_{q}})\ \ \mbox{for all}\ \ n>1. \tag{1.6}\]
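To get a feel for these estimates, the following small Python script (our illustration; the constant \(c\) in (1.6) is unspecified, so we set \(c=1\) purely for display) tabulates the two-sided bounds (1.4)-(1.5), whose \(q\geq 2\) case reproduces (1.2), together with the lower bound (1.6), for a few values of \(n\).

```python
import numpy as np

def bounds(n, q):
    """Lower/upper bounds on K^n(B_{l_q^n}) from (1.4) and (1.5)."""
    if q < 2:
        lo = n**(-(1 - 1/q)) / (3 * np.e**(1/3))
        hi = 3 * (np.log(n) / n)**(1 - 1/q)
    else:
        lo = 1 / (3 * np.sqrt(n))
        hi = 2 * np.sqrt(np.log(n) / n)
    return lo, hi

def defant_frerick(n, q, c=1.0):
    """Logarithmic lower bound (1.6); c is unspecified, set to 1 here."""
    expo = 1 - 1 / min(q, 2)
    return (np.log(n) / np.log(np.log(n)) / n)**expo / c

for n in (10, 100, 10_000):
    for q in (1.5, np.inf):
        lo, hi = bounds(n, q)
        print(f"n = {n:>6}, q = {q}:  {lo:.4f} <= K^n <= {hi:.4f};  "
              f"(1.6) with c=1: {defant_frerick(n, q):.4f}")
```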
The systematic and groundbreaking progress on the Bohr problem for bounded holomorphic functions inspires us to study the Bohr phenomenon for functions that are not necessarily bounded, more precisely for functions whose images lie in the right half-plane. It was Aizenberg, Aytuna, and Djakov [2] who first made an incredible contribution to this problem by using an abstract approach in a more general setting and in the spirit of Functional
Analysis. Aizenberg _et al._ [2] proved that if \(f(z)=\sum_{k=0}^{\infty}a_{k}z^{k}\) is any holomorphic function with positive real part and \(f(0)>0\), then
\[\sum_{k=0}^{\infty}|a_{k}z^{k}|\leq 2f(0) \tag{1.7}\]
for \(|z|\leq 1/3\), and the constant \(1/3\) cannot be improved. It is worth mentioning that, without loss of generality, we can assume \(f(0)=1\). Let \(\mathcal{B}(\Omega)\) be the class of all holomorphic functions \(f:\Omega\to\mathbb{C}\) such that \(\mathrm{Re}(f(z))>0\) and \(f(0)=1\). Later this work and (1.7) were extended to the several variable setting by Aizenberg _et al._ [5], while the \(p\)-Bohr radius for functions in \(\mathcal{B}(\Omega)\) in the single variable setting has been extensively studied in [21]. Motivated by the approaches in [5] and [21], Das [12] has recently considered (1.7) in a more general setting for holomorphic functions in \(B_{\ell^{n}_{\infty}}\) with positive real part. Here we consider (1.7) for functions in \(\mathcal{B}(\Omega)\), where \(\Omega\) is an arbitrary Reinhardt domain in \(\mathbb{C}^{n}\). For \(p>0\), let \(H^{n}_{p}(\Omega)\) denote the supremum of all \(r\geq 0\) such that
\[\sup_{z\in r\Omega}\left\{\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha}(f)z^{\alpha}|^{p}\right)^{\frac{1}{p}}\right\}\leq 1 \tag{1.8}\]
for all \(f\in\mathcal{B}(\Omega)\) with \(f(z)=\sum_{\alpha\in\mathbb{N}^{n}_{0}}c_{\alpha}(f)z^{\alpha}\). It is easy to see that \(H^{1}_{1}(\mathbb{D})=1/3\) while \(H^{1}_{p}(\mathbb{D})=((2^{p}-1)/(2^{p+1}-1))^{1/p}\) for any \(p>0\) (see [21]). For different values of \(p\), \(H^{n}_{p}(B_{\ell^{n}_{\infty}})\) has the following surprising asymptotic behavior due to [12].
**Theorem 1.1**.: _[_12_]_ _For any \(n>1\),_
\[H^{n}_{p}(B_{\ell^{n}_{\infty}})=\left(\frac{2^{p}-1}{2^{p+1}-1}\right)^{\frac {1}{p}}\]
_for \(p\in[2,\infty)\) and_
\[H^{n}_{p}(B_{\ell^{n}_{\infty}})\sim\left(\frac{\log\ n}{n}\right)^{\frac{2-p }{2p}}\]
_for \(p\in(0,2)\)._
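Both single-variable facts underlying these results can be checked numerically on the half-plane extremal function \(f(z)=(1+z)/(1-z)\in\mathcal{B}(\mathbb{D})\), whose Taylor coefficients are \(c_{0}=1\) and \(c_{k}=2\) for \(k\geq 1\). The short sketch below (our illustration; it demonstrates sharpness for this particular \(f\), not the full proofs) confirms equality in (1.7) at \(r=1/3\) and equality in (1.8) at the closed-form radius \(H^{1}_{p}(\mathbb{D})=((2^{p}-1)/(2^{p+1}-1))^{1/p}\).

```python
import numpy as np

def bohr_sum(r, kmax=100_000):
    """sum_k |c_k| r^k for f(z) = (1+z)/(1-z): c_0 = 1 and c_k = 2."""
    k = np.arange(1, kmax + 1)
    return 1.0 + 2.0 * np.sum(r**k)

print("Eq. (1.7):", bohr_sum(1/3), " = 2 f(0) at r = 1/3")

def H1p(p):
    """Closed form H^1_p(D) = ((2^p - 1) / (2^{p+1} - 1))^{1/p} from [21]."""
    return ((2.0**p - 1.0) / (2.0**(p + 1) - 1.0))**(1.0 / p)

def lhs_18(r, p, kmax=100_000):
    """Left-hand side of (1.8) for the same extremal function."""
    k = np.arange(1, kmax + 1)
    return 0.5 * (1.0 + np.sum((2.0 * r**k)**p))**(1.0 / p)

for p in (1.0, 2.0, 3.0):
    r = H1p(p)
    print(f"p = {p}:  H^1_p(D) = {r:.6f},  LHS of (1.8) at this r = {lhs_18(r, p):.6f}")
```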
The main aim of this paper is to study the exact value of \(H^{n}_{p}(\Omega)\) in terms of the arithmetic Bohr radius, which was introduced and extensively studied by Defant _et al._ [16]. To the best of our knowledge, nothing has been done to describe \(H^{n}_{p}(\Omega)\) in terms of the arithmetic Bohr radius. Recently, Kumar [23] has studied the arithmetic Bohr radius and answered certain questions raised by Defant _et al._ in [16]. The arithmetic Bohr radius has rich properties, one of which is describing the domain of existence of the monomial expansion of bounded holomorphic functions in a complete Reinhardt domain (see [28]). These rich properties for bounded holomorphic functions defined in complete Reinhardt domains inspire us to study the following notion. The _arithmetic Bohr radius_ of \(\Omega\) with respect to the class \(\mathcal{B}(\Omega)\), denoted by \(A_{p}(\mathcal{B}(\Omega))\), is defined by
\[A_{p}(\mathcal{B}(\Omega)):=\sup\left\{\frac{1}{n}\sum_{j=1}^{n}r_{j}\,|\,r\in\mathbb{R}^{n}_{\geq 0},\,\forall\,f\in\mathcal{B}(\Omega):\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha}(f)r^{\alpha}|^{p}\right)^{\frac{1}{p}}\leq 1\right\},\]
where \(1\leq p<\infty\) and \(\mathbb{R}^{n}_{\geq 0}=\{r=(r_{1},\ldots,r_{n})\in\mathbb{R}^{n}:r_{i}\geq 0,1\leq i\leq n\}\). We write \(A_{p}(\Omega)\) for \(A_{p}(\mathcal{B}(\Omega))\). It is worth noting that \(A_{p}(\cdot)\) is increasing, that is, \(A_{p}(\Omega_{1})\leq A_{p}(\Omega_{2})\)
whenever \(\Omega_{1}\subset\Omega_{2}.\) Let \(\mathcal{P}(\Omega)\) be the set of all polynomials in \(\mathcal{B}(\Omega)\) and \(\mathcal{P}^{m}(\Omega)\) denote the set of all \(m\)-homogeneous polynomials in \(\mathcal{B}(\Omega)\) defined on \(\Omega\).
## 2. Main Results
In our first result, we provide an estimate for arithmetic Bohr radius of \(\mathcal{B}(\Omega)\) in terms of the arithmetic Bohr radius for \(m\)-homogeneous polynomials in \(\mathcal{B}(\Omega)\), where \(\Omega\) being complete Reinhardt domain.
**Proposition 2.1**.: _Let \(\Omega\) be a complete Reinhardt domain in \(\mathbb{C}^{n}\) and \(1\leq p<\infty\). Then we have_
\[\frac{1}{3^{1/p}}\,A_{p}\left(\bigcup_{m=1}^{\infty}\mathcal{P}^{m}(\Omega) \right)\leq A_{p}\left(\mathcal{B}(\Omega)\right)\leq A_{p}\left(\bigcup_{m=1 }^{\infty}\mathcal{P}^{m}(\Omega)\right). \tag{2.2}\]
We present the next main result as Theorem 2.1, where we obtain the exact value of the \(n\)-dimensional Bohr radius \(H_{p}^{n}(B_{\ell_{q}^{n}})\) in terms of the arithmetic Bohr radius \(A_{p}(B_{\ell_{q}^{n}})\) of the unit ball in \(\ell_{q}^{n}\)-spaces. Before coming to Theorem 2.1, we establish a relation between the arithmetic Bohr radius \(A_{p}(\Omega)\) and the Bohr radius \(H_{p}^{n}(\Omega)\) for a bounded Reinhardt domain \(\Omega\) in \(\mathbb{C}^{n}\), which we state as Lemma 2.4. To make the statement precise, we require the following notation from [14]. For bounded Reinhardt domains \(\Omega_{1},\Omega_{2}\subset\mathbb{C}^{n}\), let
\[S(\Omega_{1},\Omega_{2}):=\inf\left\{t>0:\Omega_{1}\subset t\Omega_{2}\right\}.\]
By a Banach sequence space \(X\), we mean a complex Banach space \(X\subset\mathbb{C}^{\mathbb{N}}\) such that \(\ell_{1}\subset X\subset\ell_{\infty}\). If \(\Omega\) is a bounded Reinhardt domain in \(\mathbb{C}^{n}\) and \(X\) and \(Y\) are Banach sequence spaces we write
\[S(\Omega,B_{X_{n}})=\sup_{z\in\Omega}\left\|z\right\|_{X}\quad\text{and }\quad S(B_{X_{n}},B_{Y_{n}})=\left\|\mathrm{id}:X_{n}\to Y_{n}\right\|, \tag{2.3}\]
where \(X_{n}\) (resp. \(Y_{n}\)) is the space spanned by the first \(n\) canonical basis vectors \(e_{1},\ldots,e_{n}\) in \(X\) (resp. \(Y\)).
**Remark 2.1**.: For a bounded Reinhardt domain \(\Omega\) in \(\mathbb{C}^{n}\), it is easy to observe that \(S(\Omega,t\Omega)=1/t\) and \(S(t\Omega,\Omega)=t\) for all \(t>0\).
The following lemma relates the Bohr radius \(H_{p}^{n}(\Omega)\) and the arithmetic Bohr radius \(A_{p}(\Omega)\) for bounded Reinhardt domain \(\Omega\).
**Lemma 2.4**.: _Let \(\Omega\subset\mathbb{C}^{n}\) be a bounded Reinhardt domain in \(\mathbb{C}^{n}\) and \(1\leq p<\infty\). Then we have_
\[A_{p}(\Omega)\geq\frac{S(\Omega,B_{\ell_{1}^{n}})}{n}H_{p}^{n}(\Omega).\]
As discussed before, now we show the exact value of Bohr radius \(H_{p}^{n}(\Omega)\) in terms of the arithmetic Bohr radius \(A_{p}(\Omega)\) for \(\Omega=B_{\ell_{q}^{n}}\), \(1\leq q\leq\infty\).
**Theorem 2.1**.: _Let \(1\leq p<\infty\). Then for every \(1\leq q\leq\infty\) and for all \(n\in\mathbb{N}\), we have_
\[A_{p}(B_{\ell_{q}^{n}})=\frac{H_{p}^{n}(B_{\ell_{q}^{n}})}{n^{1/q}}.\]
Next, we obtain an interesting relation between the classical Bohr radius \(H_{p}^{1}(\mathbb{D})\) and the arithmetic Bohr radius \(A_{p}(B_{\ell_{q}^{n}})\) for \(1\leq q<\infty\). Further, we shall see that this relation helps us compare the classical Bohr radii for the unit disk and the unit ball in \(\mathbb{C}^{n}\).
**Theorem 2.2**.: _Let \(1\leq p<\infty\). Then for every \(n\in\mathbb{N}\) and \(1\leq q<\infty\) we have_
\[\frac{H_{p}^{1}(\mathbb{D})}{n}\leq A_{p}\left(B_{\ell_{q}^{n}}\right)\leq\left( \frac{H_{p}^{1}(\mathbb{D})}{n^{1/p}}\right)^{1/q}.\]
In view of Theorem 2.1 and Theorem 2.2, we obtain the following interesting estimate.
**Theorem 2.3**.: _For every \(1\leq p,q<\infty\) and \(n\in\mathbb{N}\), we have_
\[\frac{H_{p}^{1}(\mathbb{D})}{n^{1-(1/q)}}\leq H_{p}^{n}(B_{\ell_{q}^{n}})\leq\left(\frac{H_{p}^{1}(\mathbb{D})}{n^{(1/p)-1}}\right)^{1/q}.\]
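Indeed, combining the identity of Theorem 2.1 with the two bounds of Theorem 2.2 yields both estimates at once:

\[H_{p}^{n}(B_{\ell_{q}^{n}})=n^{1/q}A_{p}(B_{\ell_{q}^{n}})\geq n^{1/q}\cdot\frac{H_{p}^{1}(\mathbb{D})}{n}=\frac{H_{p}^{1}(\mathbb{D})}{n^{1-(1/q)}},\qquad H_{p}^{n}(B_{\ell_{q}^{n}})\leq n^{1/q}\left(\frac{H_{p}^{1}(\mathbb{D})}{n^{1/p}}\right)^{1/q}=\left(\frac{H_{p}^{1}(\mathbb{D})}{n^{(1/p)-1}}\right)^{1/q}.\]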
The exact value of the Bohr radius \(H_{p}^{n}(B_{\ell_{\infty}^{n}})\) for the unit polydisc has been studied by Das [12], as we have seen in Theorem 1.1, whereas the exact value for the unit balls in \(\ell_{q}^{n}\)-spaces (\(1\leq q<\infty\)) is still an open problem. In view of Theorem 2.3, we observe that the Bohr radius \(H_{1}^{n}(B_{\ell_{1}^{n}})\) for the unit ball in the \(\ell_{1}^{n}\) space is exactly \(1/3\).
**Corollary 2.5**.: _For every \(n\in\mathbb{N}\), we have_
\[H_{1}^{n}(B_{\ell_{1}^{n}})=H_{1}^{1}(\mathbb{D})=1/3.\]
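Indeed, specializing Theorem 2.3 to \(p=q=1\) makes the lower and the upper bounds coincide:

\[H_{1}^{1}(\mathbb{D})=\frac{H_{1}^{1}(\mathbb{D})}{n^{1-1}}\leq H_{1}^{n}(B_{\ell_{1}^{n}})\leq\left(\frac{H_{1}^{1}(\mathbb{D})}{n^{(1/1)-1}}\right)^{1/1}=H_{1}^{1}(\mathbb{D}).\]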
We also study the case \(q=\infty\), which is not covered by Theorem 2.2, and obtain the following estimate for the arithmetic Bohr radius \(A_{p}(B_{\ell_{\infty}^{n}})\) in terms of the classical Bohr radius \(H_{p}^{1}(\mathbb{D})\).
**Theorem 2.4**.: _Let \(1\leq p<\infty\). Then for each \(n\in\mathbb{N}\), we have_
\[\frac{H_{p}^{1}(\mathbb{D})}{n}\leq A_{p}(B_{\ell_{\infty}^{n}})\leq\frac{H_{ p}^{1}(\mathbb{D})}{n^{(1/p)-1}}.\]
In the following section, we present the proofs of Proposition 2.1, Lemma 2.4, Theorem 2.1, Theorem 2.2, and Theorem 2.4.
## 3. Proof of Main Results
**Proof of Proposition 2.1.** Since we have the following inclusion
\[\bigcup_{m=1}^{\infty}\mathcal{P}^{m}(\Omega)\subset\mathcal{B}(\Omega),\]
the right-hand inequality of (2.2),
\[A_{p}\left(\mathcal{B}(\Omega)\right)\leq A_{p}\left(\bigcup_{m=1}^{\infty} \mathcal{P}^{m}(\Omega)\right) \tag{3.1}\]
holds. Choose \(r\in\mathbb{R}_{\geq 0}^{n}\) such that for every \(m\)-homogeneous polynomial \(g_{m}\in\mathcal{P}^{m}(\mathbb{C}^{n})\) contained in \(\mathcal{B}(\mathbb{C}^{n})\),
\[\frac{1}{2}\left(\sum_{|\alpha|=m}|c_{\alpha}(g_{m})|^{p}r^{p\alpha}\right)^{ \frac{1}{p}}\leq 1. \tag{3.2}\]
Our aim is to show that
\[\frac{1}{3^{1/p}}\,\frac{1}{n}\sum_{i=1}^{n}r_{i}\leq A_{p}(\mathcal{B}(\Omega)). \tag{3.3}\]
Take \(f(z)=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(f)z^{\alpha}\in\mathcal{B}(\Omega)\). Then, in view of (3.2) we obtain
\[\frac{1}{2}\left(\sum_{\alpha\in\mathbb{N}_{0}^{n}}|c_{\alpha}(f)| ^{p}\left(\frac{r^{p}}{3}\right)^{\alpha}\right)^{1/p} =\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha |=m}|c_{\alpha}(f)|^{p}\left(\frac{r^{p}}{3}\right)^{\alpha}\right)^{1/p}\] \[=\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\frac{1}{3^{m} }\sum_{|\alpha|=m}|c_{\alpha}(f)r^{\alpha}|^{p}\right)^{1/p}\] \[\leq\frac{1}{2}\left(1+2^{p}\sum_{m=1}^{\infty}\frac{1}{3^{m}} \right)^{1/p}=\frac{1}{2}\left(1+2^{p-1}\right)^{1/p}\leq 1,\]
which gives the estimate (3.3). Hence,
\[\frac{1}{3^{1/p}}A_{p}(\mathcal{P}^{m}(\Omega))\leq A_{p}(\mathcal{B}(\Omega))\quad\text{for all}\;\;m\geq 1.\]
As a consequence, we obtain the left-hand inequality of (2.2). This completes the proof.
**Proof of Lemma 2.4.** By virtue of (2.3), we have
\[S(\Omega,B_{\ell_{1}^{n}})=\sup_{z\in\Omega}\|z\|_{\ell_{1}^{n}}\,.\]
Thus for given \(0<\epsilon<H_{p}^{n}(\Omega)\), we can find an element \(z_{0}\in\Omega\) such that
\[\|z_{0}\|_{\ell_{1}^{n}}\geq S(\Omega,B_{\ell_{1}^{n}})-\epsilon.\]
Let \(t:=H_{p}^{n}(\Omega)-\epsilon\), \(v:=tz_{0}\), and \(r:=t|z_{0}|=|v|.\) Since \(v\in t\Omega\) and \(t<H_{p}^{n}(\Omega)\), for \(f=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(f)z^{\alpha}\in\mathcal{B}(\Omega)\), we have
\[\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha}(f)|^{p}r^{p\alpha}\right)^{1/p}=\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha}v^{\alpha}|^{p}\right)^{1/p}\leq 1.\]
Therefore, we obtain
\[A_{p}(\Omega)\geq\frac{1}{n}\sum_{i=1}^{n}r_{i}=\frac{\|r\|_{1}}{n}=\frac{H_{p}^{n}(\Omega)-\epsilon}{n}\left\|z_{0}\right\|_{\ell_{1}^{n}}\geq\frac{H_{p}^{n}(\Omega)-\epsilon}{n}\left(S(\Omega,B_{\ell_{1}^{n}})-\epsilon\right)\]
holds for all \(\epsilon>0.\) Letting \(\epsilon\to 0\), we have
\[A_{p}(\Omega)\geq\frac{S(\Omega,B_{\ell_{1}^{n}})}{n}H_{p}^{n}(\Omega).\]
This completes the proof.
**Proof of Theorem 2.1.** In view of Hölder's inequality, we have \(S(B_{\ell_{q}^{n}},B_{\ell_{1}^{n}})=n^{1-(1/q)}\). Using this fact in Lemma 2.4, we obtain the inequality
\[A_{p}(B_{\ell_{q}^{n}})\geq\frac{H_{p}^{n}(B_{\ell_{q}^{n}})}{n^{1/q}}.\]
Therefore, it remains to show that
\[A_{p}(B_{\ell_{q}^{n}})\leq\frac{H_{p}^{n}(B_{\ell_{q}^{n}})}{n^{1/q}}. \tag{3.4}\]
Let \(r=(r_{1},\ldots,r_{n})\in\mathbb{R}_{\geq 0}^{n}\) be such that for all \(h(z)=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(h)z^{\alpha}\in\mathcal{B}(B_{ \ell_{q}^{n}}),\)
\[\frac{1}{2}\left(|c_{0}(h)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha}( h)|^{p}r^{p\alpha}\right)^{1/p}\leq 1.\]
To prove (3.4), it suffices to prove that
\[n^{\frac{1}{q}-1}\left\|r\right\|_{1}\leq H_{p}^{n}(B_{\ell_{q}^{n}}).\]
Let \(f\in\mathcal{B}(B_{\ell_{q}^{n}})\). It is worth noting that for \(u\in n^{(1/q)-1}\left\|r\right\|_{1}\overline{B_{\ell_{q}^{n}}}\), we have \(\left\|u\right\|_{1}\leq\left\|r\right\|_{1}\). Therefore,
\[\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{m=1}^{\infty}\sum_{|\alpha|=m}|c_{\alpha }(f)u^{\alpha}|^{p}\right)^{1/p}\leq 1\]
for every \(u\in n^{(1/q)-1}\left\|r\right\|_{1}\overline{B_{\ell_{q}^{n}}}.\) So, we obtain \(n^{(1/q)-1}\left\|r\right\|_{1}\leq H_{p}^{n}(B_{\ell_{q}^{n}})\). Consequently, it follows that
\[n^{\frac{1}{q}}A_{p}(B_{\ell_{q}^{n}})\leq H_{p}^{n}(B_{\ell_{q}^{n}}),\]
which gives our conclusion. This completes the proof.
**Proof of Theorem 2.2.** First we show the left-hand inequality
\[\frac{H_{p}^{1}(\mathbb{D})}{n}\leq A_{p}\left(B_{\ell_{q}^{n}}\right).\]
Let \(r=H_{p}^{1}(\mathbb{D})\) and \(f\in\mathcal{B}(B_{\ell_{q}^{n}})\). We define \(g(z)=f(ze_{1})=f(z,0,\ldots,0)\) for \(z\in\mathbb{D}\). Then \(g:\mathbb{D}\to\mathbb{C}\) is a holomorphic function on \(\mathbb{D}\) with \(\mathrm{Re}(g(z))>0\) and \(g(0)=1\). Therefore,
\[\frac{1}{2}\left\{|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{\alpha}(f)|^{p}(r,0,\ldots,0)^{p\alpha}\right\}^{\frac{1}{p}}=\frac{1}{2}\left\{|c_{0}(g)|^{p}+\sum_{k=1}^{\infty}|c_{k}(g)|^{p}r^{pk}\right\}^{\frac{1}{p}}\leq 1\]
for all \(f(z)=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(f)z^{\alpha}\in\mathcal{B}( B_{\ell_{q}^{n}})\). Hence, we obtain \(r/n\leq A_{p}(B_{\ell_{q}^{n}})\), which gives our desired inequality.
On the other hand, we want to prove that
\[A_{p}\left(B_{\ell_{q}^{n}}\right)\leq\left(\frac{H_{p}^{1}(\mathbb{D})}{n^{1/ p}}\right)^{1/q}.\]
Let \(r\in\mathbb{R}_{\geq 0}^{n}\) be such that for all \(u\in\mathcal{B}(B_{\ell_{q}^{n}})\), we have
\[\frac{1}{2}\left\{|c_{0}(u)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{ \alpha}(u)|^{p}r^{p\alpha}\right\}^{\frac{1}{p}}\leq 1.\]
Now it is enough to show that
\[\frac{1}{n}\left(\sum_{j=1}^{n}r_{j}\right)\leq\left(\frac{H_{p}^{1}(\mathbb{D })}{n^{1/p}}\right)^{1/q}. \tag{3.5}\]
Fix \(f\in\mathcal{B}(\mathbb{D})\), and define the function
\[v(z)=z_{1}^{q}+\cdots+z_{n}^{q},\quad z=(z_{1},\ldots,z_{n})\in B_{\ell_{q}^{n}}.\]
Consider \(u=f\circ v\) such that \(\operatorname{Re}(u(z))=\operatorname{Re}(f(v(z)))>0\) and \(u(0)=f(v(0))=f(0)=1\). Moreover, for each \(z\in B_{\ell_{q}^{n}}\), we have
\[u(z)=\sum_{k=0}^{\infty}c_{k}(f)v(z)^{k}=\sum_{k=0}^{\infty}c_{k}(f)\sum_{| \alpha|=k}\frac{k!}{\alpha!}z^{q\alpha}=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{ \alpha}(u)z^{q\alpha},\]
where \(c_{\alpha}(u)=c_{k}(f)(k!/\alpha!)\) whenever \(|\alpha|=k\). Then for all \(z\in B_{\ell_{q}^{n}}\), we have
\[\frac{1}{2}\left\{|c_{0}(u)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha |=k}|c_{\alpha}(u)z^{q\alpha}|^{p}\right\}^{\frac{1}{p}} =\frac{1}{2}\left\{|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}|c_{k}(f)|^{ p}\sum_{|\alpha|=k}\left(\frac{k!}{\alpha!}\right)^{p}|z|^{pq\alpha}\right\}^{ \frac{1}{p}}\] \[\geq\frac{1}{2}\left\{|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}|c_{k}(f )|^{p}\sum_{|\alpha|=k}\frac{k!}{\alpha!}|z|^{pq\alpha}\right\}^{\frac{1}{p}}\] \[=\frac{1}{2}\left\{|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}|c_{k}(f)|^{ p}\,\|z\|_{pq}^{pqk}\right\}^{\frac{1}{p}}\]
so that finally we have
\[\frac{1}{2}\left\{|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}|c_{k}(f)|^{p}\,\|r\|_{pq}^ {pqk}\right\}^{\frac{1}{p}}\leq\frac{1}{2}\left\{|c_{0}(u)|^{p}+\sum_{k=1}^{ \infty}\sum_{|\alpha|=k}|c_{\alpha}(u)r^{q\alpha}|^{p}\right\}^{\frac{1}{p}} \leq 1.\]
It follows that \(\|r\|_{pq}^{q}\leq H_{p}^{1}(\mathbb{D})\). By virtue of Hölder's inequality, we have \(\|r\|_{1}^{pq}\leq n^{pq-1}\,\|r\|_{pq}^{pq}\). Hence, we obtain \(n^{1-pq}\,\|r\|_{1}^{pq}\leq(H_{p}^{1}(\mathbb{D}))^{p}\), which gives the estimate (3.5). This completes the proof.
**Proof of Theorem 2.4.** Let \(r=H_{p}^{1}(\mathbb{D})\) and \(f(z)=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(f)z^{\alpha}\in\mathcal{B}(B_{\ell_{\infty}^{n}})\). Consider the function \(g(z)=f(\xi z)\), where \(\xi=(1,0,\ldots,0)\) and \(z\in\mathbb{D}\). Clearly, \(g\) is a holomorphic function on the unit disk \(\mathbb{D}\) with \(\operatorname{Re}(g(z))>0\) and \(g(0)=1\). Then we have
\[\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{\alpha}(f)(r,0,\ldots,0)^{\alpha}|^{p}\right)^{\frac{1}{p}}=\frac{1}{2}\left(|c_{0}(g)|^{p}+\sum_{k=1}^{\infty}|c_{k}(g)|^{p}r^{pk}\right)^{\frac{1}{p}}\leq 1.\]
Therefore, it gives us \((r/n)\leq A_{p}(B_{\ell_{\infty}^{n}})\), and hence we obtain \((H_{p}^{1}(\mathbb{D})/n)\leq A_{p}(B_{\ell_{\infty}^{n}})\). Conversely, we prove that
\[A_{p}(B_{\ell_{\infty}^{n}})\leq\frac{H_{p}^{1}(\mathbb{D})}{n^{1/p-1}}.\]
Suppose \(r\in\mathbb{R}_{\geq 0}^{n}\) is such that for all \(h\in\mathcal{B}(B_{\ell_{\infty}^{n}})\),
\[\frac{1}{2}\left(|c_{0}(h)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{\alpha }(h)|^{p}r^{p\alpha}\right)^{\frac{1}{p}}\leq 1.\]
Let \(f:\mathbb{D}\to\mathbb{C}\) be a holomorphic function such that \(\mathrm{Re}f(z)>0\) and \(f(0)=1\). Now we consider the function \(s:B_{\ell_{\infty}^{n}}\to\mathbb{D}\) defined by
\[s(z)=\frac{1}{n}(z_{1}+\cdots+z_{n}),\quad z\in B_{\ell_{\infty}^{n}}.\]
Now if we set \(h=f\circ s\), then we have \(h\in\mathcal{B}(B_{\ell_{\infty}^{n}})\) with \(\mathrm{Re}(h(z))>0\) and \(h(0)=1\). Also, for each \(z\in B_{\ell_{\infty}^{n}}\),
\[h(z)=\sum_{k=0}^{\infty}c_{k}(f)s(z)^{k}=\sum_{k=0}^{\infty}\frac{c_{k}(f)}{n^{k}}\sum_{|\alpha|=k}\frac{k!}{\alpha!}z^{\alpha}=\sum_{\alpha\in\mathbb{N}_{0}^{n}}c_{\alpha}(h)z^{\alpha},\]
where
\[c_{\alpha}(h)=\frac{k!}{\alpha!}\left(\frac{c_{k}(f)}{n^{k}}\right)\]
whenever \(|\alpha|=k\). Then for all \(z\in B_{\ell_{\infty}^{n}}\), we have
\[\frac{1}{2}\left(|c_{0}(h)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{\alpha}(h)z^{\alpha}|^{p}\right)^{\frac{1}{p}} =\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\frac{|c_{k}(f)|^{p}}{n^{kp}}\sum_{|\alpha|=k}\left(\frac{k!}{\alpha!}\right)^{p}|z|^{p\alpha}\right)^{\frac{1}{p}}\] \[\geq\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\frac{|c_{k}(f)|^{p}}{n^{kp}}\sum_{|\alpha|=k}\left(\frac{k!}{\alpha!}\right)|z|^{p\alpha}\right)^{\frac{1}{p}}\] \[=\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\frac{|c_{k}(f)|^{p}}{n^{pk}}\left\|z\right\|_{p}^{pk}\right)^{\frac{1}{p}}.\]
Finally, we observe that
\[\frac{1}{2}\left(|c_{0}(f)|^{p}+\sum_{k=1}^{\infty}\frac{|c_{k}(f)|^{p}}{n^{pk}}\left\|r\right\|_{p}^{pk}\right)^{\frac{1}{p}}\leq\frac{1}{2}\left(|c_{0}(h)|^{p}+\sum_{k=1}^{\infty}\sum_{|\alpha|=k}|c_{\alpha}(h)|^{p}r^{p\alpha}\right)^{\frac{1}{p}}\leq 1.\]
This shows that \((1/n)\left\|r\right\|_{p}\leq H_{p}^{1}(\mathbb{D}).\) Again we have \(\left\|r\right\|_{1}^{p}\leq n^{p-1}\left\|r\right\|_{p}^{p}.\) Hence we obtain
\[\frac{1}{n}\left\|r\right\|_{1}\leq n^{1-(1/p)}H_{p}^{1}(\mathbb{D}),\]
which gives our desired inequality. This completes the proof.
**Acknowledgment:** The research of the first named author is supported by SERB-CRG (DST), Govt. of India. The research of the second named author is supported by Institute Post-Doctoral Fellowship of IIT Bombay, and the research of the third named author is supported by DST-INSPIRE Fellowship (IF 190721), New Delhi, India.
|
2309.09179 | Syntax Tree Constrained Graph Network for Visual Question Answering | Visual Question Answering (VQA) aims to automatically answer natural language
questions related to given image content. Existing VQA methods integrate vision
modeling and language understanding to explore the deep semantics of the
question. However, these methods ignore the significant syntax information of
the question, which plays a vital role in understanding the essential semantics
of the question and guiding the visual feature refinement. To fill the gap, we
suggested a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based
on entity message passing and syntax tree. This model is able to extract a
syntax tree from questions and obtain more precise syntax information.
Specifically, we parse questions and obtain the question syntax tree using the
Stanford syntax parsing tool. From the word level and phrase level, syntactic
phrase features and question features are extracted using a hierarchical tree
convolutional network. We then design a message-passing mechanism for
phrase-aware visual entities and capture entity features according to a given
visual context. Extensive experiments on VQA2.0 datasets demonstrate the
superiority of our proposed model. | Xiangrui Su, Qi Zhang, Chongyang Shi, Jiachang Liu, Liang Hu | 2023-09-17T07:03:54Z | http://arxiv.org/abs/2309.09179v1 | # Syntax Tree Constrained Graph Network for Visual Question Answering
###### Abstract
Visual Question Answering (VQA) aims to automatically answer natural language questions related to a given image's content. Existing VQA methods integrate vision modeling and language understanding to explore the deep semantics of the question. However, these methods ignore the significant syntax information of the question, which plays a vital role in understanding the essential semantics of the question and guiding the visual feature refinement. To fill the gap, we propose a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based on entity message passing and syntax trees. This model is able to extract a syntax tree from questions and obtain more precise syntax information. Specifically, we parse questions and obtain the question syntax tree using the Stanford syntax parsing tool. From the word level and phrase level, syntactic phrase features and question features are extracted using a hierarchical tree convolutional network. We then design a message-passing mechanism for phrase-aware visual entities and capture entity features according to a given visual context. Extensive experiments on VQA2.0 datasets demonstrate the superiority of our proposed model.
Keywords: Visual question answering · Syntax tree · Message passing · Tree convolution · Graph neural network.
## 1 Introduction
Visual Question Answering (VQA) aims to automatically answer natural language questions related to the content of a given image. It requires both computer vision technology to understand the visual content of images and natural language processing technology to understand the deep semantics of questions. VQA has various potential applications, including image retrieval, image captioning, and visual dialogue systems, and has therefore become an important research area.
Recently, various VQA methods [9, 4, 7, 12] have been proposed to capture significant question semantics and visual features by mining explicit or implicit entity relationships. For example, BAN [9] considers the bilinear interaction between two sets of input channels of images and questions by calculating the
bilinear distribution of attention to fuse visual and textual information. Murel et al. [4] design an atomic inference unit to enrich the interaction between question and image regions and optimize the vision-question interaction by using a sequence composed of multiple atomic units. LCGN [7] designs a question-aware message-passing mechanism, uses question features to guide the refinement of entity features based on the complete entity graph, and realizes the integration of entity features and context information. However, these methods typically capture explicit or implicit entity relationships in images while ignoring the syntactic relations between words, which contribute to capturing the deep semantics of the question.
Intuitively, using a syntax tree in VQA tasks has two major benefits. First, questions are usually short in length, and adding more syntactic information is necessary to understand the semantics of the questions. Second, the syntax tree hierarchically organizes keywords and context words through a tree structure, which is effective for capturing key information in the questions. As shown in the illustration in Fig. 1 (a), the words "person", "left", and "woman", which are far apart in the original question, are adjacent in the syntax tree. These three words are the core information of this question. Therefore, the syntax tree can better capture the key information during feature extraction.
Besides, in the field of VQA, images are the key information source to infer answers. It is also one core objective of the VQA model to understand the information in images. Since images are composed of many visual entities, there are many implicit relationships between these entities. Intuitively, it is necessary to perceive these implicit relationships and achieve effective message passing between entities for obtaining the entity features of scene context awareness.
As shown in Fig. 1 (b), in the process of answering the question, Entity 1 first transmits its own features to Entity 2 based on the phrase "next to the students"; Entity 2 and Entity 4 then pass on their features to Entity 3 based on the phrases "on the right side of the tent" and "with an orange front", and Entity 3 is accordingly able to integrate information from surrounding entities
Figure 1: Examples of a syntax tree and message passing respectively derived from the VQA2.0 dataset. The left figure shows a syntax tree, and the right figure shows the message passing mechanism.
to answer the question more accurately. Obviously, phrase features can be used to guide entities to carry out targeted message passing, enabling the VQA system to pay more attention to the regions most relevant to the question.
In light of the above observation, we propose a Syntax Tree Constrained Graph Network (STCGN) by modeling the syntax tree and entity message-passing mechanism. The main contributions of this paper are summarized as follows:
* We propose a novel VQA model, named STCGN, which is equipped with a tree hierarchical convolutional network that comprehensively learns syntax-aware question and phrase representations, and a phrase-aware entity message-passing network that captures context-aware entity representations of visual scenes.
* Extensive experimental results on VQA2.0 datasets demonstrate the significant superiority of our STCGN and prove the validity of the phrase-aware message-passing network through visualization experiments.
## 2 Related Work
Recently, various methods have been proposed to improve the accuracy of visual question answering. The image is typically represented by fixed-size grid features extracted from a pre-trained model such as VGG [17] or AlexNet [22], and the question is typically embedded with an LSTM [6]. Both of these features are combined by addition or element-wise multiplication [2, 21]. However, such basic fusion methods are too simple to capture the key parts of the image. Therefore, neural-network-based methods [16] have been proposed to fuse the visual and question features. For example, Ren [16] proposes an LSTM-based fusion strategy that projects the extracted visual features into the same word embedding space. However, not all parts of a picture are strongly related to the question; thus, the non-relevant grids in the picture should be filtered out.
The attention mechanism [1, 14, 8, 19] is used to calculate the importance of each grid area. For example, Bottom-Up and Top-Down attention (BUTD) [1] uses a bottom-up and top-down approach to capture attention. In the bottom-up process, Faster R-CNN is used to extract the features of all objects and their significant regions. In the top-down process, each attention candidate is weighted using task-specific context to obtain a final visual representation. Besides, the bilinear models [20, 14, 3, 5] show a strong ability for cross-modal feature interaction. For example, MFH [20] fully mines the correlation of multi-modal features through factorization and high-order pooling, effectively reducing the irrelevant features and obtaining more effective multi-modal fusion features. BGN [5], built on the BAN [9] model, designs a bilinear graph structure to model the context relationship between text words and visual entities and fully excavates the implicit relationship between them. However, these methods do not pay enough attention to the entity relationships in the pictures or the relationships between the words in the questions. Extracting entity features can also be optimized through implicit or explicit relational modeling.
To address the above problem, some relation-based VQA methods, e.g., Murel [4], LCGN [7], and ReGAT [12], and more effective attention mechanisms, e.g., UFSCAN [23] and MMMAN [11], have been proposed. Murel, LCGN, and ReGAT refine visual representations by explicitly or implicitly modeling relationships and enhance feature interactions between modalities. UFSCAN and MMMAN improve attention scores in visual areas through more efficient attention mechanisms. Compared to these prior works, our model proposes a syntax-aware tree hierarchical convolutional network to extract syntax relation-aware question representation and phrase representation from questions. It further proposes a phrase-aware message passing network to capture scene context-aware entity features based on implicit entity relations.
## 3 The STCGN Model
Given a question \(q\) and a corresponding image \(I\), the goal of VQA is to predict an answer \(\hat{a}\in\mathcal{A}\) that best matches the ground-truth answer \(a^{*}\). Following previous VQA models, it can be defined as a classification task:
\[\hat{a}=arg\max_{a\in\mathcal{A}}F_{\theta}(a|I,q) \tag{1}\]
where \(F\) is the trained model, and \(\theta\) is the model parameter.
Given a picture \(\mathcal{I}\), \(K\) visual entities are extracted from it using Faster R-CNN, and the feature representation of the \(i\)-th entity is denoted as \(\mathbf{v}_{i}\). Meanwhile, we have a picture-related question consisting of \(N\) words \(\mathcal{Q}=(q_{1},q_{2},...,q_{N})\). The features of each word are initialized using the Glove [15] word embedding model, and the word sequence feature is \(\mathbf{X}=(\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N})\). The task of visual question answering is to use the image and text information, extract relevant features, and fuse them to generate an answer probability distribution vector \(\mathbf{\hat{p}}=(\hat{p}_{1},\hat{p}_{2},...,\hat{p}_{N_{ans}})\), where \(N_{ans}\) denotes the number of answer categories and \(\hat{p}_{i}\) denotes the probability that the \(i\)-th answer is the final answer. We take the answer with the highest probability in \(\mathbf{\hat{p}}\) as the final predicted answer of the visual question answering system.
### Network Architecture
The architecture of STCGN is shown in Fig. 2, which consists of three main modules: (1) **Syntax-aware Tree Convolution** module utilizes the syntax tree of the question and uses the tree hierarchical convolution model to extract the syntax-aware phrase features and question features. (2) **Phrase-aware Entity Message Passing** module discovers the implicit connections between visual entities and relevant message features based on the syntax-aware phrase features, question features, and entity complete graphs, and then builds the scene context-aware visual entity features. (3) **Top-down Attention-based Answer Prediction** module fuses the question features and visual entity features by using Top-down attention and performs the final answer prediction.
### Syntax-aware Tree Convolution
In VQA tasks, correctly understanding the essence of the question is the first priority for producing a good answer. Syntax information plays a very important role in the question text because it offers substantial help in understanding the question. The syntax information contains the dependencies between words and the part of speech (POS) of these words. Therefore, we can construct a syntax tree \(\mathcal{T}=(\mathcal{Q},\mathcal{E})\) based on the dependencies between words. By observing Fig. 1 (b), we can see that the phrases in the question play an important role in instructing visual entities for message passing. Therefore, we propose a tree-based hierarchical convolutional network to model the syntax-aware phrase features. The network consists of two layers: word-level convolution and phrase-level convolution.
For word-level convolution, we first construct a syntax subtree from each non-leaf word node and its direct children in the syntax tree. In this way, we can decompose the syntax tree into a set of syntax subtrees \(F=(f_{1},f_{2},...,f_{s})\), with each subtree \(f_{i}=(q_{i},q_{c1},q_{c2},...,q_{cn})\) serving as a convolution unit, where \(cn\) denotes the number of children of word node \(i\). Furthermore, we propose a convolution method based on text convolution [24]. First, we use the Glove word embedding model to map \(f_{i}\) to high-dimensional dense word features. Then, we obtain learnable POS feature vectors by randomly initializing a POS feature dictionary of length 42. As a result, we can obtain the sequence of POS features in the question \(\mathbf{T}_{i}=(\mathbf{t}_{i},\mathbf{t}_{c1},...,\mathbf{t}_{cn})\). Finally, we concatenate the word features and POS features to obtain the word-level convolutional input features \(\mathbf{X}_{i}^{cat}\):
\[\mathbf{X}_{i}=\text{Glove}(f_{i}),\ \mathbf{T}_{i}=\text{RandomInit}(t_{i},t_{ c1},...,t_{cn}),\ \mathbf{X}_{i}^{cat}=[\mathbf{X}_{i}\oplus\mathbf{T}_{i}] \tag{2}\]
We define a text convolution kernel \(\mathbf{G}\) for each syntax subtree. First, we convolve the concatenated input features of the subtree to extract the key information in the text. Then, we use max pooling for further feature filtering:

\[\mathbf{g}_{i}=\max(\hat{\mathbf{g}}_{i}),\quad s.t.,\ \hat{\mathbf{g}}_{i}=\text{ReLU}(\mathbf{G}*[\mathbf{X}_{i}^{cat}\oplus\mathbf{X}_{c1}^{cat}\oplus...\oplus\mathbf{X}_{cn}^{cat}]+\mathbf{b}_{i}) \tag{3}\]

where \(*\) indicates the text convolution and \(\mathbf{b}_{i}\) is the offset term.

Figure 2: The network architecture of our STCGN model.
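To make the word-level step concrete, the following is a minimal PyTorch sketch of the subtree convolution of Eqs. (2)-(3). The class name, kernel size, and all dimensions are our own illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtreeConv(nn.Module):
    """Word-level convolution over one syntax subtree (Eqs. (2)-(3)):
    the concatenated [word + POS] features of a node and its children
    are convolved with a text-convolution kernel and max-pooled into
    a single subtree feature g_i."""

    def __init__(self, d_word, d_pos, d_out, n_pos_tags=42):
        super().__init__()
        self.pos_emb = nn.Embedding(n_pos_tags, d_pos)  # learnable POS features
        self.conv = nn.Conv1d(d_word + d_pos, d_out, kernel_size=3, padding=1)

    def forward(self, word_vecs, pos_ids):
        # word_vecs: (L, d_word) Glove vectors of node q_i and its children
        # pos_ids:   (L,) POS-tag indices of the same nodes
        x = torch.cat([word_vecs, self.pos_emb(pos_ids)], dim=-1)  # X_i^cat
        x = F.relu(self.conv(x.t().unsqueeze(0)))  # text convolution of Eq. (3)
        return x.max(dim=-1).values.squeeze(0)     # max pooling -> g_i, (d_out,)
```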
For the phrase-level convolution, we design the syntax relation-aware graph attention network for capturing syntax phrase features \(\mathbf{h}_{i}^{*}\) from multiple aspects based on the convolution features \(\mathbf{g}_{i}\) of each subtree and the syntax tree \(\mathcal{T}\):
\[\mathbf{h}_{i}^{*} =||_{m=1}^{M}\sigma\left(\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}\cdot\mathbf{W}_{dir(i,j)}\mathbf{g}_{j}+\mathbf{b}_{dep(i,j)}\right) \tag{4}\] \[\alpha_{ij} =\frac{exp((\mathbf{U}\mathbf{x}_{i}^{{}^{\prime}})^{\top}\cdot\mathbf{V}_{dir(i,j)}\mathbf{g}_{j}+\mathbf{b}_{dep(i,j)}^{{}^{\prime}})}{\sum_{j\in\mathcal{N}_{i}}exp((\mathbf{U}\mathbf{x}_{i}^{{}^{\prime}})^{\top}\cdot\mathbf{V}_{dir(i,j)}\mathbf{g}_{j}+\mathbf{b}_{dep(i,j)}^{{}^{\prime}})} \tag{5}\]
where \(\mathbf{W}_{\{\cdot\}}\in\mathbb{R}^{(d_{h}/M)\times(d_{x}+d_{t})}\), \(\mathbf{V}_{\{\cdot\}}\in\mathbb{R}^{(d_{h}/M)\times(d_{x}+d_{t})}\), and \(\mathbf{U}\in\mathbb{R}^{(d_{h}/M)\times(d_{x}+d_{t})}\) are parameters matrices, \(\mathbf{b}_{\{\cdot\}}\), \(\mathbf{b}_{\{\cdot\}}^{{}^{\prime}}\) are offset terms, \(dir(i,j)\) denotes the direction of each relation, and \(dep(i,j)\) denotes the different kinds of dependencies.
Then, to capture the sequence correlation between phrase features and to obtain the features of the whole sentence, we perform phrase sequence feature extraction using a bidirectional GRU (biGRU) network:
\[h_{t}=\text{biGRU}([h_{t-1},\mathbf{h}_{i}^{*}]) \tag{6}\]
where \(\mathbf{W}_{z},\mathbf{W}_{r},\mathbf{W}_{h}\in\mathbb{R}^{d_{h}\times(2*d_{h})}\) are the update-gate, reset-gate, and candidate-state parameter matrices of the GRU, shared across the network, and \(\sigma\) denotes the _sigmoid_ function.
The final output is \(\mathbf{H}=(\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{s})\in\mathbb{R}^{s \times(2*d_{h})}\) with \(\mathbf{q}\in\mathbb{R}^{2*d_{h}}\), where \(\mathbf{h}_{i}=[\overrightarrow{h}_{i}\oplus\overleftarrow{h}_{i}]\), \(\mathbf{H}\) represents the syntax-aware phrase feature sequence, \(\mathbf{q}=\mathbf{h}_{s}\), and \(\mathbf{q}\) is the output of the last hidden layer, which incorporates the information of all iteration steps and can represent the syntax-aware question features.
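A minimal sketch of the phrase-sequence encoding of Eq. (6) could look as follows; the function name and the convention of reading the question feature \(\mathbf{q}\) off the last time step are our own assumptions.

```python
import torch
import torch.nn as nn

def encode_phrases(phrase_feats: torch.Tensor, d_h: int):
    """Encode the s subtree features (shape (s, d_h)) with a biGRU (Eq. (6))."""
    bigru = nn.GRU(d_h, d_h, bidirectional=True, batch_first=True)
    H, _ = bigru(phrase_feats.unsqueeze(0))  # (1, s, 2*d_h), h_i = [fwd ; bwd]
    return H.squeeze(0), H[0, -1]            # phrase sequence H and q = h_s
```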
### Phrase-aware Entity Message Passing
The inputs to the phrase-aware entity message passing module include the syntax-aware question feature \(\mathbf{q}\), the syntax-aware phrase feature \(\mathbf{H}=(\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{s})\) and the original visual entity features \(\mathbf{V}=(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{K})\). We propose a phrase-aware multi-step instruction calculation method, which sends each phrase feature together with the question feature into the instruction calculation network at each time step to calculate the contribution of each phrase to the question. Then, we weight the phrases according to their contribution levels to obtain the instruction vector \(\mathbf{c}_{t}\in\mathcal{R}^{d_{c}}\) that guides the visual entities in message passing:
\[\mathbf{c}_{t}=\sum_{i=1}^{N}\alpha_{t,i}\cdot\mathbf{h}_{i},\quad\alpha_{t,i}= \underset{i}{softmax}\left(\mathbf{W}_{1}\left(\mathbf{h}_{i}\odot\left( \mathbf{W}_{2}^{(t)}\operatorname{ReLU}\left(\mathbf{W}_{3}\mathbf{q}\right) \right)\right)\right) \tag{7}\]
where \(\mathbf{W}_{1},\mathbf{W}_{3}\) are parameter matrices shared by all iteration steps, while \(\mathbf{W}_{2}^{(t)}\) is specific to time step \(t\), generating different messages at different iteration steps.
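For illustration, the instruction computation of Eq. (7) can be sketched as below; the shapes and the assumption that \(\mathbf{W}_{1}\) maps a vector to a scalar score are ours.

```python
import torch

def instruction_vector(q, H, W1, W2_t, W3):
    """Eq. (7): attend over phrase features H (s, d) with a step-specific
    gate built from the question feature q (d,). W1: (1, d); W2_t, W3: (d, d)."""
    gate = W2_t @ torch.relu(W3 @ q)            # step-t question gate
    scores = (H * gate) @ W1.squeeze(0)         # W1(h_i * gate) per phrase, (s,)
    alpha = torch.softmax(scores, dim=0)        # contribution of each phrase
    return (alpha.unsqueeze(1) * H).sum(dim=0)  # instruction vector c_t
```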
We use \(w_{j,i}^{(t)}\) to denote the weight of the message delivered from entity \(j\) to entity \(i\) at time step \(t\), and \(m_{j,i}^{(t)}\) to denote the message received by the \(i\)-th entity from entity \(j\) at time step \(t\). At time step \(t\), taking the \(i\)-th entity as an example, we compute the features of the messages delivered by adjacent entities to the \(i\)-th entity based on the instruction vector \(\mathbf{c}_{t}\):
\[\tilde{\mathbf{v}}_{i,t}=\left[\mathbf{v}_{i}\oplus\mathbf{v}_{i,t-1}^{ctx} \oplus\left((\mathbf{W}_{4}\mathbf{v}_{i})\odot(\mathbf{W}_{5}\mathbf{v}_{i,t- 1}^{ctx})\right)\right] \tag{8}\]
\[w_{j,i}^{(t)}=softmax\left((\mathbf{W}_{6}\tilde{\mathbf{v}}_{i,t})^{\top}( \mathbf{W}_{7}\tilde{\mathbf{v}}_{j,t})\odot(\mathbf{W}_{8}\mathbf{c}_{t})\right) \tag{9}\]
\[m_{j,i}^{(t)}=w_{j,i}^{(t)}\cdot\left((\mathbf{W}_{9}\tilde{\mathbf{v}}_{j,t}) \odot(\mathbf{W}_{10}\mathbf{c}_{t})\right) \tag{10}\]
We then use a residual network for message aggregation, yielding the scene context representation \(\mathbf{v}_{i,t}^{ctx}\) of the current visual entity. Finally, we concatenate the original entity features \(\mathbf{v}_{i}\) with the scene context features \(\mathbf{v}_{i,T}^{ctx}\) and obtain the final scene context-aware entity features \(\mathbf{v}_{i}^{out}\):
\[\mathbf{v}_{i}^{out}=\mathbf{W}_{12}\left[\mathbf{v}_{i}\oplus\mathbf{v}_{i,T }^{ctx}\right],\quad\mathbf{v}_{i,t}^{ctx}=\mathbf{W}_{11}\left[\mathbf{v}_ {i,t-1}^{ctx}\oplus\sum_{j=1}^{K}m_{j,i}^{(t)}\right] \tag{11}\]
where \(\mathbf{W}_{\{\cdot\}}\) denotes the parameter matrix shared by all iteration steps.
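One round of the phrase-aware message passing of Eqs. (8)-(11) can be sketched as follows; the dictionary of projection matrices and the uniform feature dimension d are illustrative assumptions rather than the released implementation.

```python
import torch

def message_passing_step(v, v_ctx, c_t, W):
    """v, v_ctx: (K, d) original and context entity features; c_t: (d,).
    W['4'], W['5'], W['8'], W['10']: (d, d); W['6'], W['7'], W['9']: (d, 3d);
    W['11']: (d, 2d)."""
    joint = torch.cat([v, v_ctx, (v @ W['4'].t()) * (v_ctx @ W['5'].t())], dim=-1)
    key = joint @ W['6'].t()                              # for Eq. (9)
    query = (joint @ W['7'].t()) * (c_t @ W['8'].t())     # instruction-gated
    w = torch.softmax(key @ query.t(), dim=-1)            # weights w_{j,i}, (K, K)
    msg = (joint @ W['9'].t()) * (c_t @ W['10'].t())      # messages of Eq. (10)
    agg = w @ msg                                         # sum_j m_{j,i}
    return torch.cat([v_ctx, agg], dim=-1) @ W['11'].t()  # new v_ctx, Eq. (11)
```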
### Top-down Attention-based Answer Prediction
Given a picture \(\mathcal{I}\), we combine the set of visual entity features \(\{\mathbf{v}_{i}^{out}\}_{i=1}^{K}\) with the syntax-aware question feature \(\mathbf{q}\) and use a top-down attention mechanism for feature fusion. This mechanism first calculates the attention scores between each visual entity feature and the question feature as follows:
\[\beta_{i}=\underset{i}{softmax}\left(\mathbf{W}_{13}tanh(\mathbf{W}_{14}\mathbf{v}_{i}^{out}+\mathbf{W}_{15}\mathbf{q})\right) \tag{12}\]
where \(\mathbf{W}_{13},\mathbf{W}_{14},\mathbf{W}_{15}\) denote the weight matrices. Then, we weight each visual entity feature using the top-down attention mechanism and perform a nonlinear transformation of the joint features by a two-layer perceptron model to calculate the score for each answer:
\[\mathbf{p}=\mathbf{W}_{16}ReLU\left(\mathbf{W}_{17}\left[\sum_{i=1}^{K}\beta_{i}\mathbf{v}_{i}^{out}\oplus\mathbf{q}\right]\right) \tag{13}\]
where \(\mathbf{W}_{16},\mathbf{W}_{17}\) are the weight matrices, and \(ReLU\) is the activation function.
### Loss Function
Finally, we train our model by minimizing the cross-entropy loss function:
\[\mathcal{L}=-\frac{1}{N_{ans}}\sum_{i=1}^{N_{ans}}\left(y_{i}\cdot log(\hat{p}(y_{i}))+(1-y_{i})\cdot log(1-\hat{p}(y_{i}))\right) \tag{14}\]
where \(N_{ans}\) denotes the number of answer categories, and \(y_{i}\) is defined as \(y_{i}=min(\frac{\#humans\ provided\ ans}{3},1)\), where \(\#humans\ provided\ ans\) denotes the number of times the answer was selected by annotators during data annotation. \(\hat{p}(y_{i})\) denotes the probability that the output belongs to the \(i\)-th answer class.
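For illustration, the objective of Eq. (14) reduces to a soft-label binary cross-entropy over the answer scores; a minimal sketch, assuming \(\hat{p}\) is obtained from the scores \(\mathbf{p}\) through a sigmoid (the function name is ours):

```python
import torch
import torch.nn.functional as F

def vqa_loss(logits, answer_counts):
    """Eq. (14): soft-label BCE. answer_counts[i] is the number of annotators
    who selected answer i; the targets are y_i = min(count / 3, 1)."""
    y = torch.clamp(answer_counts / 3.0, max=1.0)
    return F.binary_cross_entropy_with_logits(logits, y)
```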
## 4 Experiments
### Experimental Settings
**Datasets**: We adopt the VQA2.0 dataset, built on MSCOCO [13], with more than 80k images and 444k questions for training, 40k images and 214k questions for validation, and 80k images and 448k questions for testing. **Baselines**: We select the following 11 state-of-the-art methods as baselines to evaluate the performance of our STCGN: LSTM-VGG [2], SAN [18], DMN [10], MUTAN [3], BUTD [1], BAN [9], LCGN [7], Murel [4], ReGAT [12], UFSCAN [23], MMMAN [11]. Refer to Appendix 0.A for more details about datasets, baselines, and settings.
### Performance Comparison
Table 1 shows the overall performance of all comparative methods, with the best results highlighted in boldface, where we draw the following conclusions:
1) LSTM+VGG adopts the classic VQA framework with a single model module and only uses a simple vector outer product to achieve feature fusion. The modal information interaction is too simple, resulting in the performance of the final model being inferior to other models.
2) SAN and DMN adopt the typical NMN architecture to decompose the VQA task into a sequence of independent neural networks executing subtasks. SAN achieves multi-step queries by stacking multiple attention modules, which significantly outperforms the LSTM+VGG network. The DMN structure is more modular, using a dynamic memory network module to achieve episodic memory ability; all modules cooperate to complete the question-answering task, and it outperforms the previous two models on the three types of questions.
3) MUTAN, BUTD, BAN, and BAN+Counter are typical VQA models based on bilinear pooling and attentional mechanisms. Compared with classical frameworks and NMN architectures, these four models have more fine-grained feature
extraction and feature fusion, significantly improving performance. In particular, the bilinear attention mechanism of BAN fully exploits the implicit interaction information between the question and the picture. BAN+Counter further integrates Counter's counting module on the basis of BAN, effectively improving the performance of counting tasks and the overall performance.
4) MuRel, LCGN, and ReGAT outperform the other comparison models by focusing on extracting information about entity relationships in images, which plays a key role in joint feature learning. LCGN constructs a complete entity graph and implements implicit message passing between entities under question guidance, while MuRel models the implicit relationships between detailed image regions through cross-modal attention, gradually refining the interaction between the picture and the question. ReGAT explicitly extracts entity relationships in images and uses relation-aware graph attention to realize accurate learning of joint features, so its performance is better than that of the other two models.
5) UFSCAN and MMMAN use more effective attention mechanisms. UFSCAN adopts a feature-based attention mechanism and obtains better results on counting questions by suppressing irrelevant features and emphasizing informative features. MMMAN proposes a multi-level mesh mutual attention, utilizing mutual attention to fully explore the information interaction between the visual and language modalities and improve the model accuracy on Y/N questions.
6) Finally, all these methods are inferior to our proposed method in accuracy. This is attributed to two points: (1) STCGN uses a syntax tree to model the question features, introduces syntactic structure features, and designs a tree hierarchical convolutional network to convolve the syntax tree structure, fully extracting the grammatical information of the question and improving performance. (2) STCGN introduces a phrase-aware message-passing module, which uses different phrase information in the question representation to guide the message passing between entities over multiple steps and extracts scene context-aware entity features, which further improves the performance of the model.
### Ablation Study
Fig. 3 shows the performance of the STCGN variants. The results show that when any module is removed, the performance of STCGN on the Test-dev and Test-std subsets of VQA2.0 decreases significantly. Removing the SHA module has the greatest impact on the model, indicating that feature fusion is the module with the greatest influence on the accuracy of visual question answering. Second, the influence of the TCN module on model performance is greater than that of the MPN module, which may be because the tree convolution module is based on the syntax tree and plays an important role in extracting question features and guiding entities in message passing.
### Parameter Sensitivity
In this section, we analyze the effect of the number of iteration steps \(T\). We fix the other parameters at their optimal values, gradually increase \(T\), and obtain the curves of answering accuracy for different types of questions as \(T\) changes, as shown in Fig. 4. As \(T\) increases, the rounds of message passing between entities increase, incorporating more scene context information. The performance on all question types progressively improves and reaches its optimum at \(T=4\). As \(T\) continues to increase, the performance of the model gradually decreases, since excessive message passing causes entities to receive redundant information and reduces the accuracy of the entity representations. The outliers appear because binary questions impose lower requirements on understanding questions and pictures than other question types, so the model can perform better on binary questions even when the overall performance is inferior. Therefore, we choose \(T=4\) as the final total number of message-passing iteration steps.
### Attention Visualization
Figure 3: The overall accuracy of the STCGN variants.

Figure 4: Parameter sensitivity of the message-passing iteration steps.

To better illustrate the effectiveness of the phrase-based message-passing mechanism, we conduct visualization experiments in this section. We visualize the attention scores between different entities and different words of the question over multiple iteration steps, as shown in Fig. 5. In the attention diagram, we can see: (1) Entity 2, Entity 4, and Entity 10 have significantly higher attention weights related to the phrases "the man", "in orange shoes" and "the other players" than other entity-word attention blocks. This suggests that the degree to which an entity is important in answering a question is closely related to multiple phrases. The syntax-aware phrase features in the message-passing module provide guidance so that the VQA system can gradually identify the entities that contribute more to the task. (2) The initial attention map can only roughly locate the important entities 2, 4, and 10, and their attention weights are not high. The initial visual attention map is very messy, and the contributions of the entities to the answer barely differ. As the message-passing iteration steps increase, the attention map becomes clearer and the attention weight of the key entity increases from 0.03 to 0.35. This is the result of non-critical entities passing information to critical entities, which at the same time yields scene context-aware entity representations.
## 5 Conclusion
In this work, we propose a Syntax Tree Constrained Graph Network. We design a hierarchical tree convolutional network and extract syntax-structure-aware phrase and question representations from the syntax tree by combining text convolution with graph attention. At the same time, we also propose a phrase-aware entity message-passing mechanism based on our observations of the dataset. Over multiple iteration steps, different instruction vectors are calculated from the phrase and question features to capture scene context-aware entity features.
|
2309.08267 | A Hybrid Quantum-assisted Column Generation Algorithm for the Fleet
Conversion Problem | The problem of Fleet Conversion aims to reduce the carbon emissions and cost
of operating a fleet of vehicles for a given set of tours. It can be modelled
as a column generation scheme with the Maximum Weighted Independent Set (MWIS)
problem as the slave. Quantum variational algorithms have gained significant
interest in the past several years. Recently, a method to represent Quadratic
Unconstrained Binary Optimization (QUBO) problems using logarithmically fewer
qubits was proposed. Here we use this method to solve the MWIS Slaves and
demonstrate how quantum and classical solvers can be used together to approach
an industrial-sized use-case (up to 64 tours). | Yagnik Chatterjee, Zaid Allybokus, Marko J. Rančić, Eric Bourreau | 2023-09-15T09:23:15Z | http://arxiv.org/abs/2309.08267v3 | # A Hybrid Quantum-assisted Column Generation Algorithm for the Fleet Conversion Problem
###### Abstract
The problem of Fleet Conversion aims to reduce the carbon emissions and cost of operating a fleet of vehicles for a given set of tours. It can be modelled as a column generation scheme with the Maximum Weighted Independent Set(MWIS) problem as the slave. Quantum variational algorithms have gained significant interest in the past several years. Recently, a method to represent Quadratic Unconstrained Binary Optimization(QUBO) problems using logarithmically fewer qubits was proposed. Here we use this method to solve the MWIS Slaves and demonstrate how quantum and classical solvers can be used together to approach an industrial-sized use-case (up to 128 tours).
## I Introduction
Fleet conversion is the process of transitioning a fleet of vehicles to more sustainable and environmentally friendly alternatives. With the growing recognition of the detrimental effects of traditional fossil fuel powered vehicles on the environment and the need to mitigate climate change, businesses and organizations are increasingly looking for ways to reduce their carbon footprint and operate more efficiently. The transportation sector is one of the largest contributors to greenhouse gas emissions, primarily due to their reliance on fossil fuels. By transitioning fleets to electric or hybrid vehicles, companies can significantly reduce their carbon emissions. Beyond the environmental benefits, fleet conversion also offers compelling cost-saving opportunities for businesses.
In the fleet conversion problem, a certain number of tours need to be carried out between several locations. In order to carry out these tours we have at our disposal several vehicles of different models. Each vehicle model has an associated cost. On top of the capital expenditure corresponding to the purchase of one vehicle of one model, this cost may also capture the environmental cost - e.g. the carbon footprint; the cost of operation - e.g. energy usage, or both. The objective is to minimize the total cost of carrying out all the tours including capital and operational expenditures. Therefore, fleet conversion goes beyond simply choosing the best possible vehicles and also incorporates sharing the same vehicles for multiple tours when possible, thereby reducing the cost.
Quantum computing [1; 2; 3] is a potentially disruptive field that could have applications in several domains including financial modelling [4; 5], cryptography [6; 7], chemistry [8; 9] and optimization. Within the scope of optimization applications, there has been a growing interest in quantum variational algorithms [10; 11; 12; 13; 14; 15; 16; 17]. Among them, the Quantum Approximate Optimization Algorithm (QAOA) has been heavily researched [18; 19; 20; 21; 22; 23]. A well-known issue with QAOA is that it does not scale well with problem size, limiting its applications to toy problems. Recently, an algorithm to treat Quadratic Unconstrained Binary Optimization (QUBO) [24; 25; 26; 27] problems using _logarithmically_ fewer qubits has been demonstrated [28; 29]. In this paper, we use column generation [30; 31; 32; 33; 34] to describe our problem as a master problem and several sub-problems, henceforth referred to as slaves. The slave problem in our case is the Maximum Weighted Independent Set (MWIS) problem, which can be represented as a QUBO problem. We propose an algorithm that handles the master problem using the commercial linear program solver Gurobi and the slave problems using a quantum solver based on [29]. In our experiments, we solve instances up to a size of 128 tours using only 7 qubits to represent the MWIS slaves. This shows that the method is compatible with the quantum computers of the NISQ era.
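To illustrate the slave formulation, a standard QUBO encoding of MWIS rewards each selected vertex by its weight and penalizes selecting both endpoints of an edge. The sketch below builds such a matrix; the function name and the specific penalty choice are our own assumptions rather than the formulation used in this paper.

```python
import numpy as np

def mwis_qubo(weights, edges, penalty=None):
    """Minimize x^T Q x = -sum_i w_i x_i + P * sum_{(i,j) in E} x_i x_j
    over x in {0,1}^n; with P larger than every vertex weight, the optimum
    is a maximum weighted independent set."""
    n = len(weights)
    if penalty is None:
        penalty = 2 * max(weights)      # safely exceeds any single reward
    Q = np.zeros((n, n))
    for i, w in enumerate(weights):
        Q[i, i] = -w                    # reward for selecting vertex i
    for i, j in edges:
        Q[i, j] += penalty              # penalize adjacent selections
    return Q
```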
The paper is structured as follows. In section II.1 the fleet conversion problem is defined. In section II.2 the problem is stated in the form of a graph problem followed by section II.3 where the column generation algorithm is described. In section II.4 and II.5, we describe the quantum model to solve the sub-problems and how we can use the quantum solver and classical solver together to develop a quantum-assisted algorithm. Finally, we present the experimental results in section III. |
2301.13703 | Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning | Understanding when the noise in stochastic gradient descent (SGD) affects
generalization of deep neural networks remains a challenge, complicated by the
fact that networks can operate in distinct training regimes. Here we study how
the magnitude of this noise $T$ affects performance as the size of the training
set $P$ and the scale of initialization $\alpha$ are varied. For gradient
descent, $\alpha$ is a key parameter that controls if the network is
`lazy'($\alpha\gg1$) or instead learns features ($\alpha\ll1$). For
classification of MNIST and CIFAR10 images, our central results are: (i)
obtaining phase diagrams for performance in the $(\alpha,T)$ plane. They show
that SGD noise can be detrimental or instead useful depending on the training
regime. Moreover, although increasing $T$ or decreasing $\alpha$ both allow the
net to escape the lazy regime, these changes can have opposite effects on
performance. (ii) Most importantly, we find that the characteristic temperature
$T_c$ where the noise of SGD starts affecting the trained model (and eventually
performance) is a power law of $P$. We relate this finding with the observation
that key dynamical quantities, such as the total variation of weights during
training, depend on both $T$ and $P$ as power laws. These results indicate that
a key effect of SGD noise occurs late in training by affecting the stopping
process whereby all data are fitted. Indeed, we argue that due to SGD noise,
nets must develop a stronger `signal', i.e. larger informative weights, to fit
the data, leading to a longer training time. A stronger signal and a longer
training time are also required when the size of the training set $P$
increases. We confirm these views in the perceptron model, where signal and
noise can be precisely measured. Interestingly, exponents characterizing the
effect of SGD depend on the density of data near the decision boundary, as we
explain. | Antonio Sclocchi, Mario Geiger, Matthieu Wyart | 2023-01-31T15:22:24Z | http://arxiv.org/abs/2301.13703v2 | # Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning
###### Abstract
Understanding when the noise in stochastic gradient descent (SGD) affects generalization of deep neural networks remains a challenge, complicated by the fact that networks can operate in distinct training regimes. Here we study how the magnitude of this noise \(T\) affects performance as the size of the training set \(P\) and the scale of initialization \(\alpha\) are varied. For gradient descent, \(\alpha\) is a key parameter that controls if the network is 'lazy' (\(\alpha\gg 1\)) or instead learns features (\(\alpha\ll 1\)). For classification of MNIST and CIFAR10 images, our central results are: _(i)_ obtaining phase diagrams for performance in the \((\alpha,T)\) plane. They show that SGD noise can be detrimental or instead useful depending on the training regime. Moreover, although increasing \(T\) or decreasing \(\alpha\) both allow the net to escape the lazy regime, these changes can have opposite effects on performance. _(ii)_ Most importantly, we find that key dynamical quantities (including the total variations of weights during training) depend on both \(T\) and \(P\) as power laws, and the characteristic temperature \(T_{c}\), where the noise of SGD starts affecting performance, is a power law of \(P\). These observations indicate that a key effect of SGD noise occurs late in training, by affecting the stopping process whereby all data are fitted. We argue that due to SGD noise, nets must develop a stronger'signal', i.e. larger informative weights, to fit the data, leading to a longer training time. The same effect occurs at larger training set \(P\). We confirm this view in the perceptron model, where signal and noise can be precisely measured. Interestingly, exponents characterizing the effect of SGD depend on the density of data near the decision boundary, as we explain.
## 1 Introduction
Optimizing the generalization performances of over-parametrized neural networks is one of the main challenges in machine learning. A crucial role is played by gradient-based training algorithms, which converge to solutions which generalize well also when no explicit regularization of the model is used (Zhang et al., 2021). Mini-batch stochastic gradient descent (SGD) is the workhorse algorithm to train modern neural networks. Yet, key aspects of these algorithms are debated.
_Effect on performance:_ A popular idea has been that mini-batch SGD can generalize better than full batch gradient descent (GD) (Heskes and Kappen, 1993; LeCun et al., 2012; Keskar et al., 2016; Hochreiter and Schmidhuber, 1997; Jastrzebski et al., 2017; Chaudhari et al., 2019), yet this view is debated (Hoffer et al., 2017; Dinh et al., 2017; Shallue et al., 2018; Zhang et al., 2019). In fact, comparing SGD and GD at fixed number of training epochs leads to a generalization gap (Keskar et al., 2016) that can be closed by training longer with a fixed number of training steps (Hoffer et al., 2017; Smith et al., 2020). More generally, the choice of the computational budget can affect which algorithm performs better (Shallue et al., 2018; Smith et al., 2020).
_Theories for the role of SGD:_ Several works have argued that larger SGD stochasticity leads the dynamics toward flatter minima of the loss landscape, and it has been argued that this effect leads to improved performances (Hochreiter and Schmidhuber, 1997; Keskar et al., 2016; Zhang et al., 2018; Smith and Le, 2018; Wu et al., 2018). By contrast, other studies suggest that the SGD noise biases the model in a manner similar to initializing the network with small weights, and helps recovering sparse predictors (Blanc et al., 2020; HaoChen et al., 2021; Pesme et al., 2021).
### This work
In this work, we clarify these two debates by performing systematic empirical studies of how performance is affected by the noise magnitude of SGD or temperature \(T\) (the ratio between the learning rate \(\eta\) and the batch size \(B\)(Jastrzebski et al., 2017; Zhang et al., 2019; Smith et al., 2020)), by the initialization scale \(\alpha\), and by the size of the training set \(P\). The initialization scale \(\alpha\) was rarely considered in empirical studies so far, yet it governs the training regimes in
which nets operate. For large \(\alpha\), tiny changes of weights are sufficient to fit the data: the predictor is approximately linear in its parameters, corresponding to the _kernel_ or _lazy_ regime (Jacot et al., 2018; Chizat et al., 2019). By contrast for small initialization, networks can learn the relevant features of the task and the dynamics is non-linear, corresponding to the so-called feature-learning regime (Rotskoff and Vanden-Eijnden, 2018; Mei et al., 2018; Sirignano and Spiliopoulos, 2020).
We also deal with the computational budget issue by considering the hinge loss \(l(y,\hat{y})=(1-y\hat{y})^{+}\), allowing us to train networks until the time \(t^{*}\) where the loss is strictly zero, and the dynamics stops.
Our central empirical results are:
1. obtaining phase diagrams for performance in the \((\alpha,T)\) plane. They show that SGD noise can be detrimental or instead useful depending on the training regime, even in the absence of budget constraints. This observation clarifies why different conclusions on the benefits of SGD were previously made.
2. Although we find that increasing \(T\) or decreasing \(\alpha\) both allow the net to escape the lazy regime, these changes can have opposite effects on performance, in disagreement with simple models (Pesme et al., 2021).
3. Most importantly, we reveal that several observables characterizing the dynamics follow scaling laws. Denote by \(\Delta\omega\) the relative weight variation accumulated over training, by \(t^{*}\) the training time, defined as the learning rate times the number of training steps required to bring the hinge loss to zero, and by \(T_{c}\) the characteristic temperature scale at which performance is affected by SGD. We find that for \(\alpha\gg 1\), these quantities do not depend on \(\alpha\) and follow: \[\Delta\omega\sim T^{\delta}P^{\gamma},\qquad t^{*}\sim TP^{b},\qquad T_{c}\sim P^{-a},\] where \(\delta,\gamma,b,a\) are exponents. Assuming that \(T_{c}\) is the temperature scale at which the network exits the lazy regime, i.e. \(\Delta\omega=\mathcal{O}(1)\), gives \(a=\gamma/\delta\), in agreement with our observations (made explicit in the display after this list).
4. We rationalize these findings using a teacher-student perceptron model, for which \(\Delta w\) and \(t^{*}\) also display power-law dependence on \(T\) and \(P\). We show that SGD noise increases weights in directions irrelevant to the task, implying that the correct weights must grow much larger to fit data, thus increasing both \(t^{*}\) and \(\Delta w\). We compute the dependence of these effects on the size of the training set, and show that this dependence varies qualitatively with the distribution of data near the decision boundary.
Overall, instead of a static view where SGD noise would bias networks toward broader minima of the population loss, these results support a dynamical viewpoint where SGD noise delays the end of training. This effect allows the weights to grow more, affecting performance the most when one escapes the lazy regime.
### Related works
More related works are indicated in Appendix A.
## 2 Empirical analysis
### General setting and notation
We consider binary classification on the data \(\{\mathbf{x}_{\mu}\}_{\mu=1,...,P}\in\mathbb{R}^{d}\) with labels \(\{y_{\mu}\}_{\mu=1,...,P}\in\{-1,+1\}\). \(P\) is the size of the training set. Given a predictor \(\hat{y}_{\mu}\), the hinge loss on the sample \(\mu\) is defined as \(l(y_{\mu},\hat{y}_{\mu})=(1-y_{\mu}\hat{y}_{\mu})^{+}\), where \((x)^{+}=\max(0,x)\). To interpolate between feature and lazy training, we multiply the model output by \(\alpha\) (Chizat et al., 2019). For the hinge loss, this is equivalent to changing the loss margin to \(1/\alpha\), therefore we study the training loss:
\[L(\mathbf{w})=\frac{1}{P}\sum_{\mu=1}^{P}(\alpha^{-1}-y_{\mu}F(\mathbf{w},\mathbf{x}_{\mu }))^{+} \tag{1}\]
where \(F(\mathbf{w},\mathbf{x}_{\mu})\) is the model predictor with weights \(\mathbf{w}\) on the datum \(\mathbf{x}_{\mu}\). The model predictor at time \(t\) corresponds to \(F(\mathbf{w},\mathbf{x}_{\mu})=f(\mathbf{w}^{t},\mathbf{x}_{\mu})-f(\mathbf{w}^{0},\mathbf{x}_{\mu})\), where \(f(\mathbf{w}^{t},\mathbf{x}_{\mu})\) is the output of a neural net with weights \(\mathbf{w}^{t}\) at time \(t\) and \(\mathbf{w}^{0}\) are the weights at initialization. For a network of width \(h\), the weights are initialized as Gaussian random numbers with standard deviation \(1/\sqrt{h}\) for the hidden layers and \(1/h\) for the output layer. Such an initialization ensures that the feature learning limit corresponds to \(\alpha\ll 1\) while the lazy training limit corresponds to \(\alpha\gg 1\), and that every layer has a similar change of weights (Geiger et al., 2020; Yang and Hu, 2021).
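This setup can be made concrete in a few lines. The following PyTorch sketch (the names and structure are ours, not taken from the released code) implements the centered predictor \(F(\mathbf{w},\mathbf{x})=f(\mathbf{w}^{t},\mathbf{x})-f(\mathbf{w}^{0},\mathbf{x})\), the initialization scheme described above, and the hinge loss of Eq. 1 with margin \(1/\alpha\):

```python
import copy
import torch
import torch.nn as nn

class CenteredPredictor(nn.Module):
    """F(w, x) = f(w^t, x) - f(w^0, x): the output at initialization is subtracted."""
    def __init__(self, net):
        super().__init__()
        self.net = net
        self.net0 = copy.deepcopy(net)        # frozen snapshot of the weights w^0
        for p in self.net0.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.net(x) - self.net0(x)

def init_layer(layer, h, is_output):
    """Gaussian init: std 1/sqrt(h) for hidden layers, 1/h for the output layer."""
    std = 1.0 / h if is_output else 1.0 / h ** 0.5
    nn.init.normal_(layer.weight, std=std)
    nn.init.zeros_(layer.bias)

def hinge_loss(F, y, alpha):
    """Mean hinge loss with margin 1/alpha, as in Eq. (1)."""
    return torch.clamp(1.0 / alpha - y * F, min=0.0).mean()
```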
The stochastic gradient descent updating equation is:
\[\mathbf{w}^{t+\eta}=\mathbf{w}^{t}+\frac{\eta}{B}\sum_{\mu\in\mathbb{B}_{t}}\theta \left(\alpha^{-1}-y_{\mu}F(\mathbf{w},\mathbf{x}_{\mu})\right)y_{\mu}\nabla_{\mathbf{w}}f( \mathbf{w}^{t},\mathbf{x}_{\mu}), \tag{2}\]
where \(\theta(x)\) is the Heaviside step function, \(\mathbb{B}_{t}\subset\{1,...,P\}\) is the batch at time \(t\) and \(B\) is its size. The time \(t\) corresponds to the number of training steps times the learning rate \(\eta\). The batch \(\mathbb{B}_{t}\) is randomly selected at each time step among all the \(P\) data. The learning rate \(\eta\) is kept constant during training. The end of training is reached when \(L(\mathbf{w}^{t^{*}})=0\).
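Continuing the sketch above (and reusing `hinge_loss`), a minimal training loop realizing the update of Eq. 2 could look as follows; the Heaviside factor \(\theta(\alpha^{-1}-y_{\mu}F)\) is produced automatically by differentiating the hinge loss, and batches are drawn with replacement here, which is one possible reading of "randomly selected":

```python
def train_sgd(model, X, Y, alpha, eta, B, max_steps=10**6):
    """Constant-learning-rate SGD on the hinge loss; stops when the full
    training loss vanishes. Returns t* = eta * (number of steps taken)."""
    P = X.shape[0]
    opt = torch.optim.SGD(model.parameters(), lr=eta)
    for step in range(1, max_steps + 1):
        idx = torch.randint(0, P, (B,))           # batch B_t of size B
        loss = hinge_loss(model(X[idx]).squeeze(-1), Y[idx], alpha)
        opt.zero_grad()
        loss.backward()                            # gradient carries the Heaviside factor
        opt.step()
        with torch.no_grad():
            full = hinge_loss(model(X).squeeze(-1), Y, alpha)
        if full.item() == 0.0:
            return eta * step                      # training time t*
    return None                                    # did not converge within max_steps
```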
The batch size \(B\) is taken small enough to be in the "noise dominated" regime (Smith et al., 2020; Zhang et al., 2019), where the dynamics depends on the SGD temperature \(T=\eta/B\). Empirical verification of this fact is provided in Appendix D.
Below we use a 5-hidden-layer fully-connected (FC) network and a 9-hidden-layer convolutional neural network (CNN) (MNAS architecture (Tan et al., 2019)). In Appendix D we report data also for a 3-hidden-layer CNN (simple-CNN). We consider the binary datasets MNIST (even vs odd numbers) and CIFAR10 (animals vs the rest). All the networks use ReLU activation functions. The code with all the details of the experiments is provided at [https://tinyurl.com/yh6kay4b](https://tinyurl.com/yh6kay4b).
### Performance in the \((\alpha,T)\) phase diagram
Fig. 1-(a) shows the test error for a FC network trained on MNIST and Fig. 1-(b) shows the same quantity obtained after training a CNN on CIFAR10. The black dots correspond to runs where the training loss explodes to infinity due to a too-large learning rate. Therefore, the dashed black lines indicate the maximal temperature \(T_{max}\) for which SGD converges.
From Fig. 1 we make the following observations:
(i) In the feature regime, both \(T_{max}\) and the temperature of optimal performance \(T_{opt}\) follow \(T_{max}\sim T_{opt}\sim\alpha^{k}\). In Appendix B, we relate the exponent \(k\) to the number \(D\) of hidden layers of the network as \(k=(D-1)/(D+1)\). In the lazy regime, \(T_{max}\) and \(T_{opt}\) are independent of \(\alpha\).
(ii) In Fig. 1-(a), in the lazy regime (largest \(\alpha\)), increasing \(T\) leads to an initial slight degradation of the test error followed by an improvement just before reaching the instability \(T_{max}\).
(iii) In Fig. 1-(b), in the lazy regime, increasing \(T\) leads to a degradation of the test error before reaching the instability \(T_{max}\) (for larger \(P\), a region of good performance appears near \(T_{max}\), see below). In this regime increasing \(T\) or decreasing \(\alpha\) have opposite effects, showing that in general an increase of SGD noise is not equivalent to making the initialization smaller.
In the lazy regime, the test-error curves obtained for different training set sizes \(P\) collapse when \(T\) is rescaled by the point where the test error starts improving (Fig. 2-(a-I, b-I)). This establishes the existence of a characteristic temperature \(T_{c}\) where SGD affects performance, with an asymptotic dependence on \(P\) as
\[T_{c}\sim P^{-a} \tag{3}\]
with exponent values \(a\simeq 0.5\) as reported in Table 1.
_Changes of weights:_ To rationalize this finding, it is useful to consider how the total weights variation relative to their initialization, \(\Delta w=\frac{||\mathbf{w}^{*}-\mathbf{w}^{0}||}{||\mathbf{w}^{0}||}\), increases with \(T\). In Fig. 2-(II) we observe an empirical scaling
\[\Delta w\sim T^{\delta}P^{\gamma} \tag{4}\]
with exponents' values \(\delta\simeq 1\) (slightly lower for CNNs where \(\delta\simeq 0.8,0.9\)) and \(\gamma\simeq 0.5\). The values are reported in Table 1.
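Exponents like \(\delta\) and \(\gamma\) are read off as slopes in log-log scale. A minimal sketch of this fit (with synthetic stand-in data, since the measured values live in Fig. 2 and Table 1):

```python
import numpy as np

def fit_exponent(x, y):
    """Least-squares slope of log(y) vs log(x), i.e. the exponent in y ~ x^e."""
    slope, _ = np.polyfit(np.log(x), np.log(y), deg=1)
    return slope

rng = np.random.default_rng(0)
T = np.logspace(-4, -1, 8)                                # temperatures at fixed P
dw = 2.5 * T ** 1.0 * rng.lognormal(0.0, 0.05, T.size)    # synthetic Delta w data
print(f"delta = {fit_exponent(T, dw):.2f}")               # ~1, as for FC nets in Table 1
```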
The dependence of the weight variations on \(T\) apparent in Eq. 4 suggests the following hypothesis: the characteristic temperature \(T_{c}\) governing the test error corresponds to the exit from the kernel regime, which occurs when \(\Delta w=\mathcal{O}(1)\). We test this hypothesis in two ways. Firstly, if it is true, then the test error plotted as a function of \(\Delta w\) should be maximum at the same value of this argument, independently of the size of the training set \(P\). We confirm this result in Fig. 2-(III). Secondly, imposing that \(\Delta w=\mathcal{O}(1)\) and using Eq. 4 leads to a characteristic temperature \(T_{c}\sim P^{-\gamma/\delta}\), yielding Eq. 3 with \(a=\frac{\gamma}{\delta}\). This prediction is approximately verified, as shown in Table 1.
_Convergence time:_ We expect that a larger change of weights requires a longer training time \(t^{*}\). We confirm that indeed the increase of \(T\) in the lazy regime is accompanied by an increase of the training time \(t^{*}\) (Fig. 2-IV) and we empirically find the asymptotic behaviour
\[t^{*}\sim TP^{b} \tag{5}\]
with values of \(b\) around \(1.3\) (see Table 1).
In Table 1 we report the exponents \(a\), \(b\), \(\gamma\) and \(\delta\) of \(T_{c}\sim P^{-a}\), \(t^{*}\sim TP^{b}\) and \(\Delta w\sim T^{\delta}P^{\gamma}\) that we use to align the data in Fig. 2 and Figs. 10, 11, 12, 13 in Appendix D. We observe that the relationship \(a=\gamma/\delta\) is approximately verified.
## 3 Interpretation

### Neural networks
Local alignment of decision boundaries. In binary classification, the true decision boundary in data space is the locus of points between \(\mathbf{x}\)'s with different labels \(y(\mathbf{x})=\pm 1\), while the decision boundary learnt by the model \(F(\mathbf{x})\) corresponds to the \(\mathbf{x}\)'s such that \(F(\mathbf{x})=0\). Considering a point \(\mathbf{x}^{*}\) where the two boundaries cross and its neighbourhood \(B_{\epsilon}\) of diameter \(\epsilon\), the local alignment of the model boundary with the true one is given by
\[\frac{||\partial_{\mathbf{x}}F_{\parallel}||}{||\partial_{\mathbf{x}}F_{ \perp}||} \tag{6}\]
at linear order in \(\epsilon\), where \(\partial_{\mathbf{x}}F_{\parallel}\) is the component of the gradient \(\partial_{\mathbf{x}}F(\mathbf{x}^{*})\) in the direction perpendicular to the true decision boundary, while \(\partial_{\mathbf{x}}F_{\perp}=\partial_{\mathbf{x}}F(\mathbf{x}^{*})-\partial_{\mathbf{x}}F _{\parallel}\) is orthogonal to it (see Fig. 3). The angle between the two boundaries corresponds to \(\theta=\text{arccot}\left(\frac{||\partial_{\mathbf{x}}F_{\parallel}||}{|| \partial_{\mathbf{x}}F_{\perp}||}\right)\) and perfect learning requires that \(\frac{||\partial_{\mathbf{x}}F_{\parallel}||}{||\partial_{\mathbf{x}}F_{\perp}||}\to\infty\). \(\partial_{\mathbf{x}}F_{\parallel}\) identifies the direction that is informative for the task, while \(||\partial_{\mathbf{x}}F_{\perp}||\) is the component in the non-informative directions, which act as noise.
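When the unit normal of the true decision boundary at \(\mathbf{x}^{*}\) is known (as for the perceptron task discussed below, where the boundary is \(x_{1}=0\)), the ratio in Eq. 6 can be estimated by automatic differentiation. A sketch, with names of our choosing, assuming `model` maps a single input point to a scalar:

```python
import torch

def boundary_alignment(model, x_star, normal):
    """||dF_par|| / ||dF_perp|| of Eq. (6): split the input-gradient of F at a
    crossing point x* along the unit normal of the true decision boundary."""
    x = x_star.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(model(x).squeeze(), x)
    n = normal / normal.norm()
    g_par = (g @ n) * n               # informative component, along the normal
    g_perp = g - g_par                # non-informative component
    return (g_par.norm() / g_perp.norm()).item()
```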
Fitting condition. When considering the hinge loss in Eq. 1 with margin \(\alpha^{-1}\) defined in Sec. 2.1, a training point \((\mathbf{x}^{\mu},y^{\mu})\) is fitted (i.e. it has zero training loss) when \(y^{\mu}F(\mathbf{x}^{\mu})\geq\alpha^{-1}\). Having \(P\) training points and calling \(\mathbf{x}^{\pm}\) the two of them in \(B_{\epsilon}\) with \(y(\mathbf{x}^{\pm})=\pm 1\) that have the shortest distances \(\delta^{\pm}\) from the true decision boundary, their fitting conditions \(\pm F(\mathbf{x}^{\pm})\geq\alpha^{-1}\) imply \(F(\mathbf{x}^{+})-F(\mathbf{x}^{-})\geq 2\alpha^{-1}\). Assuming \(F(\mathbf{x})\) is differentiable in \(B_{\epsilon}\), the last inequality can be approximated at linear order in \(\epsilon\) as
\[\partial_{\mathbf{x}}F(\mathbf{x}^{*})\cdot\left(\mathbf{x}^{+}-\mathbf{x}^{-} \right)\geq 2\alpha^{-1}. \tag{7}\]
Defining \(\delta_{\parallel}\) and \(c\) as \(\delta_{\parallel}=\delta^{+}+\delta^{-}=\frac{\partial_{\mathbf{x}}F_{\parallel} }{||\partial_{\mathbf{x}}F_{\parallel}||}\cdot(\mathbf{x}^{+}-\mathbf{x}^{-})\) and \(c=-\frac{\partial_{\mathbf{x}}F_{\perp}}{||\partial_{\mathbf{x}}F_{\perp}||}\cdot(\mathbf{ x}^{+}-\mathbf{x}^{-})\), inequality 7 becomes
\[\frac{||\partial_{\mathbf{x}}F_{\parallel}||}{||\partial_{\mathbf{x}}F_{ \perp}||}\geq\frac{1}{\delta_{\parallel}}\left(\frac{2\alpha^{-1}}{||\partial _{\mathbf{x}}F_{\perp}||}+c\right). \tag{8}\]
Role of the training set size \(P\) and of the SGD temperature \(T\). Considering Eq. 8:
1. we argue that increasing \(P\) corresponds to shorter distances \(\delta_{\parallel}\), which require a better alignment of the model decision boundary with the true one, that is a larger \(\frac{||\partial_{\mathbf{x}}F_{\parallel}||}{||\partial_{\mathbf{x}}F_{\perp}||}\).
2. Since increasing \(T\) makes the training dynamics more noisy, we propose that a larger \(T\) increases the non-informative component \(||\partial_{\mathbf{x}}F_{\perp}||\). This implies, according to Eq. 8, a larger informative component \(||\partial_{\mathbf{x}}F_{\parallel}||\) to fit the training set.
According to (1) and (2), both \(T\) and \(P\) increase the gradient magnitude \(||\partial_{\mathbf{x}}F(\mathbf{x}^{*})||\), but only increasing \(P\) gives a better boundary alignment, that is, a larger \(||\partial_{\mathbf{x}}F_{\parallel}||/||\partial_{\mathbf{x}}F_{\perp}||\). This effect is illustrated in Fig. 4 for two-dimensional data.
Overall, both increasing \(P\) and \(T\) require larger gradient magnitudes \(||\partial_{\mathbf{x}}F(\mathbf{x}^{*})||\) to fit the training set, which corresponds to a larger relative variation of the weights, in accordance with the observation of Eq. 4. This larger growth of the weights requires a longer training time, in accordance
| **MODEL** | \(b\) | \(\gamma\) | \(\delta\) | \(\gamma/\delta\) | \(a\) |
| :--- | :---: | :---: | :---: | :---: | :---: |
| FC on CIFAR | 1.4 | 0.5 | 1 | 0.5 | 0.5 |
| FC on MNIST | 1.3 | 0.4 | 1 | 0.4 | 0.5 |
| MNAS on CIFAR | 1.3 | 0.5 | 0.8 | 0.6 | 0.5 |
| MNAS on MNIST | 1.2 | 0.3 | 0.75 | 0.4 | 0.5 |
| simpleCNN on CIFAR | 1.5 | 0.6 | 0.9 | 0.67 | 0.6 |
| simpleCNN on MNIST | 1.4 | 0.35 | 0.9 | 0.45 | 0.5 |
| perceptron \(\chi=1.5\) | 1.8 | 0.4 | 1 | 0.4 | — |
| perceptron \(\chi=4\) | 1.4 | 0.2 | 1 | 0.2 | — |

Table 1: Exponents \(b\), \(\gamma\), \(\delta\), \(a\) of the empirical observations in Eqs. 3, 4, 5, in the lazy regime of neural networks and for the perceptron with data distribution parameter \(\chi\).
Figure 3: **Pictorial representation of a neighbourhood \(B_{\epsilon}\) of the true decision boundary (purple dashed line).** Red (blue) dots are training points with labels \(+1\) (\(-1\)) and the point \(\mathbf{x}^{+}\) (\(\mathbf{x}^{-}\)) is the closest to the true decision boundary. The decision boundary of the trained model \(F(\mathbf{x})\) corresponds to the \(\mathbf{x}\)’s such that \(F(\mathbf{x})=0\). The gradients \(\partial_{\mathbf{x}}F\) on it quantify the local alignment between the model boundary and the true one: \(\partial_{\mathbf{x}}F_{\parallel}\) is the component in the direction of correct alignment, while \(\partial_{\mathbf{x}}F_{\perp}\) is orthogonal to it.
with the observation of Eq. 5. In this view, a key effect of increasing \(P\) is to diminish the distance between data of different labels, which are the last points to be fitted. We thus expect that changing \(P\) affects the dynamics only late in training, as we demonstrate in Fig. 5. Therefore, the hardest data to fit affect both the growth of the weights and the training time.
### Perceptron model
We consider a linearly-separable classification task on \(d\)-dimensional data \(\mathbf{x}\in\mathbb{R}^{d}\) with labels \(y(\mathbf{x})=\pm 1\) given by the signs of the first components:
\[y(\mathbf{x})=\text{sign}(x_{1}). \tag{9}\]
The true decision boundary in this problem is the hyperplane \(x_{1}=0\). We study this problem with a linear classifier, called perceptron:
\[F(\mathbf{w},\mathbf{x})=\frac{1}{\sqrt{d}}\mathbf{w}\cdot\mathbf{x} \tag{10}\]
initialized with \(\mathbf{w}^{0}=0\).
Although the perceptron is always in the lazy regime1 and does not have a characteristic temperature of SGD controlling performance, it is of interest because the interpretation discussed in Sec. 3.1 can be tested. In fact, the gradient \(\partial_{\mathbf{x}}F(\mathbf{x}^{*})\) corresponds to the perceptron's weights \(\mathbf{w}/\sqrt{d}\), with the informative and non-informative components respectively \(||\partial_{\mathbf{x}}F_{\parallel}||=w_{1}/\sqrt{d}\) and \(||\partial_{\mathbf{x}}F_{\perp}||=||\mathbf{w}_{\perp}||/\sqrt{d}\). The alignment of the perceptron decision boundary with the true one is given by the ratio
Footnote 1: Because it is linear with respect to the weights \(\mathbf{w}\).
\[w_{1}/||\mathbf{w}_{\perp}||. \tag{11}\]
The fitting condition on the data point \((\mathbf{x}^{\mu},y^{\mu})\) requires that
Figure 4: **Decision boundary for binary classification in 2 dimensions: (a) one-hidden-layer FC neural network and (b) perceptron model.** Red (blue) dots are training points with label \(+1\) (\(-1\)) and the purple dashed line is the true decision boundary. The black line is the decision boundary obtained from training the model \(F(\mathbf{x})\) with SGD. **(I)-(II).** Increasing the SGD temperature \(T\) gives larger gradients \(\partial_{\mathbf{x}}F\) but not a better alignment between the decision boundaries: it increases the non-informative component (\(\mathbf{w}_{\perp}\) for the perceptron). **(I)-(III).** Increasing the number of training points \(P\) gives larger gradients \(\partial_{\mathbf{x}}F\) and a better alignment between the decision boundaries.
Figure 5: **FC on MNIST: training error in time, fixed \(T\), changing \(P\).** Increasing the training set size \(P\) delays the point when the training error goes to zero, while the first part of the dynamics stays unchanged.
the weights \(\mathbf{w}=[w_{1};\mathbf{w}_{\perp}]\) satisfy
\[w_{1}|x_{1}^{\mu}|+y^{\mu}\mathbf{w}_{\perp}\cdot\mathbf{x}_{\perp}^{\mu}\geq\frac{\sqrt{ d}}{\alpha} \tag{12}\]
which, by defining the random quantities \(c_{\mu}=-y^{\mu}\frac{\mathbf{w}_{\perp}}{||\mathbf{w}_{\perp}||}\cdot\mathbf{x}_{\perp}^{\mu}\), can be recast as
\[\frac{w_{1}}{||\mathbf{w}_{\perp}||}\geq\frac{1}{|x_{1}^{\mu}|}\left(\frac{\sqrt{ d}}{\alpha||\mathbf{w}_{\perp}||}+c_{\mu}\right). \tag{13}\]
This relationship is a special case of Eq. 8. In fact, increasing \(P\) gives smaller values of \(|x_{1}^{\mu}|\) which require larger \(\frac{w_{1}}{||\mathbf{w}_{\perp}||}\) to fit the training set, while increasing \(T\) corresponds to increasing \(||\mathbf{w}_{\perp}||\). A qualitative confirmation of this effect is reported in Fig. 4-(b).
In the following, we consider the regime of large \(T\) and large \(\alpha\), corresponding to \(\frac{\sqrt{d}}{\alpha||\mathbf{w}_{\perp}||}\ll|c_{\mu}|\), for which condition 13 becomes
\[\frac{w_{1}}{||\mathbf{w}_{\perp}||}\geq\frac{c_{\mu}}{|x_{1}^{\mu}|}\left(1+o(1) \right). \tag{14}\]
Data distribution and setting. To control the density of data near the decision boundary \(x_{1}=0\), we consider a distribution on the first component \(x_{1}\) parametrized by \(\chi\geq 0\) (Fig. 6):
\[\rho(x_{1})=|x_{1}|^{\chi}e^{-x_{1}^{2}/2}/Z, \tag{15}\]
with \(Z=2^{\frac{1+\chi}{2}}\Gamma(\frac{1+\chi}{2})\) the normalization constant. The other \(d-1\) components \(\mathbf{x}_{\perp}=[x_{i}]_{i=2,\dots,d}\) are distributed as standard multivariate Gaussian numbers, i.e. \(\mathbf{x}_{\perp}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d-1})\). \(\chi=0\) corresponds to the Gaussian case. This data distribution was first considered in Tomasini et al. (2022). The learning setting is defined identically to the one of neural networks in Sec. 2.1. We consider the case \(1\ll d\ll P\), where \(d\) is the dimension of the data and the perceptron weights and \(P\) is the number of training points. We consider this a realistic limit, given the effective dimension \(d_{\text{eff}}\) of real datasets (\(d_{\text{eff}}\approx 15\) for MNIST and \(d_{\text{eff}}\approx 35\) for CIFAR-10 (Spigler et al., 2020)) with respect to the number of training samples \(P>10^{3}\).
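The distribution of Eq. 15 is easy to sample from: if \(u=x_{1}^{2}/2\), a change of variables shows \(u\sim\text{Gamma}\big((1+\chi)/2\big)\), so \(|x_{1}|=\sqrt{2u}\) with a uniform random sign. A NumPy sketch of the data-generation step (function names are ours):

```python
import numpy as np

def sample_data(P, d, chi, rng):
    """P points with x1 ~ rho of Eq. (15), x_perp standard Gaussian (d-1 dims),
    and labels y = sign(x1) as in Eq. (9)."""
    u = rng.gamma(shape=(1.0 + chi) / 2.0, scale=1.0, size=P)  # u = x1^2 / 2
    x1 = rng.choice([-1.0, 1.0], size=P) * np.sqrt(2.0 * u)
    X = np.column_stack([x1, rng.standard_normal((P, d - 1))])
    return X, np.sign(x1)

X, y = sample_data(P=10_000, d=128, chi=1.5, rng=np.random.default_rng(0))
```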
**Empirical observations.** A key result is that the perceptron displays asymptotic behaviours in the change of weights and training time similar to those of neural networks. For the considered perceptron initialized with \(\mathbf{w}^{0}=0\), the weights variation \(\Delta w\) corresponds to \(||\mathbf{w}||\). Since \(w_{1}/||\mathbf{w}_{\perp}||\gg 1\) for large \(P\), we have \(\Delta w=||\mathbf{w}||\simeq w_{1}\). Eqs. 4 and 5 are verified with exponents reported in Table 1, as shown in Fig. 7-(a,c).
In addition, we observe that \(||\mathbf{w}_{\perp}||\) at the end of training is proportional to \(T\) and independent of \(P\) (Fig. 7-(b)):
\[||\mathbf{w}_{\perp}||\sim T. \tag{16}\]
This observation is a positive test of the effect of \(T\) on \(||\partial_{\mathbf{x}}F_{\perp}||\) proposed in Sec. 3.1.
Non-universality of the exponents. Remarkably, the exponents \(\gamma\) and \(b\) of \(P\) for the perceptron depend on the parameter \(\chi\) of the data distribution. This finding can be rationalized by considering condition 14 at the end of training. In fact, satisfying 14 for every training point requires \(\frac{w_{1}}{||\mathbf{w}_{\perp}||}\geq\underset{\mu}{\text{max}}\frac{c_{\mu}}{|x_{1}^{\mu}|}\). In Appendix C, classical extreme value theory is used to show that, for large \(P\), the typical value of \(\underset{\mu}{\text{max}}\frac{c_{\mu}}{|x_{1}^{\mu}|}\) behaves asymptotically as \(\langle\underset{\mu}{\text{max}}\frac{c_{\mu}}{|x_{1}^{\mu}|}\rangle=CP^{\frac{1}{1+\chi}}+o\left(P^{\frac{1}{1+\chi}}\right)\) for some constant \(C\). Therefore we obtain a prediction for the exponent \(\gamma\):
\[\gamma=\frac{1}{1+\chi}, \tag{17}\]
in excellent agreement with data (Fig 7-(a)). This further confirms that the asymptotic behaviour with respect to \(P\) is controlled by the statistics of the points close to the decision boundary. Thus the exponents are non-universal, since they depend directly on the data distribution.
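The extreme-value prediction can also be checked directly by Monte Carlo, modelling \(c_{\mu}\) as a standard Gaussian (it is the projection of \(\mathbf{x}_{\perp}^{\mu}\) on a unit vector) and drawing \(|x_{1}^{\mu}|\) from Eq. 15; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
chi, n_rep = 1.5, 50
Ps = np.array([10**3, 10**4, 10**5, 10**6])
means = []
for P in Ps:
    samples = []
    for _ in range(n_rep):                               # average over datasets
        x1 = np.sqrt(2.0 * rng.gamma((1.0 + chi) / 2.0, 1.0, size=P))
        c = rng.standard_normal(P)                       # c_mu ~ N(0, 1)
        samples.append(np.max(c / x1))
    means.append(np.mean(samples))
slope, _ = np.polyfit(np.log(Ps), np.log(means), 1)
print(f"measured {slope:.2f} vs predicted {1.0 / (1.0 + chi):.2f}")  # ~0.4
```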
An estimate of the parameter \(\chi\) for some image datasets is reported in Tomasini et al. (2022) through the study of kernel ridge regression. For binary CIFAR10, \(\chi_{CIFAR}=1.5\) is reported, which according to Eq. 17 corresponds to \(\gamma=0.4\), a value compatible with those observed in neural networks (Table 1).
## 4 Conclusions
In this work we have explored the effect of SGD noise in different training regimes of neural networks using the hinge loss. Since this loss goes to zero at the end of training, the minima found by the algorithm are always flat: a static view explaining the benefit of SGD in terms of the flatness of minima cannot be applied. Instead, we propose a
Figure 6: **Perceptron model, data distribution on the \(x_{1}\) component.** The sign of \(x_{1}\) determines the class \(y=\text{sign}(x_{1})\). For \(\chi=0\) the distribution is Gaussian.
dynamical view where SGD noise increases the weights of the model in directions that are detrimental for learning, which in turn induces an increase in the useful directions to fit the training set. Fitting is hardest for data close to the decision boundary, whose statistics depend both on the size of the training set and on the distribution of data near the boundary. This view naturally explains our observations that the total weights variation, and the training time, depend on both the SGD noise and the size of the training set. It also rationalizes the puzzling observation that the characteristic SGD temperature for which weight changes become significant and the test error is affected by the noise depends on the training set size. Exponents characterizing this relationship are non-universal. We expect them to depend on the data distribution near the decision boundary, as we demonstrated for the perceptron.
Our work thus clarifies a key effect of SGD, and explains the range of temperatures where SGD noise matters. However, understanding the sign of the effect of this noise on performance (beneficial or detrimental), and how it relates to the data structure and the network architecture, appears to be a particularly vexing question. For example, for the lazy regime of CNNs, we observe a non-monotonic behaviour of the test error, which initially grows and then decays as the SGD noise is increased.
## Acknowledgments
We thank Francesco Cagnetta, Alessandro Favero, Bastien Olivier Marie Goransson, Leonardo Petrini and Umberto Maria Tomasini for helpful discussions. This work was supported by a grant from the Simons Foundation (# 454953 Matthieu Wyart).
|
2309.10428 | On the categorical foundations of quantum information theory: Categories
and the Cramer-Rao inequality | An extension of Cencov's categorical description of classical inference
theory to the domain of quantum systems is presented. It provides a novel
categorical foundation to the theory of quantum information that embraces both
classical and quantum information theory in a natural way, while also allowing
to formalise the notion of quantum environment. A first application of these
ideas is provided by extending the notion of statistical manifold to
incorporate categories, and investigating a possible, uniparametric Cramer-Rao
inequality in this setting. | Florio M. Ciaglia, Fabio Di Cosmo, Laura González-Bravo, Alberto Ibort, Giuseppe Marmo | 2023-09-19T08:45:13Z | http://arxiv.org/abs/2309.10428v1 | On the categorical foundations of quantum information theory: Categories and the Cramer-Rao inequality
###### Abstract
An extension of Cencov's categorical description of classical inference theory to the domain of quantum systems is presented. It provides a novel categorical foundation to the theory of quantum information that embraces both classical and quantum information theory in a natural way, while also allowing to formalise the notion of quantum environment. A first application of these ideas is provided by extending the notion of statistical manifold to incorporate categories, and investigating a possible, uniparametric Cramer-Rao inequality in this setting.
\({}^{1}\) Department of Mathematics, University Carlos III de Madrid, Leganes, Madrid, Spain
\({}^{2}\) ICMAT, Instituto de Ciencias Matematicas (CSIC-UAM-UC3M-UCM)
\({}^{3}\) INFN-Sezione di Napoli, Naples, Italy
\({}^{4}\) Department of Physics "E. Pancini", University of Naples Federico II, Naples, Italy
\({}^{5}\)fcisglia[at]math.uc3m.es \({}^{6}\)fcosmo[at]math.uc3m.es \({}^{7}\)lauragon[at]math.uc3m.es
\({}^{8}\)albertoi[at]math.uc3m.es \({}^{9}\)marmo[at]na.infn.it
## 1 Introduction
This letter aims to extend Cencov's1 categorical description of classical inference theory [84] to the domain of quantum systems, providing a novel categorical foundation to the theory of quantum information. The main focus of information theory is to describe how information is processed and shared among various agents. In particular, in the case of quantum information theory, agents "live" in a quantum environment. The simplest schematic way of representing such exchange and manipulation of information in a purely classical environment was provided by C. Shannon in his mathematical theory of communication [74] as illustrated in Fig. 1. There, Shannon states: _"the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point"_.
Footnote 1: Sometimes, the name Cencov is also spelled Chentsov.
Two agents, Alice (A) and Bob (B), share information through a physical channel (C). Agents A and B are mathematically modelled, in the classical Kolmogorovian setting, as certain sample spaces, say \(\Omega_{A}\) and \(\Omega_{B}\), carrying measurable structures given by \(\sigma\)-algebras of sets
\(\mathscr{B}_{A}\) and \(\mathscr{B}_{B}\), respectively. Alice wants to share a message with Bob obtained by drawing random outcomes following a probability distribution law \(P_{A}\{d\omega\}\)2. The channel C is modelled as a Markov kernel (or a _transition probability distribution_) \(\Pi\). A Markov kernel is a map \(\Pi\colon\Omega_{A}\times\mathscr{B}_{B}\to\mathbb{R}\) such that \(\Pi(\cdot,\Delta_{B})\) is a measurable function on \(\Omega_{A}\) for each measurable set \(\Delta_{B}\in\mathscr{B}_{B}\), and \(\Pi(\omega_{A},\cdot)\) is a probability measure on \(\Omega_{B}\) for each fixed \(\omega_{A}\in\Omega_{A}\). We will denote the Markov kernel \(\Pi\) from the measurable space \((\Omega_{A},\mathscr{B}_{A})\) to the measurable space \((\Omega_{B},\mathscr{B}_{B})\), as \(\Pi\colon\Omega_{A}\Rightarrow\Omega_{B}\).
Footnote 2: Throughout this paper we will follow Cencov’s notation for probability measures indicating the integral of a random variable (measurable function) \(f\) on a probability space (also called _Kolmogorov space_) \((\Omega_{A},\mathscr{B}_{A},P_{A})\) as \(\int_{\Omega_{A}}f(\omega)P\{d\omega\}\).
Given a Markov kernel \(\Pi\colon\Omega_{A}\Rightarrow\Omega_{B}\), to any measurable function \(f_{B}\) in \(\Omega_{B}\), we can associate a measurable function \(f_{A}\) in \(\Omega_{A}\), denoted as \(\Pi f_{B}:=f_{A}\), as:
\[(\Pi f_{B})(\omega)=\int_{\Omega_{B}}\Pi(\omega,d\omega^{\prime})f_{B}(\omega ^{\prime})\,. \tag{1}\]
In a similar way, if \(P_{A}\) is a probability measure on \(\Omega_{A}\), \(P_{A}\Pi\) is a probability measure on \(\Omega_{B}\) defined as:
\[(P_{A}\Pi)(\Delta_{B})=\int_{\Omega_{A}}\Pi(\omega,\Delta_{B})P\{d\omega\}\,, \tag{2}\]
for any measurable set \(\Delta_{B}\) in \(\Omega_{B}\).
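In the discrete, finite setting discussed below, a Markov kernel is just a row-stochastic matrix, and the actions of Eqs. (1) and (2) become matrix products. A small numerical illustration (the numbers are ours):

```python
import numpy as np

# Pi : Omega_A => Omega_B with |Omega_A| = 2 and |Omega_B| = 3; row omega_A
# stores the probability measure Pi(omega_A, .) on Omega_B.
Pi = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.1, 0.8]])
assert np.allclose(Pi.sum(axis=1), 1.0)

f_B = np.array([1.0, -2.0, 0.5])    # a measurable function on Omega_B
P_A = np.array([0.25, 0.75])        # a probability measure on Omega_A

f_A = Pi @ f_B                       # Eq. (1): the function Pi f_B on Omega_A
P_B = P_A @ Pi                       # Eq. (2): the measure P_A Pi on Omega_B
assert np.isclose(P_B.sum(), 1.0)    # P_A Pi is again a probability measure
```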
The notation \(\Pi(\omega_{A},d\omega_{B})\) emphasizes the stochastic nature of \(\Pi\). Indeed, in general, a Markov kernel is not induced from a measurable map \(F\colon\Omega_{A}\to\Omega_{B}\) by means of the pull-back operation on functions \(\Pi_{F}f_{B}=f_{B}\circ F\), and the push-forward operation for probability distributions \(P_{A}\Pi_{F}=F_{*}P_{A}\). When this happens, it is customary to say that the Markov kernel is deterministic [39]. Therefore, we can say that a general Markov kernel is inherently stochastic in the sense that it is not determined by a deterministic inference \(F\colon\Omega_{A}\to\Omega_{B}\).
To get a better understanding of the mathematical structure of the theory of information, we use Cencov's categorical conceptualisation of statistical inference theory [84], identifying Markov kernels with classical communication channels. Indeed, Cencov's starting point is to assert, together with Wald, that every particular statistical problem is a problem of decision-making [84, Preface, p. 1]: _"the statistician3 having processed certain observational material,
Figure 1: Diagrammatic representation of the fundamental problem of communication. Upper half: schematic description of classical communication among two agents; lower half: schematic description of quantum communication.
must draw conclusions as to the observed phenomenon. Since the outcome of each observation is random, one cannot usually expect these conclusions to be absolutely accurate. It is a job for the theory to ascertain the minimal unavoidable uncertainty of the conclusions in the problem and to indicate an optimal decision rule."_ Statistical decision rules are then modeled as Markov kernels from a sample space \((\Omega,\mathscr{B})\) to the inference space \((\Sigma,\mathscr{S})\).
A fundamental observation4 of Cencov's work [82, 83, 84] is that the collection of all Markov kernels forms a category that encodes the structural properties of statistical inference rules: _"The system of all statistical decision rules, transition probability distributions, for all conceivable statistical problems, together with the natural operation of composition, forms an algebraic category. This category generates a uniform geometry of families of probability laws, in which the 'figures' are the families and the 'motions' are the decision rules"_[84, p. vii]. The composition rule for morphisms in this category is achieved by introducing a natural operation of composition of decision rules (_i.e._, transition probabilities/Markov kernels) as:
Footnote 4: This observation was also made by Lawvere in a seminar in 1962 [63].
\[(\Pi_{1}\circ\Pi_{2})(\omega_{1},A_{3})=\int_{\Omega_{2}}\Pi_{1}(\omega_{1},d \omega_{2})\Pi_{2}(\omega_{2},A_{3})\,. \tag{3}\]
The fact that this composition is indeed associative and gives rise to a category is one of the fundamental observations made by Cencov. Note that the units of the category are given by the deterministic Markov kernels \(\Pi_{\mathrm{id}_{\Omega}}\colon\Omega\Rightarrow\Omega\) associated with the identity map \(\mathrm{id}_{\Omega}\colon\Omega\to\Omega\).
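In the same finite setting used above, the composition of Eq. (3) is matrix multiplication (hence trivially associative), and the unit at \(\Omega\) is the identity matrix; a quick check:

```python
import numpy as np

Pi_12 = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
Pi_23 = np.array([[0.2, 0.8],
                  [0.7, 0.3]])
Pi_13 = Pi_12 @ Pi_23                          # Eq. (3) as a matrix product
assert np.allclose(Pi_13.sum(axis=1), 1.0)     # the composite is again a Markov kernel
unit = np.eye(2)                               # deterministic kernel of the identity map
assert np.allclose(Pi_12 @ unit, Pi_12)
assert np.allclose(unit @ Pi_12, Pi_12)
```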
Therefore, Cencov's category has measurable spaces \((\Omega,\mathscr{B})\) as objects, Markov kernels (or classical channels) \(\Pi\colon\Omega\Rightarrow\Omega^{\prime}\) as morphisms, and the composition law between morphisms given by Eq. (3). This category is now called Stoch, and it is the subject of a flourishing stream of research (see, for instance [39, 40, 41], and references therein). When the attention is shifted towards measurable spaces which are discrete and finite, we obtain the subcategory \(\mathsf{FinStoch}\). This category has been extensively used by Cencov [84] and implicitly used by Shannon in his mathematical model of communication, and we may argue it is also the domain where some of the most important theorems of the general theory of statistical inference have been proved.
A little twist can be applied to Stoch (and \(\mathsf{FinStoch}\)) by considering categories whose objects are (finite) Kolmogorov spaces (_i.e._, finite probability spaces) \((\Omega,\mathscr{B},P)\) instead of just (finite) measurable spaces, and whose morphisms \(\Pi\colon(\Omega,\mathscr{B},P)\Rightarrow(\Omega^{\prime},\mathscr{B}^{ \prime},P^{\prime})\) are Markov kernels \(\Pi\colon\Omega\Rightarrow\Omega^{\prime}\) such that \(P\Pi=P^{\prime}\). The resulting category is the classical counterpart of the category we exploit in this work, referred to as \(\mathsf{NCP}\), and the motivations behind this choice are better described once the quantum case has been recalled.
In Quantum Mechanics, Shannon's original representation of the communication process takes the following form. Following the modern algebraic approach to quantum theories, the agents Alice and Bob are quantum systems whose algebras of observables are given by unital \(W^{*}\)-algebras (_i.e._, abstract von Neumann algebras) [79], \(\mathcal{A}\) and \(\mathcal{B}\), respectively. The states of the systems (_i.e._, the quantum analogues of classical probability measures) are normal states on the algebras, that is, linear functionals satisfying a positivity condition (like probability measures), a normalization condition (like probability measures), and a suitable continuity condition (like probability measures dominated by a reference measure). The transmission of information is mediated by a quantum channel described by a completely positive, unital map \(\Pi\) from \(\mathcal{A}\) to \(\mathcal{B}\). Referring to standard textbooks on the subject [6, 7, 79], when \(\mathcal{A}=\mathcal{L}^{\infty}(\Omega_{A},\mu)\) and \(\mathcal{B}=\mathcal{L}^{\infty}(\Omega_{B},\nu)\), the normal states are precisely those probability measures which are absolutely continuous with respect to \(\mu\) and \(\nu\), respectively, and completely positive, unital maps are Markov kernels. Therefore, we recover the classical case in this setting.
Enormous effort has been poured into trying to understand the basic properties governing the situation outlined above, and, even if significant advances have been made, we believe there is still work to do to gain a complete understanding of the extension of Shannon's mathematical theory of communication to the quantum setting.
Our point of view on the quantum setting is based on a recent reformulation of quantum theories in terms of groupoids and their algebras [20, 19, 21, 22, 23, 27, 28, 29, 30]. This approach allows us to follow closely Cencov's conceptualisation of random phenomena by describing the quantum agents, Alice and Bob, by measure groupoids \(\Gamma_{A}\) and \(\Gamma_{B}\) respectively (see details in Section 2). The messages composed by Alice are outcomes in the space \(\Omega_{A}\) of outputs of the system \(\Gamma_{A}\), drawn according to the probability distribution determined by the state of the system, and the morphism that transforms them into the message that Bob reads is determined by the natural extension of Cencov's Markov kernels, that is, a function \(\Pi(\alpha_{A},\alpha_{B})\), whose arguments are pairs of transitions \(\alpha_{A}\), \(\alpha_{B}\) in the groupoids \(\Gamma_{A}\), \(\Gamma_{B}\) respectively, such that for a fixed argument \(\alpha_{A}\) it defines a positive definite function on \(\Gamma_{B}\), and for a fixed argument \(\alpha_{B}\) it defines a measurable function on \(\Gamma_{A}\). The function \(\Pi(\alpha_{A},\alpha_{B})\) transforms states into states and will be called a quantum Markov morphism (see details in Section 3). The class whose objects are couples of measure groupoids and normal states on their algebras, and whose morphisms are quantum Markov morphisms mapping the state of the input object into the state of the output object, forms an algebraic category which is the fundamental notion of our study. In particular, we argue that the category we introduce presents a fertile environment in which to discuss Cencov's approach to classical information theory, the algebraic approach to quantum information theory, and also a possible unification of classical and quantum information geometry (a multidisciplinary field of research we now briefly recall).
Classical information geometry may be briefly (and incompletely) described as the study of the differential geometric properties of parametric models of probability distributions (_e.g._, normal distributions) [2]. It turns out that most of the parametric models used by applied statisticians share a nice differential geometric structure which takes the name of _statistical manifold_[62]. Specifically, a statistical manifold is a smooth Riemannian manifold \((M,G)\) possessing a totally symmetric, (0,3)-covariant tensor field \(T\). The tensor \(T\) allows to define a couple of affine, torsion-free connections on \(M\) which are dual with respect to \(G\).
In the vast majority of practical cases, the Riemannian metric on \(M\) is not "free", because it coincides with the so-called Fisher-Rao metric tensor \(G_{FR}\)[38, 67, 72]. One of the biggest achievements of the marriage between probability theory, statistical theory, and information theory has been the identification of the central role played by the Fisher information and the Fisher-Rao metric tensor. This geometrical entity provides, among many other structural insights, a lower bound for the information content of estimators of parametric models of probability measures by means of the Cramer-Rao inequality [56, 66].
The Fisher-Rao metric tensor on the parameter manifold emanates from the very construction of parametric models as subsets of probability measures opportunely parametrized by points in a manifold. Specifically, given a smooth manifold \(M\) and a measurable space \((\Omega,\mathscr{B})\), a model of probability measures on \((\Omega,\mathscr{B})\) parametrized by points in \(M\) is nothing but an immersion map \(i\colon M\to\mathcal{P}(\Omega)\), where \(\mathcal{P}(\Omega)\) is the space of probability measures on \((\Omega,\mathscr{B})\), satisfying suitable regularity conditions (basically ensuring we may exploit the smooth structure on \(M\) in a fruitful way) [3, 4]. If \(\{\theta^{j}\}_{j\in J}\) is a local coordinate system on \(M\), the Fisher-Rao metric
tensor takes the form5
Footnote 5: Of course, the fact that equation (4) actually gives a Riemannian metric tensor on \(M\) requires additional assumptions to be met by the immersion map \(i\) (for instance, we are implicitly assuming that elements in the parametric models are all dominated by \(\mu\)). However, these assumptions are met in the vast majority of "applied situations".
\[(G_{FR})_{jk}=\int_{\Omega}\frac{\partial\ln(p(x;\theta))}{\partial\theta^{j}} \frac{\partial\ln(p(x;\theta))}{\partial\theta^{k}}\,p(x;\theta)\mathrm{d}\mu (x), \tag{4}\]
where \(i(\theta)=p(x;\theta)\mu\). Looking at equation (4), we may argue that \(M\) "does not know" about \(G_{FR}\) until its points are actually used to parametrize probability measures on \((\Omega,\mathscr{B})\). Therefore, it may be conjectured that the properties of \(G_{FR}\) making it an ubiquitous tool in statistical inference and estimation theory (among other fields) should be related to some inner structures of the space of probability distributions. Indeed, we may dare to say that it is precisely this question that led Cencov to introduce the categorical framework described above. This seemingly "heavy" affirmation is corroborated by the fact that one of the main results achieved in [84] is precisely the proof that the Fisher-Rao metric tensor for parametric models on discrete and finite outcome spaces is the unique (up to a constant factor) Riemannian metric tensor satisfying an invariance property with respect to the maps in FinStoch having a left inverse which is also in FinStoch (Cencov calls these morphisms _congruent embeddings_). Moreover, Cencov also classifies all those couples of affine connections which are mutually dual with respect to \(G_{FR}\) and satisfy an equivariance property with respect to congruent embeddings, thus giving a complete account of the admissible statistical manifolds for finite outcome spaces (as long as the assumption on the behaviour under congruent embeddings is imposed, of course).
Starting from the Fisher-Rao metric tensor and the notion of statistical manifold, classical information geometry takes off and leads to spectacular results in very different applied fields [1]. Some of these results were extended also to the quantum domain, where probability distributions are replaced by quantum states [5]. In particular, one of the pillars of quantum information geometry is Petz's celebrated theorem [70], which states that the uniqueness of the quantum analogue of the Fisher-Rao metric tensor is necessarily lost. This non-uniqueness, however, should be thought of as a source of richness rather than a problem, as the wealth of different investigations and results regarding the quantum counterparts of the Fisher-Rao metric tensor seem to imply [13, 24, 25, 43, 44, 45, 46, 47, 64]. Moreover, the very structure of statistical manifold should be rethought in the quantum case because connections with torsion seem to be unavoidable [17, 58, 59, 60] (and a similar attitude seems to be fruitful also in the classical case [87, 88, 89]).
Also along the lines of rethinking the idea of statistical manifolds, recent investigations are pointing toward a connection between information geometry and groupoids [48, 49, 50]. This link stems from the observation that the manifold \(M\times M\) is naturally a Lie groupoid whose associated Lie algebroid is \(TM\), and that the appearance of the Fisher-Rao metric tensor on the parametric manifold \(M\) as the "second-order, diagonal approximation" of a relative entropy function on \(M\times M\) typical of information geometry [2, 24] can be naturally extended to the framework of Lie groupoids. We find this terrain quite apt to develop a foundational investigation of the notion of statistical manifold from the perspective offered by the categorical setting we alluded to before. In particular, by introducing the notion of statistical categories and its associated notion of statistical groupoids (see Section 4), it is possible to look at statistical models as functors in the category introduced in this work. In this categorical
setting, an extension of the Cramer-Rao inequality, which works for the classical and the quantum case simultaneously, is readily obtained.
## 2 Quantum systems and groupoids
In its most basic terms, a quantum system is characterized by the outcomes \(x,y,\ldots\in\Omega\) of a family of observables, and by a family of _transitions_, \(\alpha\colon x\to y\), experienced by the system, whose interpretation is that if the observable \(A\) were measured right before the observed transition took place, the outcome would have been \(x\), and if measured again, right after the transition had taken place, the result would have been \(y\). The outcome \(x\) of the transition \(\alpha\colon x\to y\) will be called its source, and the outcome \(y\) its target.
The natural axioms satisfied by the family of all possible transitions are those of a groupoid (see [28] for more details). In particular, transitions compose in a natural way: the symbol \(\beta\circ\alpha\) denotes the transition resulting from the occurrence first of the transition \(\alpha\) and immediately afterwards, the transition \(\beta\). Two transitions \(\alpha\), \(\beta\) can be composed only if the target of the first coincides with the source of the second (note the backwards notation for composition), in which case they are said to be composable, and such composition law is associative. There are unit elements, that is, transitions \(1_{x}\colon x\to x\) such that they do not affect the transition \(\alpha\colon x\to y\) when composed on the right, i.e., \(\alpha\circ 1_{x}=\alpha\), or on the left, \(1_{y}\circ\alpha=\alpha\), and whose physical interpretation is that the system remains unchanged during the observation. Finally, there is the fundamental property that implements Feynman's principle of microscopic reversibility [37, page 3], that is, for any transition \(\alpha\colon x\to y\), there is another one, denoted \(\alpha^{-1}\colon y\to x\), such that \(\alpha^{-1}\circ\alpha=1_{x}\) and \(\alpha\circ\alpha^{-1}=1_{y}\).
The collection of all transitions satisfying the previously enumerated properties is called an (algebraic) groupoid \(\Gamma\) with space of objects (called in what follows "outcomes") \(\Omega\). The map \(s\colon\Gamma\to\Omega\) assigning to the transition \(\alpha\colon x\to y\) the initial outcome \(x=s(\alpha)\) is called the source map, and the map \(t\colon\Gamma\to\Omega\) assigning to \(\alpha\) its final outcome \(y=t(\alpha)\) is called the target map. We will make such structure notationally evident by writing \(\Gamma\rightrightarrows\Omega\).
The previous notions provide a natural mathematical setting for Schwinger's 'algebra of selective measurements' [73], introduced to provide an abstract setting for the foundations of atomic physics. Transitions can also be understood in terms of the basic quantum mechanical notion of probability amplitudes, because a unitary representation of the given groupoid will associate to them a family of operators directly related to the notion of probability amplitudes or 'transition functions' in J. Schwinger's terminology [28].
It is also possible to conceive of a groupoid as an abstraction of a certain experimental setting used to describe the properties of a given system. For instance, if we consider a charged particle moving on a certain region where detectors have been placed, their triggering will correspond to the possible outcomes of the system and the sequence of such triggerings would be the transitions of the system. Another possible interpretation is offered by earlier descriptions of spectroscopic data. Actually, as Connes suggested [35], the Ritz-Rydberg combination principle of frequencies in spectral lines is precisely the composition law of a groupoid (in this case of a simple groupoid of pairs).
In what follows, we will look at the groupoid used to describe a certain quantum system as a kinematical object, which means that transitions and outcomes represent just kinematical information obtained from the system without further dynamical content, that is, no specific dynamical law is associated to their description. In this sense, we will say that the groupoid
\(\Gamma\) is a kinematical groupoid. It will also be said that the groupoid \(\Gamma\) is the groupoid of "configurations" of the system, and it is associated to a specific experimental description of a quantum mechanical system (obviously, more than one kinematical groupoid can be used to describe the same quantum system; think for instance of electrons moving through bubble chambers, Stern-Gerlach devices or two-slit walls: each one of these experiments can be described by using a kinematical groupoid, all of them different).
This kinematical interpretation of the raw information provided by the experiments performed on our system must be completed with a probabilistic interpretation. In a first step towards this aim, we can directly extend Kolmogorov's mathematical description of random phenomena, and we will assume that the space of outcomes \(\Omega\) of the groupoid \(\Gamma\) is a Kolmogorov space carrying a measurable structure \(\mathscr{B}\) and a probability distribution \(P\{dx\}\).
In order to extend such structure to the whole groupoid, we look for a measure structure \(\nu\) on \(\Gamma\) consistent with the projection maps \(s,t\) and with the action of the groupoid \(\Gamma\) on itself by left or right translations. It turns out that such a structure was analyzed separately by A. Connes [34] and P. Hahn [52, 53], following the lead of G.W. Mackey and others.
As it turns out, the measure theoretical structure of the groupoid \(\Gamma\rightrightarrows\Omega\) is largely determined by: the probability measure \(P\) on \(\Omega\); a left-invariant family of Haar measures, that is, a family of measures \(\nu^{x}\) with support in \(\Gamma^{x}=t^{-1}(x)\) satisfying \(\alpha\nu^{x}=\nu^{y}\) for any \(\alpha\colon x\to y\), with \((\alpha\nu^{x})(\Delta)=\nu^{x}(\alpha^{-1}\circ\Delta)\) (also called a transverse function in Connes' terminology); and, finally, a modular map \(\delta\colon\Gamma\to\mathbb{R}^{+}\) which is a homomorphism of groupoids. These ingredients determine an essentially unique measure \(\nu\) on \(\Gamma\) that disintegrates as \(\nu=\int_{\Omega}\nu^{x}P\{dx\}\) and such that the Radon-Nikodym derivative of the measure \(\nu^{-1}\), defined as \(\nu^{-1}(\Delta)=\nu(\Delta^{-1})\), with respect to \(\nu\) is the integrable modular map \(\delta\). Under minimal requirements on the measure theoretical properties of the spaces, for instance that they are standard Borel spaces, the groupoid \(\Gamma\rightrightarrows\Omega\) together with the class of the measure \(\nu\) has an associated von Neumann algebra \(\nu(\Gamma)\) with support Hilbert space \(L^{2}(\Gamma,\nu)\)[53]. The pair \((\Gamma\rightrightarrows\Omega,[\nu])\) will be called a measure groupoid and it constitutes a _bona fide_ extension of Kolmogorov's model to describe random phenomena when we consider both random outcomes and transitions among them.
A relevant observation here is that the measure \(\nu\) does not have to be interpreted in statistical terms, that is, it does not provide the statistical frequencies of events in \(\Gamma\). Such statistical interpretation could be carried by a more sophisticated notion of measure, e.g. a grade 2-measure as argued in [75] (see, for instance, [30, 14] for a detailed discussion) or, as it is commonly done, by identifying the physical states of the theory with the states of the von Neumann algebra \(\nu(\Gamma)\) of the groupoid. This is the point of view that will be taken here.
A large class of topological groupoids \(\Gamma\), for instance those that are locally compact, Hausdorff topological spaces for which \(s,t\), as well as \(x\mapsto 1_{x}\), \(\alpha\mapsto\alpha^{-1}\), and \((\beta,\alpha)\mapsto\beta\circ\alpha\), are continuous maps, has already been considered in the literature. When \(\Gamma\) and \(\Omega\) are smooth manifolds and all the maps are smooth (in particular, \(s,t\) are smooth submersions), we say that the groupoid is a _Lie groupoid_. In what follows, and in order to avoid technical complications in the exposition, we will assume that the groupoids used in the description of quantum systems are just countable and discrete. In such case the Borel structures are just given by the family of all subsets, and all measures are atomic, with atoms given by the elements of the groupoid itself. Then the von Neumann algebra of the measure groupoid \((\Gamma,\nu)\) is just the completion in the weak (or strong) topology of the left regular representation of the abstract algebra \(\mathbb{C}[\Gamma]\) of the groupoid. Elements \(a\in\mathbb{C}[\Gamma]\) are finite formal linear combinations of transitions \(\alpha\in\Gamma\), \(a=\sum_{\alpha}a_{\alpha}\alpha\), all \(a_{\alpha}\in\mathbb{C}\) are zero except
for a finite number of them, and \(A=\lambda(a)\) is the bounded operator on \(L^{2}(\Gamma,\nu)\) given by:
\[(A\Psi)(\beta)=(\lambda(a)\Psi)(\beta)=\sum_{t(\alpha)=t(\beta)}a_{\alpha}\delta^ {1/2}(\alpha)\Psi(\alpha^{-1}\circ\beta)\,.\]
It is a trivial exercise to check that the assignment \(a\in\mathbb{C}[\Gamma]\mapsto A=\lambda(a)\in\mathscr{B}(L^{2}(\Gamma,\nu))\) is a \(*\)-algebra representation of the \(*\)-algebra \(\mathbb{C}[\Gamma]\) in the \(C^{*}\)-algebra of bounded operators on \(L^{2}(\Gamma,\nu)\), and then:
\[\nu(\Gamma)=\overline{\lambda(\mathbb{C}[\Gamma])}^{\text{WOP}}\,.\]
As indicated before, physical states of the system described by the groupoid \(\Gamma\) will be identified with states \(\rho\) on the von Neumann algebra of the groupoid, that is, normalised positive functionals \(\rho\colon\nu(\Gamma)\to\mathbb{C}\). Then, any state has a characteristic function \(\varphi\colon\Gamma\to\mathbb{C}\) associated to it by restriction of \(\rho\) to \(\Gamma\), that is:
\[\varphi(\alpha)=\rho(\lambda(\alpha))\,.\]
In such a situation, if \(A\in\nu(\Gamma)\), there is a sequence \(a_{n}\in\mathbb{C}[\Gamma]\) such that \(\lambda(a_{n})\to A\), and:
\[\rho(A)=\lim_{n}\rho(\lambda(a_{n}))=\lim_{n}\sum_{\alpha}a_{n}(\alpha)\varphi (\alpha)\nu(\alpha)\,. \tag{5}\]
Characteristic functions \(\varphi\) associated to states are positive definite, that is, they satisfy the positivity property: for all \(N\in\mathbb{N}\), \(\zeta_{k}\in\mathbb{C}\), \(k=1,\ldots,N\), \(\alpha_{k}\in\Gamma\), then
\[\sum_{t(\alpha_{k})=t(\alpha_{l})}\bar{\zeta_{k}}\zeta_{l}\varphi(\alpha_{k}^ {-1}\circ\alpha_{l})\geq 0\,.\]
Moreover, if \(\varphi\) is the characteristic function of the state \(\rho\), then we have the normalisation condition \(\sum_{x\in\Omega}\varphi(1_{x})P(\{x\})=1\), resulting from \(\rho(\mathbf{1})=1\). Clearly the numbers \(\varphi(1_{x})\) are non-negative real numbers, hence they determine a probability distribution \(p(x)=\varphi(1_{x})P(\{x\})\) on the space of outcomes \(\Omega\). Conversely, because of Eq. (5), any normalised positive definite function \(\varphi\) on \(\Gamma\) will define a state on the von Neumann algebra of \(\Gamma\).
Of course, if the measure groupoid \(\Gamma\) is just the trivial groupoid defined by the set \(\Omega\) carrying a Kolmogorov structure, then its von Neumann algebra is just the Abelian von Neumann algebra of essentially bounded functions on \(\Omega\), \(\nu(\Gamma)=L^{\infty}(\Omega,P)\), and the states of the theory are just probability measures \(p\) on \(\Omega\) absolutely continuous with respect to \(P\). It is remarkable that any Abelian von Neumann algebra has this form, hence corresponding to trivial measure groupoids.
The simplest non-trivial situation corresponds to \(\Gamma\) being the groupoid of pairs of a countably discrete space \(\Omega\). If \(\Omega\) is finite, then the von Neumann algebra \(\nu(\Gamma)\) can be identified with the algebra of \(N\times N\) matrices \(M_{N}(\mathbb{C})\), with \(N=|\Omega|\). If \(\Omega\) is infinite, then again the von Neumann algebra of the groupoid of pairs \(\Gamma=\Omega\times\Omega\rightrightarrows\Omega\) can be identified with the factor of Type \(I_{\infty}\) of all bounded linear operators on \(L^{2}(\Omega,P)\), and the states of the theory will be described by density operators because of Gleason's theorem.
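For the pair groupoid over a finite \(\Omega\), positive definiteness of \(\varphi\) reduces to positive semi-definiteness of the matrix \(M_{kl}=\varphi(x_{l}\to x_{k})\): taking \(\alpha_{k}\colon x_{k}\to o\) with a common target \(o\), one has \(\alpha_{k}^{-1}\circ\alpha_{l}\colon x_{l}\to x_{k}\), so the positivity condition above is precisely \(M\geq 0\), and a normalised \(\varphi\) is a density-matrix-like object. A minimal numerical check (the specific \(\varphi\) is ours):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3                                    # Omega = {x_1, x_2, x_3}
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = A @ A.conj().T                       # M[k, l] = phi(x_l -> x_k), PSD by construction
P = np.full(N, 1.0 / N)                  # reference probability on Omega
M = M / np.sum(np.diag(M).real * P)      # normalise: sum_x phi(1_x) P({x}) = 1

assert np.all(np.linalg.eigvalsh(M) >= -1e-12)        # positive definiteness
assert np.isclose(np.sum(np.diag(M).real * P), 1.0)   # the state is normalised
```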
Much more complicated situations emerge readily by considering the composition of families of finite systems (see, for instance, [15] where the groupoidal analysis of Powers' construction of Type III factors was discussed) or by considering countably infinite systems with a finite space of outcomes, in which case we will be describing quantum systems by groupoids whose von Neumann algebras would be Type II factors.
## 3 The NCP category and quantum environments
As anticipated in the introduction, one of the main purposes of this work is to introduce the category \(\mathsf{NCP}\) that allows to deal with classical and quantum information theory in a way that incorporates both Cencov's ideas about the role of \(\mathsf{Stoch}\) and \(\mathsf{FinStoch}\) in classical statistics, the algebraic approach to quantum information theory, and the groupoidal point of view discussed above. We start by considering countably discrete groupoids \(\Gamma\rightrightarrows\Omega\) and their groupoid von Neumann algebras \(\mathcal{M}\) as at the end of the previous section. Then, because of the identification between states and normalized, positive definite functions on \(\Gamma\), we can introduce the notion of quantum Markov kernel in analogy with the classical case as a map \(\Pi\colon\Gamma_{1}\times\Gamma_{2}\to\mathbb{C}\) such that:
1. Normalization: \(\sum_{x\in\Omega_{2}}\Pi(\alpha_{1},1_{x})=1\).
2. Positivity: \(\Pi(\alpha_{1},\cdot)\) is a positive definite function on \(\Gamma_{2}\), for every \(\alpha_{1}\in\Gamma_{1}\).
3. Hermiticity: \(\overline{\Pi(\alpha_{1},\alpha_{2})}=\delta(\alpha_{2})\Pi(\alpha_{1}^{-1}, \alpha_{2}^{-1})\).
Defining
\[(\varphi_{1}\Pi)(\alpha_{2})=\int_{\Gamma_{1}}\varphi_{1}(\alpha_{1})\Pi( \alpha_{1},\alpha_{2})\nu_{1}\{d\alpha_{1}\}\,, \tag{6}\]
in close analogy with (2), and
\[(\Pi f_{2})(\alpha_{1})=\int_{\Gamma_{2}}\Pi(\alpha_{1},\alpha_{2})f_{2}(\alpha_{2})\,\nu_{2}\{d\alpha_{2}\}\,, \tag{7}\]
similarly to (1), it is a matter of direct computation to show that \(\varphi_{1}\Pi\) is a positive definite function on \(\Gamma_{2}\) if \(\varphi_{1}\) is so, and that \(\overline{(\Pi f_{2})}=\Pi f_{2}\), provided that \(\overline{f_{2}}=f_{2}\). A composition law can be defined according to
\[(\Pi_{12}\circ\Pi_{23})(\alpha_{1},\alpha_{3}):=\int_{\Gamma_{2}}\Pi_{12}(\alpha_{1},\alpha_{2})\,\Pi_{23}(\alpha_{2},\alpha_{3})\,\nu_{2}\{d\alpha_{2}\}. \tag{8}\]
The associativity of this composition rule depends on the \(\sigma\)-additivity of the measure \(\nu\), which must be assumed in order to build the groupoid von Neumann algebra \(\nu(\Gamma)\) in the first place [34, 61]. Then, a category can be built using couples \((\Gamma\rightrightarrows\Omega,\varphi)\), where \(\Gamma\rightrightarrows\Omega\) is a countable discrete groupoid, and \(\varphi\colon\Gamma\to\mathbb{C}\) is a normalized positive definite function on \(\Gamma\), as objects, and quantum Markov kernels, denoted by \(\Pi\colon(\Gamma_{1},\varphi_{1})\Rightarrow(\Gamma_{2},\varphi_{2})\), and satisfying the additional property \((\varphi_{1}\Pi)=\varphi_{2}\), as morphisms. The category thus built is reminiscent of \(\mathsf{Stoch}\) and \(\mathsf{FinStoch}\), and is the starting point for the definition of the category \(\mathsf{NCP}\) alluded to in the introduction.
In order to explicitly define \(\mathsf{NCP}\), let us start noting that the couple \((\Gamma\rightrightarrows\Omega,\varphi)\) can be algebraically described through the groupoid von Neumann algebra \(\nu(\Gamma)\) and the state on it determined by the positive definite function \(\varphi\). Moreover, we recall that a quantum Markov kernel \(\Pi\) gives rise to two additional maps by means of equation (6) and equation (7). From the algebraic point of view, equation (6) gives rise to a linear map that sends states on the groupoid von Neumann algebra \(\nu(\Gamma_{1})\) to states in the groupoid von Neumann algebra \(\nu(\Gamma_{2})\). On the other hand, equation (7) gives rise to a linear map between \(\nu(\Gamma_{2})\) and \(\nu(\Gamma_{1})\) which preserves self-adjoint elements, and also positive ones. Also, note how these two linear maps "flow in opposite directions".
Putting everything together, we define the category \(\mathsf{NCP}\) as that category whose objects are couples \((\mathcal{M},\rho)\), with \(\mathcal{M}\) a \(W^{*}\)-algebra and \(\rho\) a normal state on it, and whose morphisms \(\Pi\colon(\mathcal{M},\rho)\Rightarrow(\mathcal{N},\sigma)\) are couples \((f,f_{*})\), where \(f\colon\mathcal{N}\to\mathcal{M}\) is a normal, completely positive, unital map, and \(f_{*}\colon\mathcal{M}_{*}\to\mathcal{N}_{*}\) is the predual map of \(f\) satisfying the additional compatibility condition \(f_{*}(\rho)=\sigma\). The proof that \(\mathsf{NCP}\) is indeed a category is a matter of direct inspection, and is basically due to the associativity of the compositions of \(f\) and \(f_{*}\) (see [16] for more technical details on this category).
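In finite dimensions the definition can be made tangible. The sketch below assumes both algebras are full matrix algebras, \(\mathcal{N}=M_{n}(\mathbb{C})\) and \(\mathcal{M}=M_{m}(\mathbb{C})\), and uses the standard Kraus form of a normal completely positive map; the dimensions and the number \(r\) of Kraus operators are arbitrary illustrative choices. It constructs a morphism \((f,f_{*})\) and numerically checks unitality, preservation of states, and the duality \(\rho(f(B))=f_{*}(\rho)(B)\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 3, 2, 2      # f : M_n -> M_m built from r Kraus operators

# A random isometry V : C^m -> C^(r*n); its n x m blocks are the Kraus operators.
G = rng.normal(size=(r * n, m)) + 1j * rng.normal(size=(r * n, m))
V, _ = np.linalg.qr(G)                              # V.conj().T @ V == I_m
K = [V[i * n:(i + 1) * n, :] for i in range(r)]

def f(B):       # normal, completely positive, unital map  f : M_n -> M_m
    return sum(Ki.conj().T @ B @ Ki for Ki in K)

def f_star(D):  # the predual map on density matrices, M_m* -> M_n*
    return sum(Ki @ D @ Ki.conj().T for Ki in K)

assert np.allclose(f(np.eye(n)), np.eye(m))         # unitality
D = rng.normal(size=(m, m)); D = D @ D.T; D /= np.trace(D)   # a state rho on M_m
sigma = f_star(D)
assert np.isclose(np.trace(sigma), 1.0)             # f_* sends states to states
B = rng.normal(size=(n, n)); B = B + B.T
assert np.isclose(np.trace(D @ f(B)), np.trace(sigma @ B))   # rho(f(B)) = f_*(rho)(B)
```

Note how the two maps indeed "flow in opposite directions": \(f\) acts on observables, \(f_{*}\) on density matrices.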
A few remarks are in order here. First of all, the choice of working with \(W^{*}\)-algebras (_i.e._, \(C^{*}\)-algebras which are the Banach dual of a Banach space, the so-called predual) with separable predual, and normal states on them, is driven by the fact that this is the situation encountered in the vast majority of "applications", and by the nice mathematical properties possessed by these objects [6, 7, 79]. Note that using \(W^{*}\)-algebras allows one to deal with classical and quantum information geometry simultaneously, as already remarked in [32, 31, 33, 26]. Indeed, Abelian \(W^{*}\)-algebras like \(L^{\infty}(\Omega,\mu)\) are a perfect environment to discuss classical situations, as commented in the introduction. Moreover, let us remark that the notion of morphism introduced in \(\mathsf{NCP}\) also captures classical Markov kernels.
The normality assumption on \(f\) amounts to the existence of its predual map \(f_{*}\), while the requirement of complete positivity [11, 76] is essentially driven by the need of considering tensor products when dealing with composite systems. In the remainder of this work, this enhanced positivity condition will not play a relevant role, even though the characterisation of quantum channels satisfying complete positivity is a major problem.
In specific situations, we would like to identify a given number of agents and channels processing information, rather than the full category \(\mathsf{NCP}\). This is readily done by introducing the notion of a **quantum environment**, that is, a family of agents working in their laboratories with their corresponding physical systems and communication channels among them. Note that we do not consider classical communication channels separately from quantum ones, in the same way that we do not treat classical systems separately from proper quantum ones, because the notions introduced before allow us to consider them all at once. In other words, a "quantum environment" can be loosely defined as a family of quantum and classical systems, together with the interactions and the processes taking place among them.
The notions we have introduced before allow for a natural formalisation of this important concept. Because of the analysis carried out in the previous section, we now understand that both classical and quantum systems can be properly described by using groupoids and their algebras, where classical ones will have associated Abelian von Neumann algebras, and proper quantum ones will have associated the von Neumann algebras of their corresponding groupoids. The processes taking place among them will be described by morphisms in \(\mathsf{NCP}\), and they could include classical Markov kernels among classical systems, proper quantum channels among quantum ones, or mixed situations. In all these cases, processes will describe exchange and manipulation of information among the various agents present in the given environment. Finally, it is clear that if two processes are present in the environment, and they are composable, their composition should also be a possible process taking place in the environment.
Therefore, we may conclude that a quantum environment is closed under composition of morphisms in \(\mathsf{NCP}\), or, more precisely, a quantum environment \(\mathbf{Q}\) will be defined as a small subcategory of \(\mathsf{NCP}\) such that the \(W^{*}\)-algebras appearing in it are groupoid von Neumann algebras.
A few observations are in order here. Both the condition that the category \(\mathbf{Q}\) defining a
quantum environment is small and that the von Neumann algebras must be groupoid algebras could be dispensed with from a purely formal mathematical perspective. Indeed, there is no reason, apart from an assumption running in the background of our thoughts about the intelligibility of our universe, that makes us believe that in order to describe natural phenomena we can restrict our mathematics to that provided by set theory. It might very well happen that Nature is inherently non set-theoretical and our use of set theoretical notions is a prejudice derived from our own historical development. On the other hand, there is not sufficient experimental evidence that would support such a radical departure from the standard use of mathematics in Physics. So, together with E. P. Wigner [85], we will wonder about the unreasonable effectiveness of (set-theoretical) mathematics and will keep the categories used in our description of physical systems small.
Concerning the second assumption, let us remark that every \(W^{*}\)-algebra \(\mathcal{M}\) has associated a groupoid \(\mathscr{G}(\mathcal{M})\), the groupoid whose objects are projections \(p\in\mathcal{M}\), and whose morphisms are partial isometries among them. Under suitable technical conditions, the algebra \(\mathcal{M}\) can be thought of as a quotient of the \(C^{*}\)-algebra of \(\mathscr{G}(\mathcal{M})\). In particular, when the algebra \(\mathcal{M}\) is finite-dimensional, there will always be a groupoid of which \(\mathcal{M}\) is the associated groupoid von Neumann algebra [57]. However, this mathematical argument will not provide a natural, direct interpretation of the elements of such a groupoid in physical terms, as argued in Section 2. In any case, looking for a relation as close as possible between mathematical structures and the physical meaning we are expected to provide for them, it is reasonable to stick to the assumptions established before, even if for a large part of the mathematical arguments considered they are not strictly necessary.
## 4 Statistical categories and the Cramer-Rao inequality
As it was argued in the introduction, parametric models allow us to introduce new analytical and geometrical tools to study the system of interest, most importantly the Fisher-Rao metric and its derived geometrical notions. At its most basic level, we can say that a parametric model is just a smooth manifold \(\Sigma\) and an injective map \(i\colon\Sigma\to\mathcal{P}(\Omega)\), with \(\mathcal{P}(\Omega)\) denoting the family of all probability distributions on a measurable space \((\Omega,\mathscr{B})\). The map is also required to be smooth with respect to the smooth structure of the Banach space of signed measures on \((\Omega,\mathscr{B})\) with bounded total variation, in which \(\mathcal{P}(\Omega)\) naturally sits [3, 4]. The probability distribution \(i(\theta)\in\mathcal{P}(\Omega)\) is typically denoted as \(p(\theta)\) or \(p_{\theta}\), for any \(\theta\in\Sigma\).
From our previous discussion, this parametric description of random phenomena lacks a fundamental ingredient, the possible Markov kernels relating two probabilities \(p=p(\theta)\) and \(p^{\prime}=p(\theta^{\prime})\) with each other. Specifically, parametric models consider a parametrization for the probability distributions in terms of \(\Sigma\), but do not consider the analogue notion for the morphisms of interest, that is for the family of channels, either classical or quantum, that are considered to be relevant in the system under study. How to encode this additional information is precisely what we address in what remains of this discussion.
It is just natural to consider that, because the algebraic structure of a quantum environment is that of an algebraic category, an adequate parametric model for a family of states and channels would be a category \(\mathbf{C}\rightrightarrows\Sigma\) whose objects \(\theta\in\Sigma\) will be modelling the states \(\rho\) of the system, and whose morphisms \(\alpha\colon\theta\to\theta^{\prime}\) would be modelling the channels \(\Pi\) of the quantum environment. Moreover, this assignment should respect the algebraic properties of the components, that is, if \(\alpha\colon\theta\to\theta^{\prime}\) models a channel \(\Pi\) and \(\alpha^{\prime}\colon\theta^{\prime}\to\theta^{\prime\prime}\), then, \(\alpha^{\prime}\circ\alpha\colon\theta\to\theta^{\prime\prime}\)
should model the composition of the two channels \(\Pi\circ\Pi^{\prime}\), and, in addition, it should assign units \(1_{\theta}\) to trivial channels. In other words, the assignment sending objects and morphisms from the model category \({\bf C}\rightrightarrows\Sigma\) into objects and morphisms in the category \({\bf Q}\) (or even \({\sf NCP}\)) must be a functor among categories.
Moreover, in order to perform a geometrical analysis of the properties of the model, it is natural to assume that the category \({\bf C}\) is a smooth manifold. More precisely, we will assume that the category \({\bf C}\rightrightarrows\Sigma\) is a Lie category. A Lie category is a small category \({\bf C}\rightrightarrows\Sigma\) such that \({\bf C}\) is a smooth manifold (possibly with boundary), \(\Sigma\) is a smooth manifold without boundary, the source and target maps \(s,t\colon{\bf C}\to\Sigma\) are smooth submersions, and both the composition map \(m\colon{\bf C}^{(2)}\to{\bf C}\), where \({\bf C}^{(2)}\) is the set of pairs of composable morphisms and \(m(\alpha,\beta)=\alpha\circ\beta\), and the map \(i\colon\Sigma\to{\bf C}\), \(i(x)=1_{x}\), are smooth maps. Moreover, if \({\bf C}\) has a non-empty boundary, we will assume that the restrictions of the source and target maps to it are again smooth submersions. Given \(\theta\in\Sigma\), we will denote \({\bf C}^{\theta}=\{\beta\colon\varphi\to\theta\}\) and, similarly, \({\bf C}_{\theta}=\{\beta\colon\theta\to\varphi\}\). We refer to [51] for a recent account of more technical aspects of Lie categories.
Consider, for instance, the partial order category associated to the standard partial order in \(\mathbb{R}\), that is \({\bf C}=\{(x,y)\mid x\leq y\}\), with the composition law induced from the groupoid of pairs of \(\mathbb{R}\), that is, \((x,y)\circ(y,z)=(x,z)\). Then, \({\bf C}\rightrightarrows\mathbb{R}\) is a Lie category with source and target maps \(s(x,y)=y\), \(t(x,y)=x\). Moreover \(\partial{\bf C}=\{(x,x)\mid x\in\mathbb{R}\}\) is non-empty and the restrictions of \(s,t\) to it are smooth submersions.
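A short computational rendering of this example may help fix the conventions for source, target, units and composition used throughout; it is only a sketch of the combinatorial structure, with composability of \((x,y)\) and \((y',z)\) encoded as the matching condition \(y=y'\):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:
    """A morphism (x, y), with x <= y, of the partial-order category C over R."""
    x: float
    y: float
    def __post_init__(self):
        assert self.x <= self.y, "morphisms exist only for x <= y"

def source(a: Arrow) -> float: return a.y   # s(x, y) = y
def target(a: Arrow) -> float: return a.x   # t(x, y) = x
def unit(x: float) -> Arrow: return Arrow(x, x)   # units sit on the boundary of C

def compose(a: Arrow, b: Arrow) -> Arrow:
    """(x, y) o (y, z) = (x, z); defined only when s(a) = t(b)."""
    assert source(a) == target(b)
    return Arrow(a.x, b.y)

a, b = Arrow(0.0, 1.0), Arrow(1.0, 2.5)
assert compose(a, b) == Arrow(0.0, 2.5)
assert compose(a, unit(source(a))) == a     # right unit law
assert compose(unit(target(a)), a) == a     # left unit law
```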
There is a natural notion of "infinitesimal elements" in Lie categories. We consider the Lie algebra of invariant vector fields \(X\) on the manifold \({\bf C}\) with respect to the left (or right) action of the category \({\bf C}\) on itself. Specifically, for any \(\alpha\colon\theta\to\theta^{\prime}\), consider the smooth map \(L_{\alpha}\colon{\bf C}^{\theta}\to{\bf C}^{\theta^{\prime}}\), given by \(L_{\alpha}(\beta)=\alpha\circ\beta\); then the vector field \(X\) is left-invariant if for any \(\alpha\colon\theta\to\theta^{\prime}\), \(TL_{\alpha}X(\beta)=X(\alpha\circ\beta)\), for any \(\beta\in{\bf C}^{\theta}\). The restriction of left-invariant vector fields \(X\) to the submanifold \(\Sigma\) defines a vector bundle that will be called the (left) Lie algebroid of \({\bf C}\rightrightarrows\Sigma\) and denoted \(A_{L}({\bf C})\). In a similar way, we can define the right Lie algebroid of \({\bf C}\) by using right-invariant vector fields. It can be shown that, if the units of the category \({\bf C}\) are interior, then both the left and the right Lie algebroids of the category are isomorphic and they agree with the Lie algebroid of the Lie groupoid of the category [51].
In what follows, we will always refer to the left Lie algebroid of the category \(\mathbf{C}\). The space of smooth sections \(\xi\) of the Lie algebroid \(A({\bf C})\) carries a canonical Lie bracket induced from the Lie bracket of the corresponding invariant vector fields, and there is a natural exponential map \({\rm Exp}:\Gamma(A({\bf C}))\times\Sigma\to{\bf C}\), assigning to any such section \(\xi\) a family of submanifolds in \({\bf C}\) defined as follows:
\[{\rm Exp}(s\xi,\theta)=\varphi_{s}^{X_{\xi}}(\theta)\,, \tag{9}\]
for \(\theta\in\Sigma\), \(s\in(-\epsilon,\epsilon)\), for some \(\epsilon>0\), and \(\varphi_{s}^{X_{\xi}}\) denotes the flow of the left-invariant vector field \(X_{\xi}\) associated to \(\xi\).
We will define a statistical category as a Lie category \({\bf C}\rightrightarrows\Sigma\) together with an injective functor \(i\colon{\bf C}\to{\sf NCP}\), that is, a map that assigns to any element \(\theta\in\Sigma\) an object \(i(\theta)=({\cal M}_{\theta},\rho_{\theta})\) in \({\sf NCP}\), and to any morphism \(\alpha\colon\theta\to\theta^{\prime}\) a morphism \(\Pi(\alpha)\colon({\cal M}_{\theta},\rho_{\theta})\Rightarrow({\cal M}_{\theta^{\prime}},\rho_{\theta^{\prime}})\), such that \(\Pi(\alpha\circ\beta)=\Pi(\beta)\circ\Pi(\alpha)\), and \(\Pi(1_{\theta})=1_{\rho_{\theta}}\). Therefore, the interpretation of the statistical category for a quantum environment \(({\bf C},i,{\bf Q})\) is that it provides a smooth parametric model for a family of quantum states relevant for the problem at hand together with a family of quantum channels among them. Note that a quantum environment \({\bf Q}\) which is itself a Lie category can be considered a statistical category.
Note that any statistical category extends the notion of a statistical manifold typical of information geometry. Indeed, smooth manifolds \(\Sigma\) can be considered to be Lie categories, albeit quite "dumb" ones because the only morphisms are the units \(1_{\theta}\colon\theta\to\theta\), \(\theta\in\Sigma\). Hence, a statistical manifold \((\Sigma,i,\mathcal{P}(\Omega))\) is a statistical category with quantum environment provided by the family of Abelian \(W^{*}\)-algebras \(L^{\infty}(\Omega,P)\), with \((\Omega,\mathscr{B},P)\) a Kolmogorov space. Moreover, if \((\mathbf{C},i,\mathbf{Q})\) is a statistical category in which \(\mathbf{C}\) is as before just a manifold, we may consider the assignment \(\theta\mapsto\rho_{\theta}\) as a smooth model for the family of states \(\rho\in\mathscr{S}(\mathbf{Q})\) as discussed, for instance, in [18].
Any Lie category \(\mathbf{C}\rightrightarrows\Sigma\) contains a Lie groupoid \(G\rightrightarrows\Sigma\), consisting of all its invertible morphisms. In this sense, any statistical category provides a statistical groupoid by restriction of the functor \(\Pi\). Indeed, the functorial properties of the assignment \(\alpha\mapsto\Pi(\alpha)\), imply that \(\Pi(\alpha^{-1})=\Pi(\alpha)^{-1}\) and the morphism \(\Pi(\alpha)\) corresponding to an invertible morphism \(\alpha\in G\), is invertible in the category \(\mathbf{Q}\), that is, there is another morphism \(\Pi^{\prime}\colon(\rho^{\prime},\mathcal{M}^{\prime})\to(\rho,\mathcal{M})\) such that \(\Pi^{\prime}\circ\Pi\) is the unit morphism at \((\rho,\mathcal{M})\) and \(\Pi\circ\Pi^{\prime}\) is the unit morphism at \((\rho^{\prime},\mathcal{M}^{\prime})\). Lie groupoids in the context of the geometry of information theory were introduced by K. Grabowska, J. Grabowski, M. Kus and G. Marmo [48, 49, 50], and remarkable geometric information concerning the structure of divergence functions and other geometrical structures were derived. We believe the notions and ideas presented in this note provide additional support for the relevance of groupoids and categories in the context of information theory, in general, and information geometry, in particular. These and other related aspects will be presented elsewhere.
We now turn our attention toward the use of statistical categories to obtain a version of the Cramer-Rao bound, for a single estimator, which is adapted to our setting, and is essentially connected with the Gelfand-Naimark-Segal (GNS) representation associated with a given state. For this purpose we will follow and adapt the derivation of the standard uniparametric quantum Cramer-Rao inequality [54, 55, 71, 86].
Let \(\theta_{0}\in\Sigma\) be a fixed point in our space of parameters of the model, and \(\rho_{0}=\rho(\theta_{0})\) be the corresponding state on the \(W^{*}\)-algebra \(\mathcal{M}_{0}\). We now briefly recall the GNS representation determined by the state \(\rho_{0}\)[7, 12]. Consider the so-called _Gelfand ideal_ generated by \(\rho_{0}\), that is, the left ideal \(\mathcal{J}_{0}=\{A\in\mathcal{M}_{0}\mid\rho_{0}(A^{*}A)=0\}\). Define the GNS Hilbert space \(\mathcal{H}_{0}\) associated with \(\rho_{0}\) as the Hilbert space obtained by completing the quotient space \(\mathcal{M}_{0}/\mathcal{J}_{0}\) with respect to the norm associated to the inner product \(\langle A\mid B\rangle_{0}=\rho_{0}(A^{*}B)\), where \(|A\rangle=A+\mathcal{J}_{0}\) denotes the vector associated to \(A\in\mathcal{M}_{0}\) in \(\mathcal{M}_{0}/\mathcal{J}_{0}\). Then, the GNS representation \(\pi_{0}\colon\mathcal{M}_{0}\to\mathscr{B}(\mathcal{H}_{0})\) is the homomorphism of \(C^{*}\)-algebras defined as \(\pi_{0}(A)|B\rangle=|AB\rangle\), where \(|B\rangle\in\mathcal{H}_{0}\).
Consider now the folium \(\mathcal{W}_{0}\) of the state \(\rho_{0}\), that is, all those states \(\rho\) on \(\mathcal{M}_{0}\) such that there is a density operator6 \(D\) on \(\mathcal{H}_{0}\) satisfying \(\rho(A)=\operatorname{Tr}\left(D\pi_{0}(A)\right)\). Suppose that the statistical category \(\mathbf{C}\) satisfies the condition \(i(\Sigma)\subset\mathcal{W}_{0}\), which means that the states we are modelling using the statistical category \(\mathbf{C}\) lie in the folium of \(\rho_{0}\). In particular, this means there exists a family of density operators \(D(\theta)\) such that \(\rho_{\theta}(A)=\operatorname{Tr}\left(D(\theta)\pi_{0}(A)\right)\).
Footnote 6: A density operator on a Hilbert space \(\mathcal{H}\) is a trace-class, positive semidefinite operator with unit trace.
Given an "infinitesimal element" of the statistical category \(\mathbf{C}\), that is, a cross section \(\xi\) of its Lie algebroid \(\pi\colon A(\mathbf{C})\to\Sigma\), there is a one-dimensional family of states \(\rho_{s}:=i(\exp(s\xi_{0}))\), where \(\exp(s\xi_{0})\) denotes the projection on \(\Sigma\) of the curve defined by the exponential map on the Lie algebroid \(A(\mathbf{C})\) (see Eq. (9)), namely, \(\exp(s\xi_{0}):=t(\operatorname{Exp}\left(s\xi(\theta_{0})\right)\). An element \(A\in\mathcal{M}_{0}\) is
an unbiased estimator for \(s\) if the expected value of \(A\) on \(\rho_{s}\) equals \(s\):
\[\rho_{s}(A)=s\,,\qquad s\in(-\epsilon,\epsilon)\,,\quad\epsilon>0\,. \tag{10}\]
The idea of interpreting \(A\) as an estimator follows from the fact that equation (10) implies that experimental observations of \(A\) can be used to infer the value of the parameter \(s\) itself.
The map \(\Phi_{\xi}\colon\mathcal{H}_{0}\to\mathbb{C}\), defined by
\[\Phi_{\xi}|B\rangle=\left.\frac{\partial}{\partial s}\right|_{s=0}\rho_{\exp(s\xi_{0})}(B)=\left.\frac{\partial}{\partial s}\right|_{s=0}\operatorname{Tr}\left(D(\exp(s\xi_{0}))\pi_{0}(B)\right),\]
is a continuous linear functional on \(\mathcal{H}_{0}\). Therefore, Riesz's theorem implies there is a unique element \(\ell_{\xi}\in\mathcal{H}_{0}\) such that
\[\Phi_{\xi}(B)=\langle\ell_{\xi}\mid B\rangle_{0}\,.\]
In particular, if \(A\) is an unbiased estimator, it follows from equation (10) that
\[\langle\ell_{\xi}\mid A\rangle_{0}=\Phi_{\xi}(A)=\left.\frac{\partial}{ \partial s}\right|_{s=0}\rho_{\exp(s\xi_{0})}(A)=1\,.\]
Consequently, we have
\[1=|\langle\ell_{\xi}\mid A\rangle_{0}|\leq||\ell_{\xi}||_{0}||A||_{0}\,,\]
which is equivalent to
\[\rho_{0}(A^{*}A)=\langle A\mid A\rangle_{0}\geq\frac{1}{\langle\ell_{\xi}\mid \ell_{\xi}\rangle_{0}}\,. \tag{11}\]
Equation (11) constitutes the generalised Cramer-Rao inequality we were looking for. Following Petz, the term \(\rho_{0}(A^{*}A)\) is interpreted as a generalized statistical variance of \(A\) on the state \(\rho_{0}\), and equation (11) shows it is bounded below by the inverse of \(\langle\ell_{\xi}\mid\ell_{\xi}\rangle_{0}\), a quantity that does not depend on \(A\), but only on the infinitesimal object \(\xi\). Note that \(\langle\ell_{\xi}\mid\ell_{\xi}\rangle_{0}\) may be interpreted as a pointwise inner product on the Lie algebroid \(A(\mathbf{C})\to\Sigma\), so that, when suitable additional regularity conditions are met, we can define a sort of Fisher-Rao metric on the statistical category \(\mathbf{C}\) by setting
\[G_{F}(\xi,\zeta)=\langle\ell_{\xi}\mid\ell_{\zeta}\rangle_{0}\,.\]
Because of the appearance of the GNS Hilbert product in the definition of \(G_{F}\), and motivated by the discussion in section 6 of [32], we conjecture this metric to be the analogue, in the context of statistical categories, of the Bures-Helstrom metric tensor [54, 55] (also discussed, from a perspective different from that of estimation theory, in [8, 9, 10, 36, 80, 81]).
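The inequality can be checked numerically in the simplest non-trivial setting. The sketch below is only illustrative: it assumes a faithful qubit state \(\rho_{0}\), so that the GNS space may be identified with \(M_{2}(\mathbb{C})\) equipped with \(\langle A\mid B\rangle_{0}=\rho_{0}(A^{*}B)\), and an arbitrary Hermitian traceless \(\dot{D}=\partial_{s}D|_{s=0}\). In this situation the Riesz element is \(\ell_{\xi}=\dot{D}\rho_{0}^{-1}\), and every first-order unbiased estimator is seen to respect the bound of Eq. (11):

```python
import numpy as np

rng = np.random.default_rng(2)
rho0 = np.diag([0.7, 0.3])                      # a faithful rho_0 = rho(theta_0)
Ddot = np.array([[0.2, 0.1], [0.1, -0.2]])      # dD/ds|_{s=0}: Hermitian, traceless

gns = lambda A, B: np.trace(rho0 @ A.conj().T @ B).real   # <A|B>_0 = rho_0(A* B)

# Riesz element: <l_xi|B>_0 = Tr(Ddot B) for all B  =>  l_xi = Ddot rho_0^{-1}
l = Ddot @ np.linalg.inv(rho0)
bound = 1.0 / gns(l, l)                         # right-hand side of Eq. (11)

A = l / gns(l, l)                               # this estimator saturates the bound
assert np.isclose(np.trace(rho0 @ A), 0.0)      # rho_0(A) = 0
assert np.isclose(np.trace(Ddot @ A), 1.0)      # d/ds rho_s(A)|_{s=0} = 1
assert np.isclose(gns(A, A), bound)

# any other first-order unbiased estimator has larger generalised variance
for _ in range(100):
    B = rng.normal(size=(2, 2)); B = B + B.T
    G = np.array([[np.trace(rho0 @ rho0), np.trace(rho0 @ Ddot)],
                  [np.trace(Ddot @ rho0), np.trace(Ddot @ Ddot)]])
    a, b = np.linalg.solve(G, [np.trace(rho0 @ B), np.trace(Ddot @ B)])
    C = B - a * rho0 - b * Ddot                 # now Tr(rho0 C) = Tr(Ddot C) = 0
    assert gns(A + C, A + C) >= bound - 1e-9
```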
Of course, this brief discussion of the Cramer-Rao inequality for one-dimensional estimators in the setting of statistical categories is only a preliminary step toward a systematic development of (multiparametric) estimation theory for statistical categories. Indeed, the very fact of dealing with possibly non-commutative algebras immediately leads to the appearance of a zoo of possible quantum counterparts of covariances [43, 71], and of the Fisher-Rao metric tensor [68, 64, 70], and a careful comparison of all these possibilities must be provided. We believe the covariance associated to the GNS Hilbert product will still provide the best version of the Cramer-Rao inequality (very much like it happens in quantum information theory for finite-level quantum systems described by type \(I_{n}\) factors [42]), but the subtleties of multiparameter quantum estimation theory [65, 69, 77, 78] call for a more detailed discussion we aim to present elsewhere.
## 5 Conclusions and discussion
A new categorical background to analyse quantum information theory has been presented. It extends in a natural way Cencov's categorical presentation of statistical inference theory and the standard description of quantum information theory in terms of algebras of operators and quantum channels. The loose notion of quantum environment can be formulated precisely in terms of subcategories of a universal category \(\mathsf{NCP}\), and the fundamental problem of quantum information theory gains a more general perspective. The categorical description allows for a natural use of the notions of equivalence and representations, notions that will be exhaustively discussed elsewhere.
This new perspective allows us to introduce the notion of statistical categories and groupoids, again as a natural extension of the notion of statistical manifold. In doing so, a generalised Cramer-Rao inequality can be readily obtained, together with a notion of the Fisher-Rao metric adapted to this context that brings a natural connection with recent ideas in the geometry of information theory involving groupoids. Statistical categories provide parametric models of both the states and the channels in a coherent way. The problem of the uniqueness of the categorical Fisher-Rao metric thus obtained will be addressed in subsequent publications. Finally, it is also remarkable that the theory of non-local games can be addressed too in the proposed formalism. The use of the groupoidal description of quantum mechanical systems could provide new insight into the algebraic structures governing such problems.
## Funding
This work has been supported by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M in the line of "Research Funds for Beatriz Galindo Fellowships" (C&QIG-BG-CM-UC3M), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). The authors acknowledge financial support from the Spanish Ministry of Economy and Competitiveness, through the Severo Ochoa Programme for Centres of Excellence in R&D (SEV-2015/0554), the MINECO research project PID2020-117477GB-I00, and Comunidad de Madrid project QUITEMAD++, S2018/TCS-A4342. F.D.C. thanks the UC3M, the European Commission through the Marie Sklodowska-Curie COFUND Action (H2020-MSCA-COFUND-2017- GA 801538) and Banco Santander for their financial support through the CONEX-Plus Programme. G.M. gratefully acknowledges partial financial support provided by the Santander/UC3M Excellence Chair Program 2019/2020, and he is also a member of the Gruppo Nazionale di Fisica Matematica (INDAM), Italy.
|
2309.07154 | Recall-driven Precision Refinement: Unveiling Accurate Fall Detection
using LSTM | This paper presents an innovative approach to address the pressing concern of
fall incidents among the elderly by developing an accurate fall detection
system. Our proposed system combines state-of-the-art technologies, including
accelerometer and gyroscope sensors, with deep learning models, specifically
Long Short-Term Memory (LSTM) networks. Real-time execution capabilities are
achieved through the integration of Raspberry Pi hardware. We introduce pruning
techniques that strategically fine-tune the LSTM model's architecture and
parameters to optimize the system's performance. We prioritize recall over
precision, aiming to accurately identify falls and minimize false negatives for
timely intervention. Extensive experimentation and meticulous evaluation
demonstrate remarkable performance metrics, emphasizing a high recall rate
while maintaining a specificity of 96\%. Our research culminates in a
state-of-the-art fall detection system that promptly sends notifications,
ensuring vulnerable individuals receive timely assistance and improve their
overall well-being. Applying LSTM models and incorporating pruning techniques
represent a significant advancement in fall detection technology, offering an
effective and reliable fall prevention and intervention solution. | Rishabh Mondal, Prasun Ghosal | 2023-09-09T20:17:39Z | http://arxiv.org/abs/2309.07154v1 | # Recall-driven Precision Refinement: Unveiling Accurate Fall Detection using LSTM
###### Abstract
This paper presents an innovative approach to address the pressing concern of fall incidents among the elderly by developing an accurate fall detection system. Our proposed system combines state-of-the-art technologies, including accelerometer and gyroscope sensors, with deep learning models, specifically Long Short-Term Memory (LSTM) networks. Real-time execution capabilities are achieved through the integration of Raspberry Pi hardware. We introduce pruning techniques that strategically fine-tune the LSTM model's architecture and parameters to optimize the system's performance. We prioritize recall over precision, aiming to accurately identify falls and minimize false negatives for timely intervention. Extensive experimentation and meticulous evaluation demonstrate remarkable performance metrics, emphasizing a high recall rate while maintaining a specificity of 96%. Our research culminates in a state-of-the-art fall detection system that promptly sends notifications, ensuring vulnerable individuals receive timely assistance and improve their overall well-being. Applying LSTM models and incorporating pruning techniques represent a significant advancement in fall detection technology, offering an effective and reliable fall prevention and intervention solution.
Keywords: Fall detection, Elderly care, Accelerometer sensors, Healthcare, Aging, Raspberry Pi.
## 1 Introduction
Accidental falls pose a grave global challenge, ranking as the second leading cause of unintentional injury fatalities worldwide and claiming the lives of approximately 684,000 individuals annually. This pervasive tragedy falls disproportionately on low- and middle-income countries, where over 80% of these fatal incidents occur. The elderly population, aged 60 and above, bears the brunt of these misfortunes, suffering physical harm and substantial financial implications. The costs associated with falls among the elderly are projected to rise from an estimated $20 billion in 2000 to a staggering $54.9 billion by 2020, as reported by the Centers for Disease Control and Prevention (CDC).
The consequences of falls extend far beyond immediate injuries, as many elderly fall victims cannot regain their footing independently, requiring assistance that may be delayed. Shockingly, individuals can wait an average of 10 minutes or longer, with 3% enduring an hour or more of helplessness before receiving aid. Prolonged immobility during these critical periods often leads to further health complications, hospitalizations, institutionalization, and increased morbidity and mortality rates. Given these alarming statistics, it is crucial to implement comprehensive prevention strategies that combine education, training, secure environments, and innovative research initiatives supported by effective policy interventions to mitigate fall risks.
Existing literature offers numerous strategies to reduce fatal falls and improve the response times of medical and nursing staff. However, many of these solutions face high costs, complex implementations, or privacy limitations. To address these obstacles, we present a cost-effective embedded fall detection device that leverages accelerometers and gyroscopes, providing an unparalleled user-friendly experience. Our research endeavours encompass pioneering ideas, including developing a wearable node integrating fall detection, victim localization, and staff notification functions into a single device. Additionally, we introduce a robust and reliable Long Short-Term Memory (LSTM) model meticulously
compared to conventional machine learning (ML) algorithms to enhance the accuracy and efficiency of fall detection. Complementing these advancements, we have designed an intuitive Android application to assist caregivers in providing care and support.
By merging cutting-edge technologies, cost-effective design, and a holistic perspective on fall detection and prevention, our research aims to alleviate the burden of falls, empower caregivers, and enhance the overall well-being of vulnerable individuals. Our comprehensive approach addresses the pressing need for effective fall prevention strategies, offering promising avenues to mitigate risks and improve the outcomes for fall victims.
## 2 Literature Survey
This section focuses on the design of a fall detection system for monitoring geriatric healthcare and detecting falls. Figure 1 illustrates the various types of fall detection systems.
### Vision-based System
Anishchenko [2] applied deep learning and transfer learning methodologies to real-world surveillance camera data to identify instances of falls, addressing the limitations associated with artificially generated datasets obtained from controlled scenarios. Bhandari et al. [9] utilize a three-step approach to detect falls in video frames, comprising identifying interest points through the Shi-Tomasi algorithm, determining the inter-point distances by calculating optical flow with the Lucas-Kanade algorithm, and estimating motion speed and direction to determine the occurrence of falls. Kwolek [7] analyzed Kinect camera feeds and utilized point cloud images to detect falls.
Vision-based systems in healthcare offer precise information through images or video feeds, aiding remote caregivers and enabling early detection of health issues. However, they can be costly to implement and maintain, time-consuming to process, and raise privacy concerns.
### Ambient-based System
Taramasco et al. [11] describe a fall classification system that utilizes low-resolution thermal sensors placed at two horizontal planes near the floor. The results revealed that the Bi-LSTM model achieved a high accuracy of 93%, outperforming the other RNN models.
This ambient approach offers several advantages: it is comfortable, since it does not require users to wear any devices or sensors, and it enables continuous monitoring. On the downside, false alarms may occur if certain activities are misinterpreted as falls, such as when the user is sitting or lying down.
### Wearable-based System
Kaewkannate and Kim [6] comprehensively analyze four wearable devices designed in a wristband style. Their evaluation thoroughly compares the features and costs of each device. The power consumption of wearable devices, however, is highly dependent on several factors. He et al. [8]
Figure 1: Various types of Fall Detection Systems
combined tri-axial accelerometers with gyroscopes and magnetometers to capture a comprehensive range of motion data. Their wearable device demonstrated enhanced performance in detecting falls and differentiating them from everyday activities.
Wearable fall detection devices offer increased safety and improved response time, particularly for individuals at risk of falls. They have a user-friendly design and are often more cost-effective than alternative caregiving options. However, these devices may be prone to false alarms and have limited effectiveness in detecting certain types of falls, posing challenges for users.
## 3 Preliminaries
### Problem Statement
The growing concern about falls among the elderly necessitates the development of accurate and practical fall detection systems. Vision-based and ambient-based approaches have limitations in terms of accuracy and practicality. Wearable fall detection systems offer continuous and unobtrusive monitoring, overcoming the limitations of existing methods. However, achieving high recall while maintaining acceptable precision is a challenge. This paper aims to develop a wearable fall detection system that prioritizes high recall using sensors and LSTM models. By leveraging wearable sensor technology and LSTM models, this system aims to enhance the safety and well-being of vulnerable individuals.
### Relevance of LSTM in Fall Detection
LSTM[5] is a valuable approach in fall detection, addressing precision and recall challenges. Recall is crucial in fall detection to minimize false negatives and ensure timely assistance. LSTM detects sudden sit and fall events by capturing their temporal dynamics and leveraging its memory component. Compared to methods like MLP, LSTM's ability to capture long-term dependencies enhances its effectiveness in recognizing fall patterns. Utilizing LSTM in fall detection systems can significantly improve the safety and well-being of individuals requiring monitoring.
### Sensor
The ADXL345 sensor is reliable and versatile for applications requiring precise motion sensing and acceleration measurement. It offers exceptional performance with low power consumption, high resolution, and programmable range options. With its 3-axis acceleration sensing, the ADXL345 provides comprehensive motion-sensing capabilities. Its ability to accurately measure changes in acceleration makes it well-suited for fall detection applications. The sensor's low power consumption and compact size make it ideal for integration into wearable devices. The programmable features of the ADXL345 enable customization for optimizing fall detection accuracy and minimizing false alarms.
Figure 2: ADXL345 Sensor
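A minimal acquisition sketch is shown below. It assumes the sensor is wired to I2C bus 1 of the Raspberry Pi at the default address 0x53, uses the register map from the ADXL345 datasheet (POWER_CTL at 0x2D, data registers starting at 0x32, 3.9 mg/LSB in the default ±2 g mode), and polls at an illustrative 50 Hz; these wiring and rate choices are not fixed by the paper.

```python
import time
from smbus2 import SMBus

ADDR, POWER_CTL, DATAX0 = 0x53, 0x2D, 0x32
SCALE = 0.0039  # g per LSB in the default +/-2 g, 10-bit mode (datasheet)

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, POWER_CTL, 0x08)      # set the Measure bit
    for _ in range(100):
        raw = bus.read_i2c_block_data(ADDR, DATAX0, 6)
        # each axis is a little-endian signed 16-bit word
        ax, ay, az = (
            int.from_bytes(bytes(raw[i:i + 2]), "little", signed=True) * SCALE
            for i in (0, 2, 4)
        )
        print(f"ax={ax:+.2f} g  ay={ay:+.2f} g  az={az:+.2f} g")
        time.sleep(0.02)                            # ~50 Hz sampling
```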
## 4 Proposed Methodology
### Overview of Hardware system
Figure 3 provides an overview of the complete system, encompassing power management, data acquisition, and communication in a mobile device. Key components include the Power Manager Unit for efficient power distribution, an accelerometer and gyroscope for accurate motion sensing, Raspberry Pi 3 B+ as the central processing unit, GPS and SIM908 modules for precise positioning, a GSM module for network communication, and a user device for interaction with the system.
### Data Preprocessing
Data preprocessing enhances fall detection system performance by addressing noise, drift, and artifacts in accelerometer and gyroscope sensor data. Filtering techniques like low-pass and median filters reduce noise and outliers. Normalization scales data to a standardized range, ensuring consistent comparisons across sensors or individuals. Feature extraction identifies relevant patterns, such as statistical measures or frequency-domain features, facilitating accurate fall detection. The Butterworth filter provides a smoothing effect, highlighting significant variations in acceleration or angular velocity associated with falls. The choice of filter order and cutoff frequency balances precision, computational complexity, and phase distortion. Min-Max normalization eliminates biases and scaling effects, improving accuracy and robustness; Z-score normalization is an alternative technique. The choice of preprocessing methods depends on system requirements and sensor data characteristics. Figure 4 visually represents accelerometer data for a forward fall, showcasing both the raw and filtered data.
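A sketch of the described pipeline, combining scipy's Butterworth implementation with min-max scaling, is given below; the sampling rate, cutoff frequency and filter order are illustrative choices rather than values fixed by the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(signal, fs=50.0, cutoff=5.0, order=4):
    """Low-pass Butterworth smoothing followed by min-max normalisation.

    fs, cutoff and order are assumed, illustrative parameters.
    """
    b, a = butter(order, cutoff, btype="low", fs=fs)
    smoothed = filtfilt(b, a, signal, axis=0)       # zero-phase filtering
    lo, hi = smoothed.min(axis=0), smoothed.max(axis=0)
    return (smoothed - lo) / (hi - lo + 1e-12)      # scale each channel to [0, 1]

# e.g. a raw window of shape (n_samples, 6): 3-axis accelerometer + 3-axis gyroscope
window = np.random.randn(200, 6)
clean = preprocess(window)
```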
### Proposed Model
In our research, we utilized a Long Short-Term Memory (LSTM) model, a type of recurrent neural network, for fall detection. There are two LSTM layers, with 64 and 32 memory units, respectively. This design effectively captured temporal dependencies and patterns in the accelerometer and gyroscope data, enabling accurate classification of falls. The input data for the LSTM model included accelerometer and gyroscope measurements from three axes, providing a comprehensive representation of motion data. The model processed sequences of 50 time steps, each with six features (3-axis accelerometer and 3-axis gyroscope measurements). To prevent overfitting, Dropout layers were incorporated after the
Figure 3: Overview of the Hardware system
LSTM layers. The LSTM model employed an output layer with two units representing fall and non-fall classes. A softmax activation function generated probabilities for each category, allowing the model to estimate the likelihood of input sequences belonging to each class. The model was trained using the categorical cross-entropy loss function to minimize the discrepancy between predicted and actual class labels. The Adam optimizer, known for adapting learning rates for individual model parameters, updated the model's weights based on computed gradients. Default parameters were employed, and a batch size of 32 was used to facilitate efficient parameter updates and speed up model convergence.
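The architecture described above translates into a few lines of Keras; the dropout rate (0.2) is an assumed value, as the paper does not specify it.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 6)),            # 50 time steps x (3-axis acc + 3-axis gyro)
    layers.LSTM(64, return_sequences=True), # first LSTM layer, 64 memory units
    layers.Dropout(0.2),                    # assumed dropout rate
    layers.LSTM(32),                        # second LSTM layer, 32 memory units
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),  # fall vs. non-fall probabilities
])
model.compile(optimizer="adam",             # default Adam parameters, as in the paper
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=32, epochs=..., validation_split=...)
```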
### Applied Weight Pruning Techniques
To optimize the LSTM model, weight pruning techniques were applied. Weight pruning involves selectively removing less significant connections or weights from the network while preserving accuracy. Figure 5 represents the pipeline of weight pruning in LSTM. The goal was to achieve a pruning sparsity of 10 to 30%, i.e., the fraction of weights set to zero. Weight pruning offers benefits beyond model compression. It improves computational efficiency by reducing computations during inference. The reduced model size enables easier deployment on resource-constrained devices. Weight pruning contributes to developing efficient and lightweight fall detection systems suitable for real-world applications.
Figure 4: Graphical representation of raw and filtered data
Figure 5: Pipeline of Weight Pruning in LSTM
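A hand-rolled sketch of magnitude-based weight pruning in the spirit of Figure 5 is given below; the target sparsity in [0.1, 0.3] matches the rates explored here. In practice, a short fine-tuning pass after pruning typically recovers most of the lost accuracy.

```python
import numpy as np

def magnitude_prune(model, sparsity=0.2):
    """Zero out the smallest-magnitude entries of every weight matrix.

    `sparsity` is the fraction of weights removed per matrix; bias vectors
    are left untouched.
    """
    pruned = []
    for w in model.get_weights():
        if w.ndim > 1:                               # skip bias vectors
            thr = np.quantile(np.abs(w), sparsity)
            w = np.where(np.abs(w) < thr, 0.0, w)
        pruned.append(w)
    model.set_weights(pruned)
    return model
```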
## 5 Result
### Dataset
Table 1 summarizes the real-time data collected to differentiate between falls and activities of daily living (ADLs). A group of six individuals (three women and three men, aged 30 to 60) participated in the data collection. Their heights ranged from 160 cm to 185 cm, and their weights varied from 50 kg to 85 kg. To this end, the participants performed six different types of ADLs and four types of falls under controlled conditions using a mattress with a thickness of 20 cm. A total of 10,770 data points were collected during the study, with 6,480 data points corresponding to ADLs and 4,290 data points corresponding to falls. The data points were collected from the participants while performing the designated activities, with variations in activity frequencies among participants. Participants A, B, C, D, and E (all aged 30) completed ten exercises each. In contrast, participants E, F (aged 55), G, H (aged 58), and I (aged 60) performed a different number of activities based on individual capabilities or study requirements.
### Experimental Setup
The research utilized Anaconda with Keras and TensorFlow on Windows 10 for training LSTM models. Testing was conducted on a Raspberry Pi 3 B+ in a Linux environment. Jupyter Notebook was used for code execution and result analysis. This approach allowed for evaluating model performance on different platforms and assessing real-world feasibility.
### Confusion Matrix
Figure 6 shows the confusion matrix for the fall detection system, which can be summarized as follows:
| ADL Activity | Notation | Data points |
| --- | --- | --- |
| Walking | NF1 | 980 |
| Sitting | NF2 | 1010 |
| Lying | NF3 | 970 |
| Running | NF4 | 1200 |
| Sudden Sit | NF5 | 1100 |
| Sudden Standing | NF6 | 1220 |

| Fall Activity | Notation | Data points |
| --- | --- | --- |
| Fall Forward | F1 | 1200 |
| Fall Backward | F2 | 990 |
| Fall Left | F3 | 1000 |
| Fall Right | F4 | 1100 |

Table 1: Description of six types of ADL and four types of fall
Figure 6: Confusion Matrix
For non-fall instances (Class 0), the system achieved 1083 true negatives (TN), correctly classified as non-fall instances, and 48 false positives (FP), incorrectly classified as fall instances. For fall instances (Class 1), the system achieved 732 true positives (TP), correctly identified as fall instances, and 35 false negatives (FN), incorrectly classified as non-fall instances.
This information provides valuable insights into the system's classification performance for fall and non-fall instances.
### Classification Report
Class 0 (non-fall) instances were classified with 97% precision and 96% recall, resulting in an F1-score of 96%. For class 1 (fall) instances, the precision was 94%, the recall was 95%, and the F1-score was 95%. These metrics demonstrate the system's high accuracy in identifying both non-fall and fall instances. Figures 7 and 8 show the training/test accuracy and the training/test loss, respectively.
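The reported figures follow directly from the confusion-matrix counts of Figure 6, as the short computation below confirms:

```python
# recover the classification report from the confusion-matrix counts
tn, fp, fn, tp = 1083, 48, 35, 732

prec0, rec0 = tn / (tn + fn), tn / (tn + fp)   # class 0 (non-fall)
prec1, rec1 = tp / (tp + fp), tp / (tp + fn)   # class 1 (fall)
f1 = lambda p, r: 2 * p * r / (p + r)

print(f"class 0: precision={prec0:.2f} recall={rec0:.2f} f1={f1(prec0, rec0):.2f}")
print(f"class 1: precision={prec1:.2f} recall={rec1:.2f} f1={f1(prec1, rec1):.2f}")
# class 0: precision=0.97 recall=0.96 f1=0.96
# class 1: precision=0.94 recall=0.95 f1=0.95
```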
### Receiver Operating Characteristic (ROC) Curve
Figure 9 displays the Receiver Operating Characteristic (ROC)[3] curve, visually representing the fall detection system's performance. The ROC curve showcases the trade-off between sensitivity (true positive rate) and specificity (1 - false positive rate) at various classification thresholds. The Area Under the ROC Curve (AUC) quantifies the performance of the fall detection system. The AUC value, calculated as 0.96 in this case, accurately measures the system's ability to differentiate between fall and non-fall instances. A higher AUC value signifies a stronger discriminatory power of the system.
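For reference, the curve and the AUC value can be obtained with the standard scikit-learn helpers; `y_prob` here denotes the predicted fall probability, e.g. `model.predict(X_test)[:, 1]`:

```python
from sklearn.metrics import auc, roc_curve

def report_auc(y_true, y_prob):
    """AUC from ground-truth labels and predicted fall probabilities."""
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    return auc(fpr, tpr)      # evaluates to 0.96 for the system of Figure 9
```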
## 6 Comparison of different Models and Proposed LSTM
The comparison of different models involved assessing their performance based on various metrics such as accuracy, precision, and recall. Table 2 presents valuable insights into the models' performance in accurately classifying fall and non-fall instances while effectively reducing false positives and negatives.
(A - Accelerometer, G - Gyroscope, N/A - Not Applicable)
## 7 Conclusion and Future Work
This research addressed the critical issue of fall-related incidents by developing and evaluating a fall detection system using LSTM. The system achieved high accuracy, precision, and recall, effectively
minimizing false negatives. The LSTM-based system demonstrated competitive accuracy, sensitivity, and specificity compared to previous models. Future research directions include model optimization, sensor integration, real-time implementation, dataset expansion, and validation in real-world scenarios. Overall, this work improves the safety and quality of life for individuals at risk of falls. Future research should focus on utilizing more lightweight sensors and incorporating additional features to improve the accuracy of the fall detection system.
|
2309.17347 | Demographic Parity: Mitigating Biases in Real-World Data | Computer-based decision systems are widely used to automate decisions in many
aspects of everyday life, which include sensitive areas like hiring, loaning
and even criminal sentencing. A decision pipeline heavily relies on large
volumes of historical real-world data for training its models. However,
historical training data often contains gender, racial or other biases which
are propagated to the trained models influencing computer-based decisions. In
this work, we propose a robust methodology that guarantees the removal of
unwanted biases while maximally preserving classification utility. Our approach
can always achieve this in a model-independent way by deriving from real-world
data the asymptotic dataset that uniquely encodes demographic parity and
realism. As a proof-of-principle, we deduce from public census records such an
asymptotic dataset from which synthetic samples can be generated to train
well-established classifiers. Benchmarking the generalization capability of
these classifiers trained on our synthetic data, we confirm the absence of any
explicit or implicit bias in the computer-aided decision. | Orestis Loukas, Ho-Ryun Chung | 2023-09-27T11:47:05Z | http://arxiv.org/abs/2309.17347v1 | # Demographic Parity: Mitigating Biases in Real-World Data
###### Abstract
Computer-based decision systems are widely used to automate decisions in many aspects of everyday life, which include sensitive areas like hiring, loaning and even criminal sentencing. A decision pipeline heavily relies on large volumes of historical real-world data for training its models. However, historical training data often contains gender, racial or other biases which are propagated to the trained models influencing computer-based decisions. In this work, we propose a robust methodology that guarantees the removal of unwanted biases while maximally preserving classification utility. Our approach can always achieve this in a model-independent way by deriving from real-world data the asymptotic dataset that uniquely encodes demographic parity and realism. As a proof-of-principle, we deduce from public census records such an asymptotic dataset from which synthetic samples can be generated to train well-established classifiers. Benchmarking the generalization capability of these classifiers trained on our synthetic data, we confirm the absence of any explicit or implicit bias in the computer-aided decision.
## 1 Introduction
Artificial intelligence (AI) finds extensive application in various classification tasks, ranging from buyer's guides to prioritizing ICU admissions and from hiring processes to self-driving cars. Computer-aided decision systems have demonstrated remarkable success in automating workflows and deriving accurate conclusions. However, it is important to recognize that the very factor contributing to the success of AI models also represents a potential vulnerability.
Any sufficiently complex machine-learning algorithm is expected to uncover all systematic patterns inherent in the data to ensure realistic decision-making. This faithful representation of our social reality is essential, as it determines the practical utility of implementing AI processes in automating decision-making. On the other hand, faithfully generalizing from patterns and trends observed in real-world datasets automatically implies replicating any discriminatory biases present within the dataset itself.
In principle, two forms of discriminatory biases can be encountered in a classification setting. The first form is more apparent, enabling the identification of direct discriminatory relationships between a protected attribute, such as gender, and the final decision. On the other hand, the second form is subtler, as it indirectly connects sensitive profiles to the decision through a discriminatory confounding with another predictor. While the first form can be addressed by completely removing protected attributes from the dataset, the second form of bias is more challenging to detect and address. Most alarmingly, this second form of bias can resurface when the classifier generalizes to new data that
persistently exhibits biases from society, even if offending confounding relationships have been correctly identified and removed during training.
As pattern-recognition and classification workflows in AI become increasingly complex, it becomes more challenging to systematically identify and prevent both direct and indirect forms of discrimination influencing the computer-aided decision. This inability to guarantee the absence of known or suspected discriminatory biases hinders the broader application of AI, particularly in critical domains such as criminal sentencing or governance. In recent years, there has been a growing demand (Xu et al., 2021; Mehrabi et al., 2021) for automation that is free from discriminatory biases, leading to the emergence of fair machine learning. Fair machine learning aims to accurately reproduce most patterns revealed by data while simultaneously restoring parity among sensitive profiles.
Within the context of fair machine learning, we adopt a systematic, model-independent approach that separates the task of de-biasing data from the actual training process of a classifier architecture. This clear distinction allows us to provide robust mathematical assurances of fairness on train and test data that are independent of the complexity of the model architecture. Figure 1 illustrates our distinct approach to achieving fairness by appropriately modifying the data.
Given real-world data and after declaring protected predictors like gender, race/ethnicity, sexual orientation etc., one imposes a series of marginal constraints from the original data that any de-biased dataset has to obey, at least up to sampling noise. We propose to require that our data fulfils demographic Parity, classification Utility and social Realism, in short PUR. Starting from these rather intuitive constraints, we additionally demand that the de-biased distribution remain as close as possible to the original data. In statistics, this optimization problem uniquely produces a fair probability distribution over profiles that precisely captures desired classifying relationships, while modifying (softly constrained) higher-order relationships to achieve demographic parity.
In addition to drawing upon mathematical theorems, we demonstrate the logic and effectiveness of the PUR approach through concrete applications. Once we have derived a fair distribution from training data that summarizes real-world census records, we employ it as a natural classifier to make predictions on test data. This approach allows us to verify the absence of systematic bias against the designated protected attributes, while also confirming the classification utility of the natural classifier. Additionally, we leverage the fair distribution to generate synthetic datasets, which are then used to train random forests. This step highlights the ability of our methodology to generalize in broader contexts by augmenting established models.
## 2 Methodology
In any classification setting, there minimally exists a - usually categorical - feature, the so-called response variable \(Y\) with at least two outcomes intimately related to a collection of explanatory features. The latter are perceived as random variables comprising the set of predictors. Each predictor assumes an a priori different number of categories from some domain.1 Among predictors, we distinguish between _protected_ attributes \(\mathbf{S}=(S_{1},S_{2},\ldots)\) that could entail _sensitive_ relationships to the response variable \(Y\) and the remaining, _unprotected_ attributes \(\mathbf{X}=(X_{1},X_{2},\ldots)\). A tuple \((s_{i},x_{j})\) with \(s_{i}\in S_{i}\) and \(x_{j}\in X_{j}\) then unambiguously characterizes a predictor profile.
Footnote 1: For compactness of notation, we use the same capital letter to collectively refer to the feature as well as its domain.
Figure 1: The flow chart of the PUR methodology, which removes discriminatory biases to produce fair utility-driven datasets. The latter can be used in training classifiers.
### Preliminaries
Any model-independent formulation of statistical problems necessarily relies on the _joint_ probability distribution \(p\) over possible profiles \((y,\mathbf{s},\mathbf{x})\) from the Cartesian product of response domain \(Y\) with all predictor domains \(\mathbf{S}\) and \(\mathbf{X}\). Armed with some estimate of this joint probability distribution, we can compute _marginals_ of selected features, say \(Y\) and \(X_{i}\) taking specific values \((y,x_{i})\) by summing over all probabilities of joint profiles where the selected features assume the specified values. Determining marginal sums for all possible profiles in the Cartesian product of the selected domains defines in turn a marginal distribution.
A direct estimate of joint probability distribution can be always obtained by calculating from the provided dataset relative frequencies \(f(y,\mathbf{s},\mathbf{x})\) which comprise the _empirical_ distribution. Due to finite sample sizes or deterministic relationships like natural laws, not all profiles in the Cartesian product of feature domains are necessarily observed in real life, meaning that \(f\) usually exhibits many sampling and structural zeros (Bishop et al., 2007), respectively. In any case, we shall assume that all classes in \(Y\) have been encountered in the data, at least once, as well as all sensitive profiles from \(\mathbf{S}\).
The theoretical machinery itself that is invoked in the next section is insensitive to the presence of zero estimates in the empirical distribution. However, to achieve fairness we shall make sure that any marginal \(f(y,\mathbf{s})\) involving the response to the sensitive attributes receives a finite probability. One straight-forward way to achieve this in probability space is via the pseudo-count method (Morcos et al., 2011):
\[f(y,\mathbf{s},\mathbf{x})\quad\rightarrow\quad\frac{f(y,\mathbf{s},\mathbf{ x})+\lambda/N}{1+|Y||\mathbf{S}||\mathbf{X}|\lambda/N} \tag{1}\]
\(|\cdot|\) denotes the cardinality of a feature domain. The hyper-parameter \(\lambda\), which controls the regularization strength, was originally thought to be fixed to one. Nevertheless, our method robustly works with any \(\lambda>0\).
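In code, Eq. (1) is a one-liner over the flattened table of joint profiles; the toy numbers below are purely illustrative.

```python
import numpy as np

def pseudo_count_smooth(f, lam, N):
    """Eq. (1): regularise an empirical distribution f over joint profiles.

    f holds relative frequencies summing to one, N is the sample size and
    lam > 0 the regularisation strength (lam = 1 recovers the classic
    pseudo-count prescription).
    """
    K = f.size                       # |Y||S||X|, number of joint profiles
    return (f + lam / N) / (1.0 + K * lam / N)

f = np.array([0.5, 0.3, 0.2, 0.0])  # a sampling zero in the last cell
p = pseudo_count_smooth(f, lam=1.0, N=100)
assert np.isclose(p.sum(), 1.0) and np.all(p > 0)
```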
Since heavily extrapolating to unseen profiles could well be misleading, one could uniformly regularize the Cartesian product of all admissible labels \(y\in Y\) with only the predictor profiles \((\mathbf{s},\mathbf{x})\in\mathbf{S}\times\mathbf{X}\) that have been observed in the data. Besides concerns (Jaynes, 1968) regarding artifacts created by excessive regularization, assigning pseudo-counts to all joint profiles in \(Y\times\mathbf{S}\times\mathbf{X}\) would quickly reveal the np-completeness underlying categorical problems with \(L\) features which scale at least as \(2^{L}\). By considering only predictor profiles that have been observed in real-world data (far below any bound posed by current computational technology), we are able to deduce exact results in Section 2.3.
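For concreteness, a minimal pandas sketch of the regularization 1 could look as follows; the function name and the representation of the empirical distribution as a `pandas.Series` indexed by joint profiles are our own choices, and `M` stands for the number of regularized cells (all of \(Y\times\mathbf{S}\times\mathbf{X}\), or only the observed predictor profiles crossed with all labels, as discussed above):

```python
import pandas as pd

def regularize(counts: pd.Series, lam: float = 1.0) -> pd.Series:
    """Pseudo-count regularization of the empirical distribution (Eq. 1).

    `counts` holds raw counts indexed by joint profiles (y, s, x);
    restricting the index to observed predictor profiles implements the
    milder regularization advocated above.
    """
    N = counts.sum()
    f = counts / N                   # empirical relative frequencies
    M = len(counts)                  # number of regularized cells
    return (f + lam / N) / (1 + M * lam / N)   # sums to one by construction
```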
### Problem statement
The provided data could be - often severely - biased against sensitive profiles \(\mathbf{s}\in\mathbf{S}\) corresponding to discriminated groups. Quantitatively, widely used (Braveman, 2006; Mehrabi et al., 2021) measures of such _disparity_ are defined as either ratios or differences between conditional probabilities. Focusing on a possible outcome \(y\in Y\), we examine after marginalizing over \(\mathbf{X}\), the deviation of conditional \(p(y|\mathbf{s})\) given a sensitive profile \(\mathbf{s}\) from a reference profile \(\mathbf{s}_{0}\). The latter usually corresponds to a group which enjoys social privileges, also in accordance with the provided data. Evidently, _demographic parity_ is restored whenever the conditional probabilities of the outcome become independent from protected attributes.
Generically, fair machine learning tries to avoid reproducing biased decisions against sensitive profiles that are advocated by training data. This objective appears to undermine the desired accuracy and generalization capability of a classification routine. In an extreme scenario, it would be possible to trivially create a fair classifier by assigning equal probability to every joint profile \((y,\mathbf{s},\mathbf{x})\) at the expense of losing any predictive power from the original data. Already in previous work (Bhargava et al., 2022) on fair machine and representation learning, the notion of an "optimal" classifier has appeared, which partially compromises classification power to - almost - achieve parity.
To be able to rigorously establish a definition of optimality, we first need to decouple the question about the architecture of a fair classifier from de-biasing training data. Focusing on the latter point, our scheme entirely operates at the level of (pre)-processing real-world datasets that are plagued by discriminatory biases. Ultimately, we want to guarantee that the pre-processed data described by a joint distribution \(p\) systematically satisfies parity among all profiles \(\mathbf{s}\in\mathbf{S}\), while fully preserving
real-world classification utility. As a result, any classifier would be at most exposed to training data described by \(p\), instead of the original \(f\), according to the flow chart in Figure 1.
Translated into the language of distributions over joint profiles, our motivating goal thus becomes to find a _fair_ estimate for \(p\) that enforces demographic Parity while retaining the classification Utility of the original \(f\). This amounts to requiring the following marginal constraints for all admissible profiles:
* demographic Parity \[\sum_{\mathbf{x}\in\mathbf{X}}p(y,\mathbf{s},\mathbf{x})=f(y)f(\mathbf{s})\] (2)
* decision Utility \[\sum_{\mathbf{s}\in\mathbf{S}}p(y,\mathbf{s},\mathbf{x})=f(y,\mathbf{x})\] (3)
* demographic Realism \[\sum_{y\in Y}p(y,\mathbf{s},\mathbf{x})=f(\mathbf{s},\mathbf{x})\] (4)
Any \(p\) that belongs to the convex set of distributions over \(Y\times\mathbf{S}\times\mathbf{X}\) which satisfy these three groups of linear constraints in \(p(y,\mathbf{s},\mathbf{x})\) shall be called a _pur_ distribution.
In the _pur_ scheme, demographic Parity is enforced as the absence of correlations between the response variable and the sensitive attributes. Note that constraint 2 implies \(p(y|\mathbf{s})=p(y)=f(y)\) for the derived conditional probabilities. Consequently, any disparity ratio directly deduced from such a _pur_ distribution \(p\) would be automatically one and any disparity difference zero.
At the same time, decision Utility ensures that relationships of the unprotected attributes \(\mathbf{X}\) to the response variable \(Y\) remain unaltered in \(p\) and are not accidentally biased over pre-processing when correcting for Parity. Finally, demographic Realism prevents any form of indirect biasing (that could undermine our aim) by retaining the discriminatory relationships among the predictors \(\mathbf{S}\) and \(\mathbf{X}\) which are currently present in society, as evidenced in the data.
Below, we show via concrete applications that these _pur_ marginal conditions comprise a minimal set of hard constraints required to systematically achieve our stated goals. One of them is to stay as close as possible to the original dataset while correcting for any disparities. In terms of distributions, this can be expressed as minimization of the kl divergence (Shore and Johnson, 1980; Kullback, 1997) from \(f\),
\[D_{\texttt{kl}}(p||f)=\sum_{(y,\mathbf{s},\mathbf{x})}p(y,\mathbf{s},\mathbf{ x})\log\frac{p(y,\mathbf{s},\mathbf{x})}{f(y,\mathbf{s},\mathbf{x})} \tag{5}\]
over all joint distributions that fulfill constraints 2, 3 and 4.
For the empirical \(f\) as our _reference_ distribution, we have to use the regularized estimate 1; otherwise, the kl divergence might not be well-defined, especially for smaller datasets. Furthermore, we do not need to worry about unobserved profiles, as these have no information-theoretic impact, due to \(0\cdot\log 0=0\). Hence, the summation in Eq. 5 only needs to run over the Cartesian product of the anticipated outcomes with the observed predictor profiles.
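A minimal sketch of this restricted kl divergence, under the same Series-based conventions as above (the function name is ours):

```python
import numpy as np
import pandas as pd

def kl_divergence(p: pd.Series, f: pd.Series) -> float:
    """kl divergence D(p || f) of Eq. 5 over the profiles indexing f.

    Terms with p = 0 are dropped, implementing 0 * log 0 = 0; the
    regularized reference f is assumed strictly positive on its index.
    """
    p = p.reindex(f.index, fill_value=0.0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / f[mask])))
```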
### The fair solution
As it turns out (Csiszar, 1975; Csiszar, 1991), under a consistent2 set of linear constraints, the minimization of kl divergence in the probabilities over observed profiles poses a convex optimization problem. This always admits a unique solution, the so-called _information projection_(Nielsen, 2018) of empirical \(f\) onto the convex solution space defined by constraints 2, 3 and 4, in short the _pur_ projection of \(f\). In Appendix, we recapitulate the proof of existence and uniqueness of the information-projection in a more applied fashion.
Footnote 2: Any constraint involving the prevalence of some joint profile \((y,\mathbf{s},\mathbf{x})\) could render the linear system of coupled equations 2-4 over-determined.
Generically, the information-projection of \(f\) on the solution set defined by _pur_ conditions would be a joint distribution with real and not rational probabilities, the latter being relevant for finite sample
size \(N\). Hence, one should think of the pur projection of \(f\), signified by \(q\), as the _asymptotic_ limit \(N\rightarrow\infty\) at which a dataset with the Utility and Realism of the original data restores demographic Parity. This is well demonstrated via sampling of counts from \(q\).
**Production of synthetic data** At finite sample size \(N\), we can formally sample counts \(Np(y,\mathbf{s},\mathbf{x})\in\mathbb{N}_{0}\) from \(q\) via the multinomial distribution \(mult(Np;q)\). In larger populations, it is permissible [10] to use the multinomial instead of the formally more appropriate hyper-geometric distribution to sample datasets that are smaller than the population size. Incidentally, this sampling operation provides a coherent way to generate synthetic data described by \(p\) that differ from the pur projection by mere sampling noise.
In other words, synthetic data produced from \(q\) as indicated by the last step in Figure 1 would not introduce any systematic demographic bias against \(\mathbf{S}\), as long as this had been fully removed from \(q\). Indeed, a large-\(N\) expansion,
\[\log mult(Np;q)=-ND(p||q)+\ldots \tag{6}\]
best demonstrates (recall that \(D(p||q)\to 0\) iff \(p\to q\)) how synthetic datasets sampled from the pur projection \(q\) become, with increasing \(N\), more and more concentrated around it.
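A sketch of this sampling step, assuming the pur projection is stored as a normalized `pandas.Series` (the helper name and the fixed seed are illustrative choices of ours):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def sample_synthetic(q: pd.Series, N: int) -> pd.Series:
    """Sample a synthetic dataset of size N from the pur projection q.

    Returns integer counts per joint profile; the implied relative
    frequencies deviate from q by mere sampling noise, cf. Eq. 6.
    """
    counts = rng.multinomial(N, q.values / q.values.sum())
    return pd.Series(counts, index=q.index, name="count")
```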
**Alternative reference distribution** As argued below Eq. 5, an intuitive reference distribution from which to select the pur projection is the regularized empirical distribution. By tuning \(\lambda\) in Eq. 1, we can always bring the regularized \(f\) closer to the uniform distribution \(u\), which assigns the same probability to any joint profile \((y,\mathbf{s},\mathbf{x})\). In the limit \(\lambda\rightarrow\infty\), we uncover, owing to the identity (\(H\) denotes Shannon's entropy)
\[H[p]=-D_{\text{\sc KL}}(p||u)+\log\left(|Y||\mathbf{S}||\mathbf{X}|\right)\]
the principle of Maximum entropy [11], in short maxEnt under pur constraints.
To avoid disclosing higher-order effects between predictors and response, an aspect of paramount importance in privacy-related applications, one could well consider the pur projection of \(u\) as a starting point for fair model-building. Such a choice goes in the direction of [10], though in our setup we ensure that the optimal maxEnt distribution exactly satisfies the fairness constraints 2-4. As the proposed formalism remains structurally the same under any reasonable (i.e. not unjustifiably biased) reference distribution in Eq. 5, it bears the potential to be readily applied at the crossroads of fair and private machine learning, in the spirit of [10, 11].
**The iterative minimization of information divergence** After receiving a dataset described by the empirical distribution \(f\) and having decided on a reference distribution, most intuitively \(f\) itself, we need to compute its unique pur projection. In most cases, there exists no closed-form solution, so some iterative method must be invoked. In principle, multi-dimensional Newton-based methods could quickly find \(q\) starting e.g. from \(f\), after reducing the pur conditions to linearly independent constraints [10].
Another class of iterative approaches, which is particularly tailored to enforce marginal constraints on a reference distribution, is the Iterative Proportional Fitting (ipf) algorithm, first introduced in [13]. As argued in [14, 15, 16] and rigorously shown in [10, 11], this iterative scheme has all the guarantees (see also the discussion in [10]) to converge to the pur distribution within the desired numerical tolerance. At the practical level, one can directly work with the redundant set of conditions 2-4 (e.g. both marginals \(p(y,\mathbf{x})\) and \(p(y,\mathbf{s})\) imply the prevalence \(p(y)\)), manifestly preserving interpretability.
Programmatically, we start from \(p^{(0)}=f\) and iteratively update our running estimate by imposing pur conditions,
\[p^{(n+1)}(y,\mathbf{s},\mathbf{x}) =p^{(n)}(y,\mathbf{s},\mathbf{x})\frac{f(y)f(\mathbf{s})}{p^{(n) }(y,\mathbf{s})}\] \[p^{(n+2)}(y,\mathbf{s},\mathbf{x}) =p^{(n+1)}(y,\mathbf{s},\mathbf{x})\frac{f(y,\mathbf{x})}{p^{(n+ 1)}(y,\mathbf{x})}\] \[p^{(n+3)}(y,\mathbf{s},\mathbf{x}) =p^{(n+2)}(y,\mathbf{s},\mathbf{x})\frac{f(\mathbf{s},\mathbf{x })}{p^{(n+2)}(\mathbf{s},\mathbf{x})}\]
until \(p^{(n)}\to q\) within numerical tolerance. Note that the order with which we impose marginal constraints does not influence the eventual convergence, as long as it remains fixed throughout the procedure. Besides general-purpose ipf packages (IPF, 2020) and (IPF) for Python and R, we provide in supplementary material a self-contained data-oriented implementation of ipf routine based on numpy and pandas modules (Harris et al., 2020; McKinney et al., 2010).
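In the spirit of that supplementary implementation, a compact sketch of the full routine might look as follows; the distribution is assumed to live in a `pandas.Series` whose MultiIndex carries levels named 'Y', 'S' and 'X' (with several protected or unprotected attributes, 'S' and 'X' would be tuple-valued), and the regularization of Eq. 1 is assumed to have made every \((y,\mathbf{s})\) marginal strictly positive:

```python
import numpy as np
import pandas as pd

def ipf_pur(f: pd.Series, tol: float = 1e-10, max_cycles: int = 1000) -> pd.Series:
    """Iterative proportional fitting onto the pur constraints 2-4.

    `f` is a (regularized) empirical distribution over joint profiles,
    stored as a Series whose MultiIndex carries levels 'Y', 'S', 'X'.
    """
    f_y, f_s = f.groupby(level='Y').sum(), f.groupby(level='S').sum()
    parity = pd.Series(np.outer(f_y.values, f_s.values).ravel(),
                       index=pd.MultiIndex.from_product(
                           [f_y.index, f_s.index], names=['Y', 'S']))
    targets = [(['Y', 'S'], parity),                              # demographic Parity
               (['Y', 'X'], f.groupby(level=['Y', 'X']).sum()),   # decision Utility
               (['S', 'X'], f.groupby(level=['S', 'X']).sum())]   # demographic Realism
    p = f.copy()
    for _ in range(max_cycles):
        p_old = p.copy()
        for keep, target in targets:   # fixed order of fittings, as noted above
            drop = [l for l in p.index.names if l not in keep]
            ratio = target / p.groupby(level=keep).sum()
            # multiplicative update: rescale each joint probability by
            # (target marginal) / (current marginal) of its cell
            p = p * ratio.reindex(p.index.droplevel(drop)).values
        if (p - p_old).abs().max() < tol:
            break
    return p
```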
## 3 Application
To demonstrate the efficiency and flexibility of the developed methodology, we consider census data from the USA.
### Multi-label classification
For the period from 1981 to 2013, census records are publicly available under [https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset](https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset). After appropriate binning into lower, mid-range and higher salaries, we choose the response variable \(Y=\texttt{hourly salary ranges}\) with five outcomes. From the provided raw data, it is straightforward to define a predictor profile by unprotected attributes \(\mathbf{X}=(\texttt{age group},\texttt{education degree},\texttt{occupation sector})\) and protected attributes \(\mathbf{S}=(\texttt{gender},\texttt{race})\). As sensitive profiles, we examine \(\texttt{gender}=\texttt{male},\texttt{female}\) and \(\texttt{race}=\texttt{white},\texttt{black},\texttt{hispanic}\).
Furthermore, we use the empirical \(f\) associated with each census year appearing in the raw data to sample (Politis et al., 1999) a wealth of training and test datasets within the original sample sizes of \(\sim 35\,000-55\,000\) entries. Details on the statistics of relevant features and the defined census profiles, as well as on the generation of train-test data, are given in the Appendix.
As a measure of unfairness, we choose to look at _attributable disparity_ defined as (cf. (Walter, 1976))
\[p(y|\mathbf{s})-p(y|\mathbf{s}_{0}) \tag{7}\]
w.r.t. some reference group that enjoyed social privileges at the time of the survey. A quick inspection of the empirical statistics for \(p=f\) reveals that \(\mathbf{s}_{0}=(\texttt{male},\texttt{white})\) had a conditional probability of roughly below 50% of falling in the lower salary ranges, as opposed to all other sensitive profiles, with the conditional probability in the lower salary range rising above \(90\%\) for \(\mathbf{s}=(\texttt{female},\texttt{hispanic})\). The picture is reversed for higher salaries. Evidently, the attributable disparity 7 vanishes identically whenever \(p=q\), where by construction the fair \(q\) denotes the pur projection of a train distribution.
Similar to the original observations made in (Blau and Kahn, 2017), the positive and negative disparities slowly approach zero over the years in the lower and higher salaries, respectively. Still, up to 2013 there remains a significant amount of demographic disparity 7, reaching 30% over the whole salary range. Sampled from the original empirical distributions of the different years, both train and test data exhibit similar trends, reproducing in particular the discriminatory bias. Indeed, this can be easily confirmed by plotting the average attributable disparity alongside its fluctuation scale over simulated train data, see the first column of Figure 2.
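A sketch of how the attributable disparity 7 can be read off any joint distribution in this representation (the function name and the encoding of the protected profile as a single index level are our choices):

```python
import pandas as pd

def attributable_disparity(p: pd.Series, s0) -> pd.DataFrame:
    """Attributable disparity p(y|s) - p(y|s0) of Eq. 7.

    `p` is a joint distribution indexed by levels ('Y', 'S', 'X'); the
    protected profile is encoded as a single 'S' level, with `s0` the
    reference value, e.g. ('male', 'white').
    """
    p_ys = p.groupby(level=['Y', 'S']).sum()                      # marginalize over X
    cond = p_ys.div(p_ys.groupby(level='S').sum(), level='S')     # p(y|s)
    table = cond.unstack(level='S')                               # rows y, columns s
    return table.sub(table[s0], axis=0)                           # subtract reference
```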
**Generalization and Parity** Henceforth, we focus on the year 1981; an analogous analysis and exposition of results for the following census years is provided in the Supplementary Material. After running ipf to incorporate all pur conditions stemming from the (mildly regularized with \(\lambda=10^{-4}\)) empirical distributions describing the simulated train data, we obtain their pur projections \(q\). In principle, we could use the pur projections to produce a wealth of synthetic data points and subsequently train more elaborate classifiers to perform predictions on test data. Nevertheless, there is nothing that prevents us from using \(q\) itself as a natural classifier according to the fundamental rule of conditional probabilities:
\[p_{\text{pred}}(y,\mathbf{s},\mathbf{x})=q(y|\mathbf{s},\mathbf{x})\cdot f_{ \text{test}}(\mathbf{s},\mathbf{x}) \tag{8}\]
where \(f_{\text{test}}\) denotes the empirical distribution of simulated test data.
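A sketch of this natural classifier under the same conventions (names are ours; test profiles \((\mathbf{s},\mathbf{x})\) unseen in training would need extra care and are assumed absent here):

```python
import pandas as pd

def natural_classifier(q: pd.Series, f_test_sx: pd.Series) -> pd.Series:
    """Prediction rule 8: p_pred(y, s, x) = q(y | s, x) * f_test(s, x).

    `q` is the pur projection of the train distribution (levels
    'Y', 'S', 'X'); `f_test_sx` is the empirical (S, X) marginal of the
    test data.
    """
    sx = q.index.droplevel('Y')                          # (s, x) cell per profile
    q_sx = q.groupby(level=['S', 'X']).sum()             # q(s, x)
    cond = q / q_sx.reindex(sx).values                   # q(y | s, x)
    return cond * f_test_sx.reindex(sx, fill_value=0.0).values
```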
Beyond mere intuition, to illustrate the necessity of all pur conditions, we determine using ipf the information projection of each train dataset under Parity (p), Parity and Utility (pu) and eventually pur. The average predictions (alongside the scale of fluctuations over simulated data) for the different combinations of conditions are depicted in the three last columns of Figure 2, respectively.
Clearly, demographic Parity is systematically (i.e. beyond mere sampling noise) achieved only using the pur projection as a natural classifier on test data. In particular, demographic Realism enables \(q\) to compensate for test data discriminating against sensitive groups through e.g. lower prevalence in highly paid jobs. It is however noteworthy that minimizing the kl-divergence from the train empirical distribution under Parity condition alone still improves the situation compared to directly using the train distribution itself as a classifier, cf. first two columns of Figure 2.
To illustrate the situation described by the pur projection more closely, we compare in Figure 3 the conditional probability \(p_{\text{pred}}(y|\mathbf{s})\) for all seven gender \(\times\) race profiles against the original marginal \(f(y)\). Evidently, pur predictions obey the general variability in the empirical distribution of salaries \(f(y)\) - triggered by e.g. different occupations and education levels in \(\mathbf{X}\). As anticipated, the conditional estimate of 8 over simulated samples statistically fluctuates around this global profile without any systematic discriminatory tendency triggered by \(\mathbf{S}\).
**Generalization and Utility** For any machine-learning algorithm, a measure of its generalization capability is required. In the given context of fairness, where data has been deliberately - albeit in a controlled manner - modified, the kl divergence of the predicted joint distribution 8 from the empirical distribution of test data could become misleading. On the contrary, it is most natural to introduce a Utility-based metric to quantify the generalization error in a model-independent way. Our suggestion is the kl divergence of the test \(Y-\mathbf{X}\) marginal from the corresponding predicted marginal:
\[\sum_{y\in Y}\sum_{\mathbf{x}\in\mathbf{X}}f_{\text{test}}(y,\mathbf{x})\log\frac{f_{\text{test}}(y,\mathbf{x})}{p_{\text{pred}}(y,\mathbf{x})} \tag{9}\]
Self-consistently, the metric becomes zero, by merit of condition 3, when replacing the test with the train empirical distribution and the predicted distribution 8 with the pur projection of the train distribution.
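A sketch of this metric, again for Series-encoded joint distributions:

```python
import numpy as np
import pandas as pd

def utility_error(f_test: pd.Series, p_pred: pd.Series) -> float:
    """Utility-based generalization metric of Eq. 9.

    Compares the (Y, X) marginals of the test and predicted joint
    distributions via the kl divergence.
    """
    m_test = f_test.groupby(level=['Y', 'X']).sum()
    m_pred = p_pred.groupby(level=['Y', 'X']).sum().reindex(m_test.index)
    mask = m_test > 0                                    # 0 * log 0 = 0
    return float(np.sum(m_test[mask] * np.log(m_test[mask] / m_pred[mask])))
```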
In Figure 4, we give a box plot for the Utility-based generalization metric. Within the scale of variation of the simulated datasets, we can safely conclude that the natural classifier constructed out of the pur projection of train data performs on average as well as using the train distribution itself, cf. first and last columns. Similar dispersion diagrams over salary classes and box plots for all methods are listed in the Supplementary Material for all census years.
Figure 2: Attributable disparity over salary classes estimated by the prediction on simulated test data of the information projection of train distributions under various conditions. The blue line denotes the estimate \(p_{\text{pred}}(y|\mathbf{s}_{0})\) used as reference in 7. The original data is from 1981.
Figure 4: Utility-based generalization error 9 on simulated test data for the natural classifier associated to the information-projection under different combinations of conditions 2-4.
Figure 3: Natural predictor \(p_{\text{pred}}(y|\mathbf{s})\) of the pur projection of the train distribution evaluated on simulated test data for all profiles in \(\mathbf{S}\). The blue line denotes \(f(y)\) as computed from the empirical distribution of the original data.
### Binary classification
A classification task performed on the adult dataset [https://archive.ics.uci.edu/ml/datasets/adult](https://archive.ics.uci.edu/ml/datasets/adult) provides additional support for the importance of implementing all pur conditions 2-4 in order to achieve demographic Parity. Here, the response variable \(Y\) is the yearly income, which is either high (\(>50k\)) or low (\(\leq 50k\)). As before, the protected attributes \(\mathbf{S}\) are gender and race; the latter is also binarized into white or non-white. Finally, the unprotected attributes \(\mathbf{X}\) are age group \(\in\) {young, middle, senior}, workclass \(\in\) {gov, private, self-employed} and education \(\in\) {dropout, highschool, above-highschool}.
After splitting the original dataset into train and test data, we compute the relevant marginals 2-4 from \(f_{\text{train}}\) in order to derive the information projection of the train distribution \(f_{\text{train}}\) under Parity and under all pur conditions, the p- and pur-projection of \(f_{\text{train}}\), respectively. Following the flow chart in Figure 1, we subsequently generate a wealth of synthetic datasets from the p(ur)-projections in order to train random forest classifiers on them using module [5]. In addition, we provide analogous results for the privacy-relevant maxEnt distribution under pur conditions. All details and code for data generation and training are given in the Supplementary Material.
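A minimal sketch of this training step; we assume scikit-learn for the unresolved module citation [5], and the integer coding of categorical predictors is a simplification (one-hot encoding would work equally well):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OrdinalEncoder

def train_rfc(counts: pd.Series, seed: int = 0):
    """Fit a random forest on a synthetic dataset given as integer
    counts per joint profile (MultiIndex levels 'Y', 'S', 'X').

    Each profile is expanded into that many identical rows before
    fitting the classifier.
    """
    rows = counts.index.repeat(counts.values).to_frame(index=False)
    X, y = rows.drop(columns='Y'), rows['Y']
    enc = OrdinalEncoder()                       # categorical -> integer codes
    clf = RandomForestClassifier(random_state=seed)
    clf.fit(enc.fit_transform(X), y)
    return clf, enc
```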
As evidenced from the first column in Figure 5, random forests trained on synthetic data generated from \(f_{\text{train}}\) without adjusting for Parity reproduce via their predictions the biases in the adult dataset. This means that the sensitive profiles \(\mathbf{s}=(\texttt{female},\texttt{non-white})\), \((\texttt{male},\texttt{non-white})\) and \((\texttt{female},\texttt{white})\) with high income occur much less often than \(\mathbf{s}_{0}=(\texttt{male},\texttt{white})\) with high income. In binary classification, this observation easily translates into a ratio of conditionals as a measure of disparity, i.e. the fraction of individuals with high income in the discriminated groups versus the privileged group \(\mathbf{s}_{0}\); in all three discriminated groups this fraction is below 80% without further adjustments.
A Random Forest Classifier trained on synthetic data generated by the p-projection of \(f_{\text{train}}\) reintroduces discriminatory bias when predicting on test cases - albeit not as strongly as in the unadjusted case. This bias is mediated via discriminatory correlations in test data between unprotected and protected attributes, since the p-projection does not adhere to demographic Realism. On the contrary, Random Forest Classifiers trained on synthetic data generated by both pur distributions, of \(f\) and of \(u\), remain de-biased up to generalization error when evaluated on test data, cf. last two columns of Figure 5. This confirms the necessity and adequacy of the pur scheme.
Figure 5: Disparity ratio estimated by the prediction on the same test data of Random Forest Classifiers trained on synthetic datasets generated from the p(ur)-projection of \(f_{\text{train}}\) and \(u\).
## Appendix A Theory
In the main paper, we have investigated relationships between some multi-label response variable \(Y\) and protected \(\mathbf{S}=S_{1},S_{2},\ldots\) as well as unprotected \(\mathbf{X}=X_{1},X_{2},\ldots\) attributes. When addressing fairness in a model-independent manner, any question unavoidably deals with probabilities over social profiles that live in the Cartesian product of
\[Y\times\mathbf{S}\times\mathbf{X}\equiv Y\times S_{1}\times S_{2}\times \cdots\times X_{1}\times X_{2}\times\ldots\;. \tag{10}\]
To effectively de-bias a given dataset, it suffices to formally handle attributes as categorical variables by imposing marginal constraints on the probability simplex. Hence, we refrain from discussing more general forms of linear constraints.
Primarily, we are interested in producing a demographically fair version of the social phenomenology appearing in a given dataset that still retains phenomenological relevance for present society. Within the model-independent formulation, phenomenology is expressed as a system of linear equations. Starting point of pur methodology are thus three sets of marginal constraints
\[p(y,\mathbf{s}) =\sum_{\mathbf{x}\in\mathbf{X}}p(y,\mathbf{s},\mathbf{x})\stackrel{!}{=}f(y)f(\mathbf{s})\quad, \tag{11}\] \[p(y,\mathbf{x}) =\sum_{\mathbf{s}\in\mathbf{S}}p(y,\mathbf{s},\mathbf{x})\stackrel{!}{=}f(y,\mathbf{x})\quad\text{and}\quad p(\mathbf{s},\mathbf{x})=\sum_{y\in Y}p(y,\mathbf{s},\mathbf{x})\stackrel{!}{=}f(\mathbf{s},\mathbf{x})\]
imposed on joint probability distributions \(p\) over social profiles to achieve demographic Parity, while intuitively incorporating Utility and Realism, respectively. The shorthand notation \(\mathbf{x}\in\mathbf{X}\) means \(x_{1}\in X_{1}\), \(x_{2}\in X_{2},\ldots\) We shall refer to the convex subspace of the probability simplex over \(Y\times\mathbf{S}\times\mathbf{X}\) which incorporates all those distributions that satisfy our aims by pur:
\[p\in\text{\sc pur}\quad\Leftrightarrow\quad p\text{ satisfies (11)}\,.\]
### The optimization program
To illustrate the linear character of phenomenological problem at hand, we choose to arbitrarily enumerate profiles in the Cartesian product via \(enum:Y\times\mathbf{S}\times\mathbf{X}\rightarrow\mathbb{N}\). For compactness, we denote \(\alpha\equiv enum(y,\mathbf{s},\mathbf{x})\in\{1,\ldots,|Y||\mathbf{S}|| \mathbf{X}|\}\) where shorthand notation \(|\mathbf{S}|=|S_{1}||S_{2}|\cdots\) and \(|\mathbf{X}|=|X_{1}||X_{2}|\cdots\) is understood for the cardinalities of protected and unprotected attributes, respectively. Correspondingly, we enumerate marginal profiles by the maps
\[enum_{P}(y,\mathbf{s}) \in\{1,\ldots,|Y||\mathbf{S}|\}\quad,\quad enum_{U}(y,\mathbf{x})\in\{|Y||\mathbf{S}|,\ldots,|Y||\mathbf{S}|+|Y||\mathbf{X}|\}\;,\] \[enum_{R}(\mathbf{s},\mathbf{x}) \in\{|Y||\mathbf{S}|+|Y||\mathbf{X}|,\ldots,|Y||\mathbf{S}|+|Y||\mathbf{X}|+|\mathbf{S}||\mathbf{X}|\equiv D\}\]
which we collectively signify by \(m\in\{1,\ldots,D\}\). A column vector with elements \(f_{m}\) then collects all empirical moments appearing in Eq. (11): \(f(y)f(\mathbf{s})\), \(f(y,\mathbf{x})\) and \(f(\mathbf{s},\mathbf{x})\).
The linear-algebraic character of a marginal sum can be well demonstrated via a binary coefficient matrix \(\mathbf{C}\) operating on probabilities to map them onto marginals. In terms of \(\mathbf{C}\), we can write the pur constraints as a redundant, linear system of \(D\) coupled equations
\[\sum_{\alpha=1}^{M}C_{m,\alpha}\,p_{\alpha}=f_{m} \tag{12}\]
in generically \(M\equiv|Y||\mathbf{S}||\mathbf{X}|\) variables - the probabilities \(p_{\alpha}\in[0,1]\). In this language, we are concerned with non-negative vectors in \(\mathbb{R}^{M}\) - representing distributions on the simplex - that solve the linear system (12). Any elementary row operation on \(\mathbf{C}\) gives a phenomenological problem which is equivalent to (11).
Any structural or sampling zero (due to deterministic or finite-\(N\) behavior, respectively) must be considered separately (Bishop et al., 2007). The former type of zero probabilities is a consequence of logic and natural laws, hence such probabilities can be immediately set to zero. Obviously, any form
of regularization should avoid re-introducing them later. The latter form of zero probabilities could be trickier to uncover. Besides regularization schemes suggested in the main paper, any empirical marginal that vanishes implies due to non-negativity that all probabilities entailed in the marginal sum must be also zero:
\[f_{m}=0\quad\Rightarrow\quad C_{m,\alpha}p_{\alpha}=0\;\;\text{(no sum)}\;. \tag{13}\]
Such constraints reduce both the number of stochastically active profiles (columns of \(\mathbf{C}\)) as well as the number of non-trivial marginal constraints (rows of \(\mathbf{C}\)). We shall refer to the resulting coefficient matrix as the reduced form of \(\mathbf{C}\).
The rank of the reduced coefficient matrix defines the linearly independent constraints implied by the linear problem, independently of the particular parametrization of non-zero marginals. Evidently, the linear system (12) admits at least one non-negative solution, the empirical distribution \(f\) itself. As long as the rank of the reduced coefficient matrix remains smaller than the number of its columns, there exist, by the Rouché-Capelli theorem, infinitely many solutions, which are non-negative by continuity.
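For intuition, a small sketch of how \(\mathbf{C}\) and the number of linearly independent constraints could be assembled numerically; the helper below is hypothetical and assumes, for brevity, a single protected and a single unprotected attribute, each given as a list of admissible values:

```python
import numpy as np
from itertools import product

def coefficient_matrix(Y, S, X) -> np.ndarray:
    """Binary coefficient matrix C of (12), stacking the pur marginal
    maps for a single protected and unprotected attribute."""
    profiles = list(product(Y, S, X))
    rows = ([('P', y, s) for y in Y for s in S] +        # Parity cells (y, s)
            [('U', y, x) for y in Y for x in X] +        # Utility cells (y, x)
            [('R', s, x) for s in S for x in X])         # Realism cells (s, x)
    C = np.zeros((len(rows), len(profiles)), dtype=int)
    for m, (kind, a, b) in enumerate(rows):
        for alpha, (y, s, x) in enumerate(profiles):
            cell = {'P': (y, s), 'U': (y, x), 'R': (s, x)}[kind]
            C[m, alpha] = int(cell == (a, b))
    return C

# Number of linearly independent pur constraints:
# np.linalg.matrix_rank(coefficient_matrix(Y, S, X))
```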
**The information projection** One crucial fact is the existence and uniqueness of a distribution \(q\) which satisfies all phenomenological constraints (12) while staying closest to a sensible reference distribution \(q^{(0)}\). In the context of fair-aware machine learning, we have argued that such a reference could either be a regularized version of the empirical distribution \(f\) or the uniform distribution \(u\) over admissible social profiles. Conventionally, \(q\) is called the information projection of \(q^{(0)}\) on the pur subspace of the simplex, for us in short the pur projection. Mathematically, the pur projection satisfies
\[D_{\text{\sc kl}}(q||q^{(0)})\leq D_{\text{\sc kl}}(p||q^{(0)})\quad\forall\,p \in\text{\sc pur}\;. \tag{14}\]
We emphasize that \(q^{(0)}\) does not need to belong to pur space - and in fact it would not, otherwise our society would be exactly fair, at least from the demographic perspective.
The uniqueness of a minimum of the kl divergence \(D_{\text{\sc kl}}(p||q^{(0)})\) immediately follows in probability space from the convexity of the feasible region of the phenomenological problem (11) at hand, combined with the strict convexity (Cover and Thomas, 2012) of the kl divergence in its first argument, viewed as a function \([0,1]^{M}\to\mathbb{R}_{0}^{+}\). Hence, the kl divergence possesses at most one global minimum in the pur subspace. Regarding joint distributions as column vectors in \([0,1]^{M}\) naturally represents the pur subspace by a non-empty, convex, bounded and closed - hence compact - subset of \([0,1]^{M}\), viz. (12), over which any continuous function necessarily attains a minimum by the extreme value theorem. In total, we conclude that the kl divergence must attain its global minimum in the pur subspace.
### Iterative proportional fitting
In fair-aware applications, we have advocated the use of the ipf algorithm to obtain the information projection that satisfies the empirical marginal constraints, starting from \(p^{(0)}=q^{(0)}\). If \(p_{\alpha}=0\) as either a structural or a sampling zero, then the algorithm has trivially converged in that coordinate already at the first iteration. Using the linear-algebraic characterization, we can succinctly write in terms of the coefficient matrix the update rule for the stochastically interesting probabilities after \(n\) fittings onto all positive marginals \(f_{m}\),
\[p_{\alpha}^{(nD+m)}=p_{\alpha}^{(nD+m-1)}\left(\frac{f_{m}}{p_{m}^{(nD+m-1)}} \right)^{C_{m,\alpha}}\quad\forall\,\alpha=1,...,M\;. \tag{15}\]
Now, we show, following Csiszar (1975), that ipf in this setting converges to the pur projection. First, we need to verify that the iterative algorithm converges to a distribution within the pur subspace. For any probability distribution \(p\) satisfying the given set of linear constraints (12), the relation
\[D_{\text{\sc kl}}(p\parallel p^{(nD+m-1)})=D_{\text{\sc kl}}(p\parallel p^{( nD+m)})+D_{\text{\sc kl}}(p^{(nD+m)}\parallel p^{(nD+m-1)}) \tag{16}\]
holds.3 This relation directly follows from
Footnote 3: Note that all kl divergences remain finite due to \(0\cdot\log 0=0\), as long as the reference distribution does not assume any zero probabilities for profiles that are later observed in the data.
\[\sum_{\alpha=1}^{M}\left[p_{\alpha}-p_{\alpha}^{(nD+m)}\right]\log\frac{p_{ \alpha}^{(nD+m)}}{p_{\alpha}^{(nD+m-1)}}=\log\frac{f_{m}}{p_{m}^{(nD+m-1)}} \sum_{\alpha=1}^{M}C_{m,\alpha}\left[p_{\alpha}-p_{\alpha}^{(nD+m)}\right]=0\;,\]
after substituting update rule (15) whose form automatically ensures that
\[p_{m}^{(nD+m)}=\sum_{\alpha=1}^{M}C_{m,\alpha}p_{\alpha}^{(nD+m)}=f_{m} \tag{17}\]
after fitting onto the \(m\)-th marginal, so that each term vanishes identically in the latter sum given \(p\in\textsc{pur}\).
After \(n\) cycles, it follows from (16) by induction
\[D_{\textsc{kl}}(p\parallel p^{(0)})-D_{\textsc{kl}}(p\parallel p^{(nD)})= \sum_{n^{\prime}=0}^{n-1}\sum_{m=1}^{D}D_{\textsc{kl}}(p^{(n^{\prime}D+m)} \parallel p^{(n^{\prime}D+m-1)})\.\]
Since the difference on the l.h.s. stays finite as \(n\to\infty\), the series of non-negative terms on the r.h.s. must converge as well. By the Cauchy criterion, there must exist for any \(\varepsilon>0\) some \(n^{*}\in\mathbb{N}\) so that
\[D_{\textsc{kl}}(p^{(nD+m)}\parallel p^{(nD+m-1)})<\varepsilon\quad\text{for} \quad n\geq n^{*}\quad\text{and}\quad m=1,...,D\.\]
In turn, this implies that \(p^{(nD+m)}\) induces a Cauchy sequence, thus establishing the existence of a generically real-valued limiting distribution \(q^{\prime}\). Because each \(p^{(nD+m)}\) fulfills the \(m\)-th marginal sum, viz. Eq. (17), cycling through all marginals \(m=1,...,D\) forces the limiting distribution \(q^{\prime}\) to satisfy them all. Consequently, the limiting distribution \(q^{\prime}\) has to belong to pur.
In particular, we conclude after finitely many steps that
\[p^{(nD+m-1)}\approx p^{(nD+m)}\quad\text{for}\quad n\geq n^{*}\quad\text{and} \quad m=1,...,D\]
within the desired tolerance \(\varepsilon\) (dictated e.g. by machine precision), which is obviously of practical importance. In cases where ipf fails to converge sufficiently fast within the desired tolerance, one can resort to its generalizations, to approximations based on gradient descent, or to Newton-based routines (see main text for references therein).
Eventually, it remains to verify that \(q^{\prime}\) is indeed the pur projection. Given two distributions \(p,\tilde{p}\in\textsc{pur}\), it can be inductively shown that
\[\sum_{\alpha=1}^{M}\left[p_{\alpha}-\tilde{p}_{\alpha}\right]\log\frac{p_{ \alpha}^{(nD+m)}}{q_{\alpha}^{(0)}}=0\quad\text{for}\quad n=0,1,2,...\quad \text{and}\quad m=1,...,D. \tag{18}\]
Using ipf update rule (15) we can indeed break the estimate at \(nD+m+1\) into two parts:
\[\sum_{\alpha=1}^{M}\left[p_{\alpha}-\tilde{p}_{\alpha}\right] \log\frac{p_{\alpha}^{(nD+m+1)}}{q_{\alpha}^{(0)}}= \sum_{\alpha=1}^{M}\left[p_{\alpha}-\tilde{p}_{\alpha}\right] \log\frac{p_{\alpha}^{(nD+m)}}{q_{\alpha}^{(0)}}\] \[+\log\frac{f_{m+1}}{p_{m+1}^{(nD+m)}}\sum_{\alpha=1}^{M}C_{m+1,\alpha}\left[p_{\alpha}-\tilde{p}_{\alpha}\right]=0\.\]
The second summation vanishes identically, since both \(p\) and \(\tilde{p}\) reproduce the observed \((m+1)\)-th marginal from \(f\) (otherwise they would not belong to pur). At the same time, the first summation is zero by the inductive assumption. Starting from \(n=0\) and \(m=0\), the vanishing of the first summation is trivial for \(p^{(0)}=q^{(0)}\), thus verifying the induction.
Finally, taking \(n\to\infty\) in Eq. (18) and setting \(\tilde{p}=q^{\prime}\in\textsc{pur}\) (as concluded above) results into
\[\sum_{\alpha=1}^{M}\left[p_{\alpha}-q^{\prime}_{\alpha}\right]\log\frac{q^{ \prime}_{\alpha}}{q_{\alpha}^{(0)}}=0\quad\Leftrightarrow\quad D_{\textsc{kl} }(p\parallel q^{(0)})=D_{\textsc{kl}}(p\parallel q^{\prime})+D_{\textsc{kl} }(q^{\prime}\parallel q^{(0)})\.\]
Since the kl divergence is non-negative, it directly follows that \(D_{\textsc{kl}}(p\parallel q^{(0)})\geq D_{\textsc{kl}}(q^{\prime}\parallel q^{(0)})\). Equation 18 was shown for arbitrary distributions \(p\in\textsc{pur}\). Consequently, we conclude from definition (14) of the information projection and its uniqueness that \(q^{\prime}\) is indeed the pur projection, namely \(q^{\prime}=q\). This formally shows that ipf converges to the information projection of the reference distribution onto the pur subspace.
## Appendix B Applications
### The gender-ethnicity gap
From \(N_{\text{year}}=42\,379\), \(45\,033\), \(37\,144\), \(56\,467\), \(55\,617\), \(53\,857\) and \(53\,790\) census records4 for the years 1981, 1990, 1999, 2007, 2009, 2011 and 2013, respectively, it is straightforward to define categorical predictor attributes. The cumulative statistics of the unprotected predictors over all years are presented in the bar plots of Figure 6, alongside the prevalence of the protected attributes \(\mathbf{S}=(\texttt{gender},\texttt{ethnicity})\) in Figure 7. As response variable \(Y\), we use the hourly wage adjusted for 2010 inflation.
Footnote 4: [https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset](https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset)
As a first step towards demographic fairness, we observe the disparity \(f(y|\mathbf{s})\) over the five sensible salary ranges in Figure 8 for all census years. In principle, we could have chosen another binning for hourly wages \(y\) as the sensible domain \(Y\) for our response variable; the mathematical guarantees of Section A assert the validity of the pur methodology regardless. The purpose of the provided binning is to highlight the difference in salary distribution within the lower and mid ranges, while grouping together into a larger category the higher salaries that are less common in the population.
To illustrate the exact restoration of demographic Parity achieved by pur methods, we present in the same Figure the fair estimate \(q(y|\mathbf{s})\) of the pur projection. Any dataset of size \(\tilde{N}\) that is later sampled from \(q\) is asymptotically anticipated, as \(\tilde{N}\to\infty\), to fully restore Parity in the depicted way. Incidentally, one can observe in Figure 8 the evolution of (dis)parity over the census years. The pur methodology corrects any discriminatory biases in the protected attributes regardless of the specific circumstances described by the yearly data.
Figure 6: Cumulative prevalence (upper left) of age groups over all years. Cumulative prevalence (upper right) of education level. Cumulative prevalence (below) of A-G sectors for sales&trade, finance&professional&hotels/restaurants, education&social/art/other, medical, transport&communications&utilities, agriculture&mining/construction, public administration.
uniformly, given other social, educational and economical aspects. Contrarily, the pur methodology uniquely unveils the joint distribution \(q\) that satisfies \(q(y|\mathbf{s})=q(y)=f(y)\), while maintaining the closest possible alignment with the original data through the incorporation (11) of demographic Utility and Realism. In terms of optimization techniques, the minimization of the kl divergence from the empirical \(f\) (or from the uniform \(u\)) under pur constraints exactly accomplishes the specified objectives; and nothing more (Jaynes, 1968).
By sampling train-test data via \(mult(N_{\text{year}}f_{\text{year,sim}};f_{\text{year}})\) at the observed sample size \(N_{\text{year}}\) from the yearly empirical distribution \(f_{\text{year}}\), we can always deduce the pur projection from (mildly regularized) train data in order to use it as a natural classifier to predict on test data:
\[p_{\text{pred}}(y,\mathbf{s},\mathbf{x})=q_{\text{train}}(y|\mathbf{s}, \mathbf{x})\cdot f_{\text{test}}(\mathbf{s},\mathbf{x}) \tag{19}\]
For \(f_{\text{train}}\) and \(f_{\text{test}}\) we use \(f_{\text{year,sim}}\). Marginalizing distribution (19) over unprotected attributes \(\mathbf{X}\), we then obtain the natural prediction on test data \(p_{\text{pred}}(y|\mathbf{s})\) given sensitive profiles.
Starting from the original \(f_{\text{year}}\) for each census year available in the repository, Figure 9 presents the natural prediction on test data of the pur projection trained on 1 000 datasets of size \(N_{\text{year}}\) that were simulated from \(f_{\text{year}}\). As expected, the prediction for hourly salary \(y\) given the various social profiles \(\mathbf{s}\) fluctuates around \(f_{\text{year}}(y)\) (blue horizontal line) of the original census data. This verifies that, on average, the generalization of the pur methodology is free of discrimination regarding \(\mathbf{S}\). In the early census years, when greater disparities combined with a lower prevalence of higher salary ranges were encountered, the estimated values on the simulated datasets exhibit wider fluctuations.
To better comprehend the logic dictating all phenomenological constraints in (11), we plot in Figures 10 and 11 the attributable disparity w.r.t. \(\mathbf{s}_{0}=(\texttt{male},\texttt{white})\)
\[p_{\text{pred}}(y|\mathbf{s})-p_{\text{pred}}(y|\mathbf{s}_{0})\]
using in Eq. (19) different information projections of the empirical distributions \(f_{\text{train}}\) describing simulated train data. From left to right, we learn all frequencies from \(f_{\text{train}}\), hence the information projection trivially coincides with \(f_{\text{train}}\) itself. Next, we only require demographic Parity (p) when minimizing the kl divergence from \(f_{\text{train}}\). In the third approach, we impose demographic Parity and Utility (pu). Finally, we give the attributable disparity predicted by the pur projection corresponding to Figure 9. Most crucially, pur guarantees that up to minimal fluctuations due to generalizing on test data whose \(f_{\text{test}}(\mathbf{s},\mathbf{x})\) is expected to slightly differ from \(f_{\text{train}}(\mathbf{s},\mathbf{x})\), disparity is not re-introduced after de-biasing the train data. As sample sizes \(N_{\text{train}}\) and \(N_{\text{test}}\) grow, all fluctuations get suppressed eventually preserving demographic Parity.
To further assess generalization capabilities of the suggested de-biasing of train data, Figure 12 lists box-plots in each census year for the Utility error (kl divergence of Utility marginals)
\[\sum_{y\in Y}\sum_{\mathbf{x}\in\mathbf{X}}f_{\text{test}}(y,\mathbf{x})\log \frac{f_{\text{test}}(y,\mathbf{x})}{p_{\text{pred}}(y,\mathbf{x})}\]
Figure 7: Cumulative prevalence of the sensitive profiles.
Figure 8: Conditional probability of hourly wage given sensitive profile estimated from empirical distribution \(f\) (colored) and pur projection \(q\) (gray) over the census years.
Figure 9: Natural prediction (19) on test data by the pur projection of train data in each census year.
of the prediction (19) made by the de-biased distribution. Unprotected social profiles \(\mathbf{x}\in\mathbf{X}\) refer to Figure 6. As anticipated, methods pu and pur, which learn to reproduce demographic Utility from train data, predict the lowest Utility-based kl divergence from test data.
### Adult dataset
The adult dataset5 has been extensively used as a benchmark dataset, also in the context of fair-aware machine learning.
Footnote 5: [https://archive.ics.uci.edu/ml/datasets/adult](https://archive.ics.uci.edu/ml/datasets/adult)
After selecting a sensible subset of predictors \(\mathbf{X}\) and \(\mathbf{S}\), the statistics of the original \(N_{\text{data}}=46\,043\) census records are summarized in Figure 13. Regarding the binary response \(Y\), demographic disparity via the ratio
\[\frac{p(y=\texttt{above 50K}\,|\,\mathbf{s}=\texttt{other})}{p(y=\texttt{above 50K}\,|\,\mathbf{s}_{0}=(\texttt{male},\texttt{white}))} \tag{20}\]
becomes clearly recognizable in Figure 14 when \(p=f_{\text{data}}\). Similar to multi-label classification, we aim at bringing this measure to unity for all sensitive profiles \(\mathbf{s}\).
Figure 10: Attributable disparity on simulated test data predicted by simulated train data as well as by p, pu and pur projections of the train data.
Figure 11: Continuation of 10
Figure 12: Box plots of the kl divergence of predicted from test Utility-related \(Y-\mathbf{X}\) marginal.
Figure 14: Prevalence (left) of income categories in the adult dataset. Disparity measure (20) over ratio of conditional probabilities estimated from empirical distribution.
Figure 13: Prevalence of social profiles for the (un)protected predictors.
To serve this goal, we split the original dataset into train and test data. Subsequently, we minimize, using the machinery of Section A.2, the kl divergence from \(f_{\text{train}}\) under demographic Parity (p) only, as well as under all pur constraints (11). In addition, we minimize the kl divergence from the uniform distribution \(u\) (equivalently, maximize the entropy) under Eq. (11). From the p and pur projections of \(f_{\text{train}}\) and the pur projection of \(u\), many synthetic datasets of comparable sizes \(N_{\text{synthetic}}\sim N_{\text{data}}\) can be easily generated. As argued in the main text, such synthetic data is expected to be fair up to finite-\(N_{\text{data}}\) fluctuations.
To demonstrate the coherence of our approach, we conduct an experiment that is self-fulfilling from the perspective of the theory in Section A.1. In Figure 15, we plot the disparity (20) directly computed from the (re-)sampled datasets. As a control, we utilize the disparity ratio of synthetic datasets directly generated from \(f_{\text{train}}\), which reproduce all the biases in the adult dataset. On the other hand, synthetic data generated by the three methods incorporating demographic Parity, as outlined in the previous paragraph, obeys demographic Parity on average. Deviations from parity attributed to sampling noise do not fall below the 80% threshold. The bias measure fluctuates around 100% in the datasets generated by the de-biased distributions, signifying that there exists no expected bias.
Ultimately, we train Random Forest Classifiers (rfc) on our synthetic datasets in order to let them predict on \(f_{\text{test}}(\mathbf{s},\mathbf{x})\). From the outcome of the rfc prediction, we record in Figure 16 the associated disparity ratio. To facilitate comparison of the de-biasing methods, we keep the parameters of the classification algorithm fixed over training. For the purposes of fair-data generation this is sufficient, as we do not primarily focus here on generic classification benchmarks over ai models.
As expected, training on re-sampled datasets from the biased \(f_{\text{train}}\) results in discriminatory rfcs. Since the p projection of \(f_{\text{train}}\) does not incorporate information about demographic Realism, it is not able to properly handle indirect relationships between the protected attributes \(\mathbf{S}\) and the outcome \(Y\) via the predictors \(\mathbf{X}\). Consequently, an rfc trained on such de-biased data, when given test data that exhibits discriminatory relationships between the predictors, re-introduces the discriminatory correlation between \(\mathbf{S}\) and the response \(Y\). Still, the disparity measure has significantly improved compared to training on the biased datasets. A similar conclusion holds for synthetic datasets generated from the pu projection of \(f_{\text{train}}\).
Based on the theoretical arguments of Section A, rfc trained on synthetic data generated by methods incorporating all pur constraints (11) remain de-biased when evaluated on \(f_{\text{test}}(\mathbf{s},\mathbf{x})\), at least up to generalization errors of the implemented classifier. In particular, this almost optimal preservation
Figure 15: Disparity ratio (20) directly computed from the distribution of generated data.
of demographic Parity in the statistics predicted on biased test data demonstrates the merit of incorporating demographic Realism alongside Utility during training.
## Appendix C Code availability
In an accompanying Python script, we provide auxiliary routines to compute marginal distributions, impose phenomenological constraints and run the ipf algorithm. Our implementation tries to stay generic by solely relying on numpy and pandas modules. Of course, there is room for further optimization depending on the concrete application, e.g. binary vs. multi-label classification, pur projection of \(f\) vs. \(u\) (maxEnt distribution) etc.
|
2301.00031 | Predicting the Students Involvements and its Impacts on Learning
Outcomes Through Online Education During Covid-19 | Everybody knows very well about the COVID-19 pandemic, lockdown, and its
impacts and effects on every field of life, from childhood to senior citizens,
from local to global. The underlying research study focuses on students'
involvement in online classes. This paper assesses the effect of the COVID-19
pandemic on the students' participation and involvement during online classes
compared to the physical classes, cheating behavior, health effects, and study
styles of the students of diverse degrees and age groups. This research study
contributes to the real problems and challenges that students faced during
online classes during the COVID-19 pandemic. The percentages of the students'
responses with different color schemes shown in Fig. 1, Fig. 2, Fig.3(a),
Fig.3(b) and Fig.4 are conveying powerful and meaningful insight. These figures
and the results given in Table I and Table II indicate that most students are
not fully involved during online classes due to technical issues, remote
distance, etc. We applied the t-test here because we do not have exact population
means. We used ttest_1samp with default value 0 to compute the variables'
statistics and p-value. These values are minimal in favor of rejecting the null
or H0 (hypothesis) and accepting the alternate or H1 (hypothesis). It further
means that students' involvement during online classes is severely affected. | Muhammad Nadeem, Faisal Bukhari, Ali Hussain | 2022-12-28T18:11:07Z | http://arxiv.org/abs/2301.00031v1 | ###### Abstract
Everybody knows very well about the COVID-19 pandemic, lockdown, and its impacts and effects on every field of life, from childhood to senior citizens, from local to global. The underlying research study focuses on students' involvement in online classes. This paper assesses the effect of the COVID-19 pandemic on the students' participation and involvement during online classes compared to the physical classes, cheating behavior, health effects, and study styles of the students of diverse degrees and age groups. This research study contributes to the real problems and challenges that students faced during online classes during the COVID-19 pandemic. The percentages of the students' responses with different color schemes shown in Fig. 1, Fig. 2, Fig.3(a), Fig.3(b) and Fig.4 are conveying powerful and meaningful insight. These figures and the results given in Table I and Table II indicate that most students are not fully involved during online classes due to technical issues, remote distance, etc. We applied the t-test here because we do not have exact population means. We used ttest_1samp with default value 0 to compute the variables' statistics and p-value. These values are minimal in favor of rejecting the null or H0 (hypothesis) and accepting the alternate or H1 (hypothesis). It further means that students' involvement during online classes is severely affected.
**Keywords:** COVID-19, e-Learning, Students Involvements, Cheating Concerns of Students, Class Participation.
## I Introduction
The primary motivation for selecting this topic is that the quality of education is directly proportional to the involvement of the students during the lecture. Firstly, as a teacher I found that many students had left the online lecture physically while their logical status still showed them as present. This problem has multiple facets: the respected teacher cannot be confident about the physical presence of the students during online lectures. Secondly, the students face different issues during online lectures, and the impact of these issues is that they lose interest in learning. This research work is a new study focused mainly on the level of student involvement during online lectures. All countries have been attacked by the villainous COVID-19 virus, which has upset every area of life and the economy, from producers to consumers [1]. During the COVID-19 pandemic, the education sector was also severely impacted. The forceful impact of this virus moved students and teachers from the face-to-face system of education to studying and teaching remotely. As a result, educational institutions have been searching for other ways to teach and evaluate students [2]. To keep every student and teacher safe, all educational institutions closed because of citywide, district-wide, and countrywide lockdowns. In such lockdown situations, students and teachers cannot interact face-to-face [3].
To keep teaching going during the COVID-19 pandemic, the World Bank has been actively trying to give financial assistance to the underdeveloped or more affected countries. The ultimate goal of [4] is to provide basic education rights to every student during this viral disease. As far as online learning is concerned, it relies heavily on technology. This technology-dependent way of education becomes a barrier for learners who were not trained to use technology [5]. Similarly, in Pakistan in 2021, all educational institutions were closed, as in the previous year, due to the severity of COVID-19. The Pakistani Ministry of Education and the Higher Education Commission (HEC) also provided online and distance learning ways to teach the students [6]. The HEC provided the design for online policy guidance notes and guidelines for the universities. However, it is a reality that practical work is not being taught during online education. This also demotivated the students and affected their involvement in online lectures [7]. In addition to the problems and issues of students and teachers mentioned above, there are also the problems of admin staff [8].
Therefore, the teachers are not satisfied with the students' involvement in online classes compared to physical classes.
In this connection, to find the answers, this study addresses the following research objectives:
\(\bullet\) To predict why students' involvement is not as high as in physical classes.
\(\bullet\) To find why the students are not interested in attending the full online lecture.
\(\bullet\) To discover the issues faced by the students during online lectures.
\(\bullet\) To compare the impact of taking lectures in the classroom with taking lectures online on the students' learning outcomes.
\(\bullet\) To find out how family members perceive their children's online study.
The outcomes of the research would be relevant at the following levels:
\(\bullet\) Student
\(\bullet\) Teacher
\(\bullet\) Parents
\(\bullet\) Educational Institution
\(\bullet\) Education Ministries
The most crucial stakeholders in the learning process, teachers and students, are aware of the issues and the factors affecting student involvement during an online class. The parents would also notice the difference in attitude and aptitude between studying in the classroom and at home via online education. The educational institutions may send reports to the Ministry of Education and the HEC based on the outcomes regarding students' involvement during online classes. In this way, the ministries can advise the Government either to revise policies and plan a more mature online education system or to reopen the educational institutions as soon as possible.
## II Literature Review
The impacts of COVID-19 on health, society, and education are highlighted in [9]. The researchers divided their research into four different groups: general demographics, information about the daily online routine, assessment of the online learning experience and the level of satisfaction of the students, and evaluation of health due to the change in lifestyle. Cheating during exams is one of the main problems. The research work done by [10] on cheating shows that the strength of an individual's cheating intentions varies according to the achievement setting. Their findings also concluded that the cheating rate was higher in educational settings than in work areas and higher in work sites than in sports venues. Study 1 further suggests that the strengths of individuals' cheating intentions differ across achievement settings.
Intentions to cheat were higher in educational settings than in work settings and higher in work settings than in sports settings. The outcomes of the research [11] concluded that online examinations during COVID-19 increased the cheating ratio, which is unrelated to achievement goals. The studies provided different guidelines to teachers for setting the questions and the time duration for online exams. The researchers of [12] highlighted the levels of students' stress, depressive symptoms, loneliness, the effects of missing social life, and specific worries about their undergraduate studies. They also showed the extreme crises students faced regarding health and research during the lockdown due to COVID-19. The authors reported that they received 212 responses out of 266 from students about the crises suffered. They also recommended different plans for teachers and academic institution administrators to develop online events so that they can prepare newcomers well. The research efforts of [13] uncover the critical problems faced by the students in the present e-learning system. They have also found the factors influencing online learning during COVID-19.
The authors also discussed the impact of students' willingness to study alone in an e-learning environment. In addition, they interviewed 30 students from six universities and conducted meetings with 31 e-learning system experts to find the main problems. They also suggested applicable plans for policymakers, developers, designers, and researchers, enabling them to become better acquainted with the critical aspects of the e-learning system during the COVID-19 pandemic. The researchers of [14] found much dissatisfaction with online study in the COVID-19 situation. The outcomes of this research concluded that dental students were dissatisfied with online teaching during COVID-19. The results of this research indicate that online study is severely disturbing the students' level of involvement. The efforts of the analyses highlighted different aspects of students during online study in the COVID-19 pandemic worldwide. They discussed and evaluated severe issues such as technical and economic issues, psychological problems, and students' fears about the future. These badly affect the students' taste for study and their pace in the learning process. They also offered different plans and suggestions for policymakers and higher authorities to overcome the issues faced by the students and the teachers. The research study by the authors of [15] observed and evaluated the impact of the perception of e-learning crashes. They discovered its impact on psychological upset in students during the COVID-19 pandemic. They concluded that fear of academic loss had become the main reason for mental upset during the issues of online study in the corona disease. They also suggested remedies for policymakers and educational institutions to manage the students' stress during online study. The researchers analyzed different types of challenges faced by students in Pakistani universities [16]. The main obstacles highlighted are economic and technical issues, lack of skills, family support, etc. They also recommended that the Govt. take serious steps to overcome the challenges faced by the students. The outcomes of the research work [17] show that the students do not want to study online. The students expressed during the survey that they were not prepared and trained for such a learning shift. They do not have a non-stop electricity facility and a well-equipped information-technology-based infrastructure at their homes.
## 3 Problem Statement
The goal of this study is to determine the effect of the COVID-19 pandemic on students' involvement during online classes as compared to physical classes, along with cheating behavior, health effects, and study styles, for students of diverse degrees and age groups.
Hypotheses:
H0 = Students' involvement during online classes is the same as in physical classes.
H1 = Students' involvement during online classes is not the same as in physical classes.
## Methodology and Data Collection
The survey methodology was used to accomplish this research. A survey is a method for collecting information from a sample of individuals [18]. The findings of the survey were analyzed through statistical analysis.
* **OBJECTIVES OF THE SURVEY**
To analyze students' level of involvement and its impact on learning outcomes in online lectures during COVID-19.
* **TARGET POPULATION**
Graduate, undergraduate, and intermediate students of universities and colleges.
* **DATA TO BE COLLECTED**
A questionnaire was developed based on the literature review and then circulated online as widely as possible, given the COVID-19 situation, to obtain the maximum number of responses from the target population.
* **MEASUREMENT INSTRUMENT**
The measurement instrument of this survey is a questionnaire. Its questions were closed-ended, answered on a five-point Likert scale defined below:
1. SA (Strongly Agree)
2. A (Agree)
3. U (Undecided)
4. D (Disagree)
5. SD (Strongly Disagree)
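For the descriptive statistics reported later (Table I), the Likert responses are coded numerically. The minimal sketch below shows one such coding, with the mapping direction (SA = 1 through SD = 5) taken from the numbering above; the variable names are illustrative:

```python
# Numeric coding of the five-point Likert scale (SA = 1 ... SD = 5).
LIKERT = {"SA": 1, "A": 2, "U": 3, "D": 4, "SD": 5}

# Example: coding one respondent's answers to three questionnaire items.
raw = ["A", "U", "SD"]
coded = [LIKERT[r] for r in raw]
print(coded)  # -> [2, 3, 5]
```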
This questionnaire was distributed through Google Docs to make it available to the targeted population and to obtain the maximum number of responses.
## IV Design of Research Study
An online survey was performed using Google online forms. The questionnaire for this survey consists of the following subsections:
A. Respondents were requested to provide the following usual demographics:
* **Age**
* **Gender**
* **Area of residence**
B. Information about routine online learning during the shift from face-to-face study to online study in colleges/universities in Pakistan. This information consists of the following:
* **Average time given to online study in hours per day**
* **Quality of, and problems with, the communication medium**
* **Actual involvement in virtual lectures compared to face-to-face lectures in a physical class**
* **Level of interruption by family members during the online study period**
* **Attention and focus level from joining to the end of an online class**
* **Effects of online learning on cheating behavior and students' involvement**
C. Evaluation of students' experience of their level of involvement in virtual classes, to determine overall student involvement in online lectures.
D. Evaluation of health during the change in learning style from the physical class environment provided by the college/university to the virtual class environment provided by parents at home, and of the effects of virtual classes on class involvement.
Figure 1: Getting General Info
Figure 2: Getting General Info
Figure 3: Getting General Info
## V Experimental Results
The means and standard deviations of all the variables in the questionnaire are given below:
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{TABLE I. Means and standard deviations of all the variables} \\ \hline
**S.\#** & **Variable** & **Value** \\ \hline \multicolumn{3}{|c|}{**Mean of all the variables**} \\ \hline \multicolumn{3}{|c|}{SECTION A: DEMOGRAPHICS INFO} \\ \hline
1 & Gender & 1.400000 \\ \hline
2 & Age & 2.028571 \\ \hline
3 & Degree Level & 2.257143 \\ \hline
4 & Area of Residence & 1.628571 \\ \hline \multicolumn{3}{|c|}{SECTION B: GETTING GENERAL INFO} \\ \hline
1 & Time Spent SocialMedia & 2.457143 \\ \hline
2 & LaptopComputerAvail & 1.771429 \\ \hline
3 & SmartPhonesAvail & 1.571429 \\ \hline
4 & Class Participation Level & 2.885714 \\ \hline
5 & CheatingConcern & 1.942857 \\ \hline
6 & StudyLevelIfdonExam & 3.028571 \\ \hline
7 & Lack of IT Skills & 2.485714 \\ \hline
8 & BetterOnlineLearn & 3.600000 \\ \hline \multicolumn{3}{|c|}{SECTION C: GETTING SPECIFIC INFO} \\ \hline
1 & BetterTimeUtilization & 3.342857 \\ \hline
2 & CheatingBehavior & 2.285714 \\ \hline
3 & Unwillingness of Responsibility & 2.114286 \\ \hline
4 & StudentsHesitancyImpact & 2.371429 \\ \hline
5 & TechDifficultyImpact & 2.200000 \\ \hline
6 & HaveNetAccess & 2.514286 \\ \hline
7 & HaveElectricSupply & 3.000000 \\ \hline
8 & InteractionWithTeacher & 3.085714 \\ \hline
9 & ClassParticipationChance & 3.057143 \\ \hline
10 & AttentionAndFocusDisturb & 2.342857 \\ \hline
11 & OnlineAndOfflineEqual & 3.800000 \\ \hline
12 & TechnicalIssueImpact & 1.857143 \\ \hline
13 & EconomicIssueImpact & 2.000000 \\ \hline
14 & TeacherVoiceIssue & 1.971429 \\ \hline
15 & LessTSInteraction & 1.857143 \\ \hline
16 & AcademicLossFearinClassParticipation & 2.028571 \\ \hline \multicolumn{3}{|c|}{**Standard Deviation of all the variables**} \\ \hline
16 & AcademicLossFearinClassParticipation & 0.970588 \\ \hline
17 & LackDeficiencyforNonITSt & 0.747240 \\ \hline \multicolumn{3}{|c|}{SECTION A: DEMOGRAPHICS INFO} \\ \hline
1 & Gender & 0.489898 \\ \hline
2 & Age & 0.376883 \\ \hline
3 & Degree Level & 0.552545 \\ \hline
4 & Area of Residence & 0.483187 \\ \hline \end{tabular}
\begin{table}
\begin{tabular}{|p{0.95\linewidth}|} \hline \multicolumn{1}{|c|}{TTest Outcomes} \\ \hline
Ttest\_1sampResult(statistic=array([16.663333, 31.38507589, 23.81939622, 19.65311057, 13.28871279, 11.51311097, 11.95187108, 13.35711613, 10.35574591, 11.7564528, 13.11574349, 13.84994208, 13.79421828, 12.0044142, 9.78640192, 10.375, 13.09261879, 11.51416659, 13.36038922, 13.88838218, 13.86128572, 9.80912102, 18.24871239, 13.57080199, 12.19631092, 11.18462458, 10.35381536, 12.18694645, 13.15416906, 11.9272551, 11.34226868, 10.643480461, 11.2720409, ...]), pvalue=array([6.29551067e-18, 1.08636299e-26, 8.60469000e-32, 3.85636635e-20, 5.07753770e-15, 2.80770597e-13, 1.00580609e-13, 4.38220401e-15, 4.72979440e-12, 1.58421613e-13, 7.38588068e-15, 1.53985258e-15, 1.73086219e-15, 8.90861219e-14, 2.02190243e-11, 4.50637008e-12, 7.76732461e-15, 2.80069994e-13, 4.35148773e-15, 1.42079500e-15, 1.50369222e-15, 1.90647656e-11, 3.87615125e-19, 2.77545554e-15, 5.73530395e-14, 6.15085107e-13, 4.75281129e-12, 5.85928814e-14, 6.79392859e-15, 1.06477004e-13, 4.21455437e-13, 2.30654437e-12, 4.98568395e-13])) \\ \hline \end{tabular}
\end{table} TABLE II: TTest Outcomes
## VIII Conclusion
To evaluate the correctness and applicability of the hypotheses stated in the problem statement, we used an online survey administered through Google Docs. According to the percentages of survey responses shown in Fig. 1, Fig. 2, Fig. 3(a), Fig. 3(b), and Fig. 4, the availability of a laptop/computer at students' homes was 85.3% and of smartphones 87.5%. Time spent on social media during online lectures was reported by 71% of students, while the level of class participation was 49.6%. Concern about students cheating during online exams was expressed by 70.8%, and 60.8% of students acknowledged that online exams encourage cheating behavior at the expense of genuine interest in the course. Students' unwillingness to take responsibility was found at 73.2%. The impact of technical issues during online classes was 84.4%, problems with the pace of the teacher's voice due to network issues were reported by 79.5%, and the impact of reduced teacher-student interaction was found to be 78%. As per Fig. 4, psychological impacts on learning participation during online classes were found at 73.5%, the stress of loneliness affected students' level of involvement for 68.5%, and anxiety disturbed students' motivation for 77.9%. From these responses, the means and standard deviations are given in Table I, and the t-test outcomes in Table II. Most of the high mean values show that a large share of students are not fully involved during online lectures. Similarly, most of the standard deviations are far from zero, indicating that the data points are spread widely around the mean. Because the true population means are unknown, we applied a one-sample t-test, ttest_1samp(Dataset[:35], 0), with a reference value of 0, to compute the test statistic and p-value for each variable. The results of this test are provided in Table II. The p-values are minimal, which favors rejecting the null hypothesis H0 and accepting the alternative hypothesis H1. This further means that students' involvement during online classes is severely affected.
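As a reproducibility aid, the test described above can be run with SciPy (the `Ttest_1sampResult` output in Table II is SciPy's). The sketch below uses synthetic Likert-coded responses in place of the actual survey data, so the array shapes and names are illustrative only:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the survey data: rows are respondents,
# columns are the 35 Likert-coded questionnaire variables.
rng = np.random.default_rng(0)
dataset = rng.integers(1, 6, size=(120, 35))

# Per-variable means and standard deviations (cf. Table I).
means = dataset.mean(axis=0)
stds = dataset.std(axis=0)

# One-sample t-test against a population mean of 0 (cf. Table II);
# tiny p-values reject H0 in favor of H1.
statistic, pvalue = stats.ttest_1samp(dataset, popmean=0)
print(statistic.round(2))  # per-variable t statistics
print(pvalue)              # per-variable p-values
```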
## Acknowledgment
The authors are very grateful to the management of the server room of the Faculty of Computing and Information Technology (FCIT), University of the Punjab, for forwarding our questionnaire to the students for responses. We are also thankful to the undergraduate and graduate students of FCIT for their warm participation and sincere responses during the survey for this research study.
|
2309.14666 | ZiCo-BC: A Bias Corrected Zero-Shot NAS for Vision Tasks | Zero-Shot Neural Architecture Search (NAS) approaches propose novel
training-free metrics called zero-shot proxies to substantially reduce the
search time compared to the traditional training-based NAS. Despite the success
on image classification, the effectiveness of zero-shot proxies is rarely
evaluated on complex vision tasks such as semantic segmentation and object
detection. Moreover, existing zero-shot proxies are shown to be biased towards
certain model characteristics which restricts their broad applicability. In
this paper, we empirically study the bias of state-of-the-art (SOTA) zero-shot
proxy ZiCo across multiple vision tasks and observe that ZiCo is biased towards
thinner and deeper networks, leading to sub-optimal architectures. To solve the
problem, we propose a novel bias correction on ZiCo, called ZiCo-BC. Our
extensive experiments across various vision tasks (image classification, object
detection and semantic segmentation) show that our approach can successfully
search for architectures with higher accuracy and significantly lower latency
on Samsung Galaxy S10 devices. | Kartikeya Bhardwaj, Hsin-Pai Cheng, Sweta Priyadarshi, Zhuojin Li | 2023-09-26T04:44:40Z | http://arxiv.org/abs/2309.14666v1 | # ZiCo-BC: A Bias Corrected Zero-Shot NAS for Vision Tasks
###### Abstract
Zero-Shot Neural Architecture Search (NAS) approaches propose novel training-free metrics called zero-shot proxies to substantially reduce the search time compared to the traditional training-based NAS. Despite the success on image classification, the effectiveness of zero-shot proxies is rarely evaluated on complex vision tasks such as semantic segmentation and object detection. Moreover, existing zero-shot proxies are shown to be biased towards certain model characteristics which restricts their broad applicability. In this paper, we empirically study the bias of state-of-the-art (SOTA) zero-shot proxy ZiCo across multiple vision tasks and observe that ZiCo is biased towards thinner and deeper networks, leading to sub-optimal architectures. To solve the problem, we propose a novel bias correction on ZiCo, called ZiCo-BC. Our extensive experiments across various vision tasks (image classification, object detection and semantic segmentation) show that our approach can successfully search for architectures with higher accuracy and significantly lower latency on Samsung Galaxy S10 devices.
## 1 Introduction
Neural Architecture Search (NAS) algorithms have been widely used to automatically design highly accurate and efficient model architectures within a given search space. However, such techniques can be very computationally expensive as they require a lot of training resources. To address this limitation, Zero-Shot (Training-Free) NAS [7, 8, 13, 1, 16] has emerged recently, which relies on certain properties of neural network architectures to rank various models during the search without any actual training. As a result, these methods significantly accelerate the model searching process, enabling the identification of high-performing models more efficiently [7, 8, 2, 1, 16].
In this paper, we investigate two significant aspects of zero-shot NAS research. Firstly, despite the abundance of results in tasks such as image classification and various NAS-Benches [21, 11, 5, 15], several existing training-free metrics lack adequate validation on complex vision tasks, including semantic segmentation or object detection [7, 8, 14, 20]. Secondly, training-free metrics can be biased towards specific characteristics in neural architectures [6]. For instance, existing zero-shot NAS proxies can exhibit bias towards various factors, such as cell sizes, skip connections, convolutions, number of parameters, etc. [6].
To address the limitations mentioned above, we explore the following **key questions** centered around a recently introduced training-free metric known as ZiCo (Zero-Shot metric based on Inverse Coefficient of Variation on gradients) [7] which has demonstrated state-of-the-art performance across various NAS-Benches and ImageNet task:
1. Can ZiCo effectively perform _direct search_ in complex vision tasks, such as semantic segmentation or object detection without relying on initial ImageNet search?
2. Are there any biases present in ZiCo? If yes, how can we correct these biases?
Our study demonstrates that ZiCo yields exceptional results when applied to challenging semantic segmentation and object detection tasks, especially for _macro-architecture search_. However, when conducting _micro-architecture search_ with a fixed backbone [12], we observe a bias towards thinner (i.e., lower channel width) and deeper networks in ZiCo. Fig. 1 demonstrates zero-shot NAS results for ImageNet on a broader search space than that considered in [7]. As evident, the original ZiCo score tends to favor architectures with maximum depth and lower widths, leading to a bias towards thinner and deeper networks. This bias can hinder the effectiveness of zero-shot NAS methods across various applications, as it may lead to sub-optimal neural networks with lower accuracy. Therefore, there is a need for bias correction methods that can significantly improve the performance of zero-shot NAS.
Figure 1: Overview: Zero-Shot NAS on ImageNet for EfficientNet type networks. (a) ZiCo found architectures saturate with depth and have lower channel widths, thus showing that ZiCo is biased towards thinner and deeper networks. (b) Bias-Corrected ZiCo-BC metric significantly reduces the depth-width bias and produces better models.
In summary, we make the following **key contributions**: (1) We demonstrate that gradient-based zero-shot proxies like ZiCo are capable of performing _direct_ macro-architecture searches on complex vision tasks such as semantic segmentation and object detection. (2) We propose a new bias correction method for ZiCo, called ZiCo-BC, that significantly enhances the metric's performance and identifies effective models in micro-architecture searches. (3) Finally, we also provide general guidelines on how to scale up this bias correction for ZiCo prior to training individual models, along with an assessment of its current limitations.
## 2 Zero-Shot NAS for Complex Vision Tasks
**Preliminaries.** ZiCo [7] is a zero-shot NAS metric that leverages the inverse coefficient of variation on gradients. This score is used to rank neural network models based on their convergence rate and generalization capacity. Specifically, ZiCo is computed as follows [7]:
\[\text{ZiCo}=\sum_{l=1}^{D}\text{log}\left(\sum_{\theta_{l}}\frac{\mathbb{E} [\mathbf{\nabla}_{\theta_{l}}]}{\sqrt{\text{Var}(\mathbf{\nabla}_{\theta_{l}})}} \right), \tag{1}\]
where \(D\) is the total number of layers in the network, \(\theta_{l}\) represents each parameter in layer \(l\in\{1,2,3,\dots,D\}\), and \(\mathbf{\nabla}_{\theta_{l}}\) is the gradient of the loss w.r.t. each parameter \(\theta_{l}\). The expected value and standard deviation are computed across multiple batches of input data at initialization. That is, no parameters are updated across batches; only forward and backward passes are used to compute gradient statistics. It was theoretically shown in [7] that these gradient statistics are linked to training convergence and generalization.
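As an illustration, the following is a minimal PyTorch sketch of Eq. (1); it is our paraphrase rather than the authors' released code. It treats each parameter tensor as one "layer", takes absolute gradient values so the mean is non-negative (an implementation choice not fixed by the equation as printed), and assumes generic `model`, `loss_fn`, and `batches` objects:

```python
import torch

def zico_score(model, loss_fn, batches, eps=1e-8):
    """Sketch of Eq. (1): gradient statistics from a few forward/backward
    passes at initialization; no parameters are updated between batches."""
    grads = {name: [] for name, p in model.named_parameters() if p.requires_grad}
    for x, y in batches:  # at least two batches, so the variance is defined
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, p in model.named_parameters():
            if name in grads and p.grad is not None:
                grads[name].append(p.grad.detach().abs().flatten())
    score = 0.0
    for name, g in grads.items():
        if len(g) < 2:
            continue
        g = torch.stack(g)                      # (num_batches, num_params)
        mean, std = g.mean(dim=0), g.std(dim=0)
        # Sum of inverse coefficients of variation, then log, per layer.
        score += torch.log((mean / (std + eps)).sum() + eps).item()
    return score
```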
In this paper, we will discuss two kinds of zero-shot NAS paradigms using ZiCo: **(1) Macro-Architecture Search** where we search over multiple types of backbones and heads; we call this macro-architecture search since it significantly impacts the topology of neural architectures, and **(2) Micro-Architecture Search** where the backbone type is fixed and a same type of block repeats throughout the network; here, we search over channel counts, number of block repeats, kernel sizes, type of convolution (regular, depthwise, group), expansion ratios, etc. [12].
**Does ZiCo work on complex vision tasks like semantic segmentation? A Direct Macro-Architecture Search.** Despite extensive theoretical contributions and empirical validation across several NAS-Benches and ImageNet, the effectiveness of ZiCo was not evaluated for _direct search_ over downstream computer vision tasks, i.e., without any prior ImageNet search. Hence, in this section, we exploit ZiCo to directly search for hardware-efficient Semantic Segmentation networks in a wide search space containing multiple types of backbones and segmentation heads.
We construct a complex search space using the backbone and head from the HRNet [19] architecture as well as backbones and heads from a recent _manually-designed_ hardware-efficient semantic segmentation network called FFNet [10]; HRNet and FFNet are highly different architectures. For the head, the search covers either the HRNet-Head [19] or the Up-B-Head from FFNet [10]. Finally, we introduced individual options for each backbone (e.g., depth, width, etc.), leading to a large search space.
Our objective is to exploit ZiCo to automatically design a significantly better network than the manual FFNet which was designed for mobile-scale AI accelerators. To this end, we consider the Cityscapes segmentation task and conduct NSGA-2 evolutionary search [4] over the above search space with hardware latency in the loop on the Samsung Galaxy S10 mobile platform. For ZiCo computation, we used the same loss as the one used to train FFNet [10].
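The NSGA-2 machinery itself is standard; the sketch below (an illustration, not the authors' implementation) shows only the non-dominated filtering at the heart of such a latency-in-the-loop search, with each candidate scored by the pair (\(-\)ZiCo, measured latency), both to be minimized:

```python
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated candidates; objs has shape (n, m) with all
    m objectives to be minimized, e.g. columns [-zico_score, latency_ms]."""
    objs = np.asarray(objs, dtype=float)
    keep = np.ones(len(objs), dtype=bool)
    for i in range(len(objs)):
        for j in range(len(objs)):
            # j dominates i: no worse everywhere, strictly better somewhere.
            if i != j and np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

# Example: candidate 1 dominates candidate 0; candidates 1 and 2 trade off.
print(pareto_front([[-10.0, 9.0], [-12.0, 8.5], [-11.0, 7.0]]))  # -> [1 2]
```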
Table 1 demonstrates the search results. As evident, even though the HRNet architecture has the fewest parameters and MACs, the HRNet backbone is not friendly to constrained mobile-scale hardware and shows about \(3.4\times\) higher latency than the manual FFNet and our automatically found ZiCo-based model, both of which achieve much higher accuracy. Clearly, the ZiCo-based model significantly outperforms the manual FFNet, with \(1\%\) higher mIoU on Cityscapes segmentation at similar latency.
## 3 Proposed Bias Correction
As mentioned earlier, we observed a bias in ZiCo towards thinner and deeper networks for micro-architecture search, i.e., when similar blocks repeat themselves throughout the fixed backbone. More precisely, in equation (1), the metric sums over the number of layers in the network. Consequently, the score grows linearly in number of layers, whereas the gradient statistics grow logarithmically. For networks with repeating blocks, this can lead to deeper
\begin{table}
\begin{tabular}{|l||c|c||c|c|} \hline Model & \#Params & \#MACs & Latency (ms) & mIoU \\ \hline HRNet [19] & **3.94M** & **77.89**G & 28.80 (\(1\times\)) & 77.0\% \\ \hline FFNet [10] & 27.49M & 96.37G & **8.35 (\(3.4\times\))** & 79.7\% (\(+2.7\%\)) \\ \hline \hline
**ZiCo model** & 31.92M & 96.14G & **8.48 (\(3.4\times\))** & **80.7\% (\(+3.7\%\))** \\ \hline \end{tabular}
\end{table}
Table 1: Direct **Macro-Architecture Search** via ZiCo on Cityscapes Semantic Segmentation
models achieving higher ZiCo scores even if they have significantly lower width. However, thinner and deeper networks may not always achieve optimal accuracy. As shown in Bhardwaj et al. [2], width plays a fundamental role in a model's expressive power, and due to the bias towards thinner and deeper networks, zero-shot metrics can become less effective at identifying optimal architectures. Therefore, due to this bias, ZiCo can favor deeper and thinner models over potentially more optimal ones during the evolutionary search. In the rest of this paper, we will discuss how to correct this depth-width bias in ZiCo.
**Bias Correction for Micro-Architecture Search.** To rectify the bias in ZiCo or other training-free NAS metrics that may exhibit a preference for thinner and deeper networks, we introduce a bias correction term. This term can be applied to modify the original metric definition. The proposed bias correction equation takes into account the _feature map resolution_ and _channel width_ of the network at different layers. For ZiCo, the equation is as follows:
\[\begin{split}\text{ZiCo-BC}&=\sum_{l=1}^{D}\log\left(\left[\frac{H_{l}W_{l}}{\sqrt{C_{l}}}\right]^{-\beta}\sum_{\theta_{l}}\frac{\mathbb{E}[\boldsymbol{\nabla}_{\theta_{l}}]}{\sqrt{\text{Var}(\boldsymbol{\nabla}_{\theta_{l}})}}\right)\\&=\sum_{l=1}^{D}\log\left(\sum_{\theta_{l}}\frac{\mathbb{E}[\boldsymbol{\nabla}_{\theta_{l}}]}{\sqrt{\text{Var}(\boldsymbol{\nabla}_{\theta_{l}})}}\right)-\beta\sum_{l=1}^{D}\log\left(\frac{H_{l}W_{l}}{\sqrt{C_{l}}}\right)\\&=\text{ZiCo}-\beta\sum_{l=1}^{D}\log\left(\frac{H_{l}W_{l}}{\sqrt{C_{l}}}\right)\end{split}\tag{2}\]
Here, \(H_{l},W_{l},C_{l}\) are height, width of the feature map, and number of channels in layer \(l\), respectively. Hyperparameter \(\beta\) controls the amount of depth-width penalty applied to the score. Setting \(\beta=0\) automatically yields the original ZiCo score. Clearly, if the model becomes deeper or if it has fewer channels, the penalty increases, thus, discouraging thinner and deeper models during the evolutionary search. Of note, other bias correction methods may be possible. We comment on this briefly in Section 5.
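Given a plain ZiCo value and the per-layer feature-map shapes of a candidate, the correction itself reduces to subtracting the penalty term. A minimal sketch of Eq. (2), with `shapes` a hypothetical list of \((H_{l},W_{l},C_{l})\) tuples:

```python
import math

def zico_bc(zico, shapes, beta=1.0):
    """Eq. (2): ZiCo minus the depth-width penalty; beta=0 recovers ZiCo.
    shapes: one (H_l, W_l, C_l) tuple per layer of the candidate network."""
    penalty = sum(math.log(h * w / math.sqrt(c)) for h, w, c in shapes)
    return zico - beta * penalty
```

Deeper networks contribute more (typically positive) penalty terms, and lower channel counts enlarge each term, so both are discouraged unless the raw ZiCo gain outweighs the penalty.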
## 4 Experiments
We first conduct a NATS-Bench-SSS study on CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets [3] to evaluate the correlations of the proposed ZiCo-BC score with accuracy and compare them to the original ZiCo score. We then evaluate the proposed bias correction for three computer vision applications: (1) ImageNet Image Classification, (2) MS COCO Object Detection, and (3) Cityscapes Semantic Segmentation. We use ResNet-based search space for semantic segmentation, and EfficientNet-based search space for ImageNet image classification as well as object detection. Next, we present more details on the micro-architecture search space and evolutionary search settings as well as performance of ZiCo-BC for each task.
### NAS-Bench Correlations
Firstly, we evaluate the proposed bias correction on NAS benchmark NATS-Bench [11]. Specifically, we focus on the 32768 neural architectures with varying channel sizes from "size search space" (NATS-Bench-SSS), which resembles our micro-architecture search setting. Following the experimental setup in ZiCo, we compute the correlation coefficients (i.e., Kendall's \(\tau\) and Spearman's \(\rho\)) between the zero-shot proxy and the test accuracy. As evident from Table 2, the bias correction improves the correlation score of ZiCo across all three datasets, indicating that the ZiCo-BC score can be a more representative proxy of test accuracy for ranking candidates during a micro-architecture search.
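These rank correlations can be reproduced with standard tooling; in the sketch below, synthetic arrays stand in for the per-architecture proxy scores and test accuracies of the benchmark:

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins: one proxy score and one test accuracy per
# architecture (NATS-Bench-SSS provides 32768 such pairs per dataset).
rng = np.random.default_rng(0)
accuracy = rng.uniform(70, 95, size=32768)
proxy = accuracy + rng.normal(scale=3.0, size=accuracy.size)  # noisy proxy

kt, _ = stats.kendalltau(proxy, accuracy)
rho, _ = stats.spearmanr(proxy, accuracy)
print(f"Kendall tau = {kt:.2f}, Spearman rho = {rho:.2f}")
```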
### Classification and Object Detection
We conduct classification and object detection tasks on EfficientNet and EfficientDet, respectively [17, 18]. As these two networks share very similar backbones, we build the search space defined in previous studies [12, 17]. Specifically, the search space includes: (1) kernel size (\(3\times 3\) or \(5\times 5\)), (2) channel size, (3) the number of operation repeats per block, and (4) regular convolution or group convolution (with group size of 32). It is worth noting that one significant difference in our search space, as compared to existing works, is the omission of the squeeze-and-excite operation as it is not very hardware-friendly. When incorporating ZiCo-BC into the search process, we utilize the widely employed cross-entropy loss [17] for classification and the focal loss [18] for object detection, respectively.
**Classification.** EfficientNet [17] has proven to be a powerful architecture, achieving remarkable results in various computer vision tasks. To showcase the effectiveness of our proposed method, we employ ZiCo-BC to conduct a search for EfficientNet style models on the challenging ImageNet-1k dataset. By applying ZiCo-BC to this architecture, we aim to further enhance its performance and explore architectures that strike a balance between model depth and width. The model discovered by ZiCo-BC achieves an impressive 11% reduction in latency without sacrificing accuracy, as demonstrated in Table 3. In contrast, the original ZiCo score loses about \(0.9\%\) accuracy for similar latency.
**Object Detection.** EfficientDet [18] is a family of architectures renowned for high accuracy and efficiency in object detection, accommodating various resource constraints. The EfficientDet-D0 architecture comprises three
\begin{table}
\begin{tabular}{|l||c|c||c|c||c|c|} \hline Dataset & \multicolumn{2}{c||}{CIFAR-10} & \multicolumn{2}{c||}{CIFAR-100} & \multicolumn{2}{c|}{ImageNet-16-120} \\ \hline \hline Proxy & KT & SPR & KT & SPR & KT & SPR \\ \hline ZiCo & 0.72 & 0.91 & 0.56 & 0.76 & 0.73 & 0.90 \\ \hline \hline
**ZiCo-BC** & **0.78** & **0.94** & **0.60** & **0.79** & **0.79** & **0.94** \\ \hline \end{tabular}
\end{table}
Table 2: Correlation coefficients on NATS-Bench-SSS (KT: Kendall's \(\tau\); SPR: Spearman's \(\rho\))
components: (1) an EfficientNet backbone network, (2) a weighted bi-directional feature pyramid network (BiFPN), and (3) a class and box network for predicting object class and bounding box information. Notably, the backbone of EfficientDet-D0 contributes 78% of the FLOPs and 92% of the parameters in the entire architecture. Hence, our primary focus lies in searching for a backbone network that enhances the latency of the architecture without sacrificing accuracy. Table 3 displays the results on MS COCO 2017 [9] after training for 300 epochs. Our searched architecture achieves a remarkable 29% latency reduction while maintaining even better accuracy compared to EfficientDet-D0.
### Semantic Segmentation
In this section, we evaluate the bias correction ability of the proposed ZiCo-BC score in the context of micro-architecture search on Cityscapes dataset. Unlike Section 2, where we conducted a macro-architecture search across HRNet [19] and FFNet [10], here we specifically test ZiCo-BC on the FFNet backbone in conjunction with the FFNet-Head. The FFNet backbone is based on the ResNet architecture and consists of four stages. Our micro-architecture search space consists of (1) number of residual blocks in each stage, (2) number of output channels for each stage; each residual block in the stage has the same number of channels, and (3) type of convolution, i.e., Group Convolution with a group size of 32/64/128 channels whichever is larger, or a Regular Convolution. All kernel sizes are fixed to \(3\times 3\). To search over a large space, we significantly vary the width and depth of the candidate networks around the baseline FFNet [10] configuration. Overall, this search space consists of more than 44M unique architectures.
Table 4 shows that ZiCo-BC finds a model with similar mIoU to FFNet [10] but 11% lower latency on the mobile platform. In contrast, networks found via the original ZiCo metric lose nearly 1% mIoU for about 16% lower latency. The ZiCo-BC model has 74 residual blocks with higher channel widths, thus correcting the bias towards deeper and thinner networks. Improving the latency of FFNet [10] by 11% (with similar mIoU) is highly non-trivial as it is already designed for mobile devices.
## 5 General Guidelines and Limitations
**General Guidelines.** One crucial aspect of the proposed bias correction is determining the value of the main hyperparameter \(\beta\), which influences the penalty on depth and width. An appropriate \(\beta\) value can be obtained by analyzing the architecture of Pareto models found during evolutionary search. For instance, in Fig. 1(a), most models exhibit maximum depth with low width, indicating the presence of bias. To address this, we gradually increase \(\beta\) to encourage more diverse architectures with intermediate depth. For classification and object detection tasks, we used \(\beta=1\). On the other hand, we used \(\beta=2\) for semantic segmentation zero-shot micro-architecture search.
**Limitations.** Two limitations of our current bias correction are identified. Firstly, the bias correction applies solely to micro-architecture search with repeated blocks. For macro-architecture search, gradient statistics for different backbones (topologies) can result in a similar score even if they have highly different numbers of layers1. Therefore, a common penalty to backbones with different depths would treat the shallower backbone unfairly. Hence, further research is needed to come up with a universal bias correction (if needed) for macro-architecture search. Secondly, the current bias correction assumes a fixed input size for candidate models, disregarding the potential gain in accuracy for various vision tasks by increasing the image size. Hence, future bias correction methods that maintain the overall score with increasing input size are of interest.
Footnote 1: We observed this between FFNet and HRNet candidates: For networks with similar trainability, HRNet-based models with nearly half the layers often achieve comparable ZiCo to deeper FFNet-based models.
## 6 Conclusion
In this paper, we explore the effectiveness of zero-shot NAS on complex vision tasks beyond traditional image classification. Firstly, we validate an existing proxy, called ZiCo for _macro-architecture search_ in semantic segmentation. The ZiCo-based network achieves a remarkable \(3.4\times\) speed up over HRNet through automatic search, with 1% higher mIoU compared to a manually designed model of similar latency. Next, we identify biases in ZiCo for _micro-architecture search_ and propose ZiCo-BC, a novel bias correction method for depth-width biases in zero-shot metrics. Finally, we demonstrate that our bias correction enables ZiCo-BC to consistently achieve \(11\)-\(30\%\) lower latency and \(0.2\)-\(1.1\%\) higher accuracy compared to the models found via the original ZiCo for micro-architecture search on image classification, object detection, and segmentation.
\begin{table}
\begin{tabular}{|l||c|c|c|c|} \hline Model & \#Params & \#MACs & Latency (ms) & mIoU \\ \hline FFNet & 27.49M & 96.37G & 8.35 & 79.70\% \\ \hline ZiCo & **21.80M** & **75.89G** & **7.02**\((-16\%)\) & 78.62\((-1.08\%)\) \\ \hline \hline
**ZiCo-BC** & 23.28M & 79.85G & 7.44 \((-11\%)\) & **79.71\(\%\)\((+0.01\%)\)** \\ \hline \end{tabular}
\end{table}
Table 4: Direct **Micro-Architecture Search** via ZiCo and ZiCo-BC on Cityscapes Semantic Segmentation
\begin{table}
\begin{tabular}{|l||c||c|c|} \hline Model & Approach & Latency (ms) & Accuracy/mAP \\ \hline \multirow{3}{*}{EfficientNet} & Scaling & 0.90 & 77.7\% \\ \cline{2-4} & ZiCo & 0.82\((-8\%)\) & 76.8\% \((-0.9\%)\) \\ \cline{2-4} & **ZiCo-BC** & **0.80**\((-11\%)\) & **77.7\%**\((0\%)\) \\ \hline \hline \multirow{2}{*}{EfficientDet} & Scaling & 2.792 & 33.6 \\ \cline{2-4} & **ZiCo-BC** & **1.974**\((-29\%)\) & 33.8 \((+0.2\%)\) \\ \hline \end{tabular}
\end{table}
Table 3: Direct **Micro-Architecture Search** via ZiCo and ZiCo-BC on EfficientNet/Det search space |
2309.13109 | DESI Complete Calibration of the Color-Redshift Relation (DC3R2):
Results from early DESI data | We present initial results from the Dark Energy Spectroscopic Instrument
(DESI) Complete Calibration of the Color-Redshift Relation (DC3R2) secondary
target survey. Our analysis uses 230k galaxies that overlap with KiDS-VIKING
$ugriZYJHK_s$ photometry to calibrate the color-redshift relation and to inform
photometric redshift (photo-z) inference methods of future weak lensing
surveys. Together with Emission Line Galaxies (ELGs), Luminous Red Galaxies
(LRGs), and the Bright Galaxy Survey (BGS) that provide samples of
complementary color, the DC3R2 targets help DESI to span 56% of the color space
visible to Euclid and LSST with high confidence spectroscopic redshifts. The
effects of spectroscopic completeness and quality are explored, as well as
systematic uncertainties introduced with the use of common Self Organizing Maps
trained on different photometry than the analysis sample. We further examine
the dependence of redshift on magnitude at fixed color, important for the use
of bright galaxy spectra to calibrate redshifts in a fainter photometric galaxy
sample. We find that noise in the KiDS-VIKING photometry introduces a dominant,
apparent magnitude dependence of redshift at fixed color, which indicates a
need for carefully chosen deep drilling fields, and survey simulation to model
this effect for future weak lensing surveys. | J. McCullough, D. Gruen, A. Amon, A. Roodman, D. Masters, A. Raichoor, D. Schlegel, R. Canning, F. J. Castander, J. DeRose, R. Miquel, J. Myles, J. A. Newman, A. Slosar, J. Speagle, M. J. Wilson, J. Aguilar, S. Ahlen, S. Bailey, D. Brooks, T. Claybaugh, S. Cole, K. Dawson, A. de la Macorra, P. Doel, J. E. Forero-Romero, S. Gontcho A Gontcho, J. Guy, R. Kehoe, A. Kremin, M. Landriau, L. Le Guillou, M. Levi, M. Manera, P. Martini, A. Meisner, J. Moustakas, J. Nie, W. J. Percival, C. Poppett, F. Prada, M. Rezaie, G. Rossi, E. Sanchez, H. Seo, G. Tarlé, B. A. Weaver, Z. Zhou, H. Zou | 2023-09-22T18:00:01Z | http://arxiv.org/abs/2309.13109v2 | # DESI Complete Calibration of the Color-Redshift Relation (DC3R2): Results from early DESI data
###### Abstract
We present initial results from the Dark Energy Spectroscopic Instrument (DESI) Complete Calibration of the Color-Redshift Relation (DC3R2) secondary target survey. Our analysis uses 230k galaxies that overlap with KiDS-VIKING \(ugriZYJHK_{s}\) photometry to calibrate the color-redshift relation and to inform photometric redshift (photo-\(z\)) inference methods of future weak lensing surveys. Together with Emission Line Galaxies (ELGs), Luminous Red Galaxies (LRGs), and the Bright Galaxy Survey (BGS) that provide samples of complementary color, the DC3R2 targets help DESI to span 56% of the color space visible to Euclid and LSST with high confidence spectroscopic redshifts. The effects of spectroscopic completeness and quality are explored, as well as systematic uncertainties introduced with the use of common Self Organizing Maps trained on different photometry than the analysis sample. We further examine the dependence of redshift on magnitude at fixed color, important for the use of bright galaxy spectra to calibrate redshifts in a fainter photometric galaxy sample. We find that noise in the KiDS-VIKING photometry introduces a dominant, apparent magnitude dependence of redshift at fixed color, which indicates a need for carefully chosen deep drilling fields, and survey simulation to model this effect for future weak lensing surveys.
keywords: galaxies: distances and redshifts - gravitational lensing: weak - techniques: spectroscopic - surveys
## 1 Introduction
Modern cosmology relies on our ability to observe galaxies as tracers of structure formation and the expansion of the Universe. To do this we must map their on-sky positions and, crucially, their positions in all three dimensions. Measuring the redshift, \(z\), for galaxies outside of our local group is a good proxy for this third dimension, because the expansion of the universe reddens the light from a galaxy in a way that is monotonic with distance. The most accurate way to measure a galaxy's redshift is via the detection of prominent emission or absorption features with sufficient signal-to-noise ratio in the spectral energy distribution (SED). With this, the observed wavelengths of these features are compared to their known rest frame wavelengths to provide a redshift measurement. Spectroscopic surveys obtain these many-wavelength observations very successfully via slit masks on the telescope (e.g. on Keck, Oke et al., 1995), and more recently with integral field units (IFUs) (e.g. the Hobby Eberly Telescope Dark Energy Experiment, Gebhardt et al., 2021), and massively multiplexed instruments with independent optical fibers capable of taking many spectra at once (e.g. DESI, Flaugher and Bebek, 2014; DESI Collaboration et al., 2022). The majority of galaxies in our Universe are relatively faint and distant, and such faint galaxies are more feasibly observed photometrically - in optical, near-infrared, and infrared filters - rather than spectroscopically, due to limits in exposure time. Imaging surveys estimate the distances to galaxies from their colors, measured as the ratio of photometric fluxes in different bandpass filters. Although photometric redshifts are more readily attainable, these estimates are less accurate than spectroscopic redshifts.
Cosmological measurements from wide imaging surveys, like weak gravitational lensing or galaxy clustering, rely on geometric information. The most recent imaging surveys, such as the Dark Energy Survey (DES, Sevilla-Noarbe et al., 2021), Subaru Hyper Suprime-Cam (HSC, Aihara et al., 2022) and the Kilo-Degree Survey (KiDS, Kuijken, K. et al., 2019), span a significant fraction of the sky. Accurate estimates of the redshifts of these galaxy samples based on limited information (e.g. photometry rather than spectroscopy) are required to obtain unbiased cosmological constraints. Indeed, one of the foremost difficulties facing imaging surveys for cosmology lies in calibrating the redshift probability distribution (Myles et al., 2021). Typically, redshift distributions are estimated and calibrated for an ensemble of galaxies, \(n(z)\), and the uncertainty is modelled as an
error on the mean redshift of the distribution in the cosmological analysis, as cosmological parameters are most sensitive to shifts in the mean-\(z\) (see e.g. Amon et al., 2022; Li et al., 2023; Dalal et al., 2023; van den Busch et al., 2022). Calibrating the redshift distribution for an entire ensemble has unique difficulties compared to doing the same for individual galaxies, though individual redshifts have a multitude of other science applications. As ensemble redshift distributions are of the foremost interest to weak lensing cosmology, ensemble calibration is the focus of this paper.
With observations of fluxes in only a few broad bands, the underlying challenge for determining an accurate redshift is a degeneracy between a galaxy's spectral phenotype and redshift (see Newman and Gruen, 2022 for a review). A variety of approaches have historically been used to determine galaxy redshifts. Quiescent elliptical galaxies have a consistent drop in light emission at 4000A, which enables accurate photo-\(z\)s with this so-called red sequence of galaxies, which allows the bounding of the redshift between different band passes and is particularly useful for identifying cluster members and lensing galaxy samples (e.g. the RedMaPPer algorithm in Rykoff et al., 2014, 2016). While early-type, passive galaxies benefit from having very similar SEDs, this is not necessarily true for many other galaxy types. Template fitting methods can rely on either spectroscopically informed templates or semi-analytic models that are typically constructed with stellar populations (e.g. Brammer et al., 2008). An empirical variation of this is a Principal Component Analysis (PCA), but in essence both methods fit the data to a linear combination of templates (see Salvato et al., 2018 for a review). Regardless of the method for redshift estimation, redshift-type degeneracy is an irreducible problem when working with photometry (see discussion of methods that break age-mass-redshift degeneracy in e.g. Wang et al., 2023).
A correct model for the galaxy population (i.e. the mix of templates and their luminosity functions at each redshift, or a large, fully representative reference sample of galaxies with known spectroscopic redshift) would be required to determine correct \(n(z)\) for photometric samples despite redshift-type degeneracies. Both of these solutions at present appear unfeasible, though gains in forward modeling the distribution have been made (e.g. Alsing et al., 2023). The issue can be greatly reduced by observing in additional bands that break degeneracies, which motivates a filter set that goes beyond the standard optical broad bands (Buchs et al., 2019; Wright et al., 2019). With this increased wavelength coverage, however, the color space that the photometric observations occupy becomes high-dimensional. The challenge is to associate like-spectroscopic galaxies with photometric galaxies efficiently across that high-dimensional space.
Self Organizing Maps (SOMs; Kohonen, 2004) can serve as a useful tool to subdivide the high-dimensional colour space into a set of SOM cells efficiently, tracing density and coherent galaxy types, as demonstrated in Masters et al. (2015) and utilized in Hildebrandt et al. (2020) and Myles et al. (2021), among others. The use of SOMs for redshift calibration demands spectroscopic galaxy samples that completely populate this space, thereby providing accurate galaxy redshifts for any combination of colors. Of critical note is that spectroscopic redshifts are typically obtained only for a specific and limited selection of galaxies, which must be weighted to become representative of the photometric sample (Hartley et al., 2020). The resulting calibration problem is that, without a complete spectroscopic sample that fully populates the photometric color space, derived redshift distributions are subject to bias and uncertainty given incomplete or under-sampled spectroscopic observations.
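To make the construction concrete, the following is a minimal, self-contained numpy sketch of SOM training on galaxy colors. It is an illustration only: the grid size, decay schedules, and Euclidean metric are arbitrary choices here, not those of the maps discussed in this paper:

```python
import numpy as np

def train_som(colors, grid=(20, 30), n_iter=20000, seed=0):
    """Map d-dimensional galaxy colors (array of shape (n, d)) onto a 2D
    grid of cells so that nearby cells collect similar spectral types."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, colors.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    cells = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    for t in range(n_iter):
        x = colors[rng.integers(len(colors))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching cell
        frac = t / n_iter
        sigma = (1 - frac) * max(h, w) / 4 + frac * 1.0     # shrinking neighborhood
        lr = (1 - frac) * 0.5 + frac * 0.05                 # decaying learning rate
        d2 = ((cells - cells[bmu]) ** 2).sum(axis=1)
        weights += lr * np.exp(-d2 / (2 * sigma**2))[:, None] * (x - weights)
    return weights

def assign(colors, weights):
    """Each galaxy's cell index is the argmin distance over cell weights."""
    return np.argmin(((colors[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
```

In the calibration itself, the redshift distribution of each cell is then estimated from the spectroscopic galaxies assigned to it.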
Some 5,000 spectroscopic redshifts have recently been determined by the Complete Calibration of the Colour-Redshift Relation (C3R2) project in order to populate this color space and thereby calibrate the color-redshift relation (Masters et al., 2017, 2019; Stanford et al., 2021). Beyond fully populating each SOM cell, a larger multiplicity of spectroscopic galaxies per SOM cell will be needed to meet requirements for future deep imaging surveys like Euclid (Amendola et al., 2013) and Rubin Observatory (Ivezic et al., 2019; Collaboration et al., 2009). This can be achieved via statistical characterization of broad or bimodal redshift distributions in SOM cells, though a well constructed SOM minimizes these features where possible. Additionally, as upcoming imaging surveys are deeper than most spectroscopic samples, it is essential that the magnitude dependence of the redshift at a fixed color, \(dz/dm\), is understood. Previous examinations of this measurement have shown a small \(dz/dm\) trend at fixed color (Masters et al., 2019).
In this paper, we present the DESI Complete Calibration of the Color-Redshift Relation (DC3R2) secondary target survey. This survey supplements the existing spectroscopic samples used for photometric redshift calibration by both populating SOM cells that were previously unfilled, and by increasing the multiplicity per cell. We use DC3R2 to revisit the magnitude dependence of the redshift at a fixed color with improved statistics, including apparently brighter galaxies, and study trends in \(dz/dm\) as a function of color. Sec. 2 introduces the survey data used. In Sec. 3 we describe the construction of the DC3R2 sample. We explore how it calibrates the color-redshift relationship alongside DESI main survey targets in Sec. 4. Finally, using this new resource, we examine the magnitude dependence of redshift at fixed color in the presence of observational effects like photometric scatter (Sec. 5.2).
## 2 Data
DC3R2 is a secondary target program on the Dark Energy Spectroscopic Instrument (DESI) (Sec. 2.1) that obtained spectroscopic redshifts for galaxies that were targeted with KiDS-VIKING (KV) photometry (Sec. 2.2). Additional spectroscopic and imaging surveys were used to validate this analysis, as listed in Sec. 2.3.
### DESI
The Dark Energy Spectroscopic Instrument (DESI) is a ground-based spectroscopic experiment installed at the 4m Mayall telescope (DESI Collaboration et al., 2016; DESI Collaboration et al., 2022). The DESI instrument is sensitive from 360-980 nm, with 5,000 robotically actuated fibers that are capable of taking spectra simultaneously. Over five years, it aims to measure spectra of 40 million galaxies and quasars that will aid in examination of baryon acoustic oscillations (BAO), the growth of structure through redshift-space distortions, and dark energy (DESI Collaboration et al., 2016). While these are DESI's primary goals, the survey is uniquely capable of providing a multitude of spectroscopic redshifts that have far-reaching uses. Here we exploit its ability to calibrate photometric redshifts, in line with the needs of weak gravitational lensing experiments.
The DESI main survey targets relevant to this paper are divided into three galaxy types: Luminous Red Galaxies (LRGs; Zhou et al., 2023), Emission Line Galaxies (ELGs; Raichoor et al., 2023), and the Bright Galaxy Survey (BGS; Hahn et al., 2022). The methodology for fitting models to the obtained spectra is explored in Bailey et al. (2023), and the validation of these techniques is performed in Lan et al. (2023). The observations that produced data for this analysis come from December 14, 2020 through July 9, 2021, which span a combination of the 'One-Percent Survey' (OPS), Survey Validation (SV), and the
beginning of the main survey operations (Y1) - the internal _Fuji_ and _Guadalupe_ data releases, respectively (DESI Collaboration, 2023). SV data has already been made available in the DESI Early Data Release (EDR) (DESI Collaboration et al., 2023). The selections of main targets were subject to minute changes between SV1 and the OPS as well as Y1 operations, as detailed in the respective paper for each sample. The selection footprint for DC3R2 was modified after the OPS and before Y1 to make use of newly released photometry. Additionally, on May 12th and 13th, 2021, a small dedicated tile program was run for DC3R2 targets with high priority. The selection of DC3R2 targets is outlined in greater detail in Section 3.1, and the optimization for dedicated tile fibers is discussed in Appendix C.
### KiDS-VIKING
The Kilo Degree Survey (KiDS) is a large scale optical _ugri_ imaging survey with OmegaCAM on the VLT Survey Telescope (VST) at the ESO Paranal Observatory (Arnaboldi et al., 2000; Kuijken et al., 2015). Its footprint is overlapped by a near-infrared _ZYJHK\({}_{s}\)_ VIRCAM photometric survey with the 4m Visible and Infrared Survey Telescope for Astronomy (VISTA), the VISTA Kilo-degree Infrared Galaxy Public Survey (VIKING). The two surveys both span more than a thousand square degrees in the _ugriZYJHK\({}_{s}\)_ bands, to a depth of \(r\leq 25\). Their complementary wavelength coverage has been processed jointly to create the 9-band KiDS-VIKING (KV) survey (Wright et al., 2019). This data set provides dereddened, multi-band color information for an overlapping patch of the DESI SV and Y1 footprint, allowing us to crucially associate DESI spectroscopic redshifts with a high-dimensional color space.
In this analysis, specifically, we make use of the KiDS-450 data release, with observations spanning the Galaxy And Mass Assembly (GAMA) fields (Driver et al., 2011), denoted G09, G12, and G15 and shown as the shaded green regions in Fig. 1 (de Jong et al., 2017). Through the beginning of main survey operations, we also used the KiDS-1000 release, shown in the blue footprint of Fig. 1 (Wright et al., 2019; Kannawadi et al., 2019; Hildebrandt et al., 2020).
### Other Spectroscopic Surveys
A multitude of spectroscopic surveys have been undertaken in the COSMOS field. Several of these were utilized in this analysis for validation. Among these surveys are the original C3R2 effort (Masters et al., 2017, 2019; Stanford et al., 2021) and the master spectroscopic catalog from the COSMOS collaboration (M. Salvato, in prep). The latter includes observations from a variety of wavelength regimes and spectral resolutions across many instruments (VLT VIMOS, VUDS, Keck MOSFIRE, DEIMOS, Magellan IMACS, Subaru FMOS, and many others; Lilly et al., 2007; Le Fevre et al., 2015; Casey et al., 2017; Hasinger et al., 2018; Kriek et al., 2015; Kartaltepe et al., 2010; Silverman et al., 2015; Trump et al., 2007; Balogh et al., 2014). For this project, these samples were limited to confident redshifts only. Furthermore, they were matched to the Masters et al. (2017) photometry and assigned to the original C3R2 SOM in order to validate our color-redshift relation.
## 3 Methods and Observations
From December 2020 through July 2021, DESI observed 328k main survey targets (ELGs, LRGs, BGS) in the KV footprint, 51,177 of which were also selected as DC3R2, plus 1,216 targets that were exclusively DC3R2. The following sections break these numbers down into the DC3R2 secondary targets during SV (Sec. 3.1), Y1 (Sec. 3.1.2), and our dedicated tiles (Sec. 3.1.1), as well as the selected main survey targets that overlap. After completeness cuts we find that we have a sample of 230.7k galaxies that occupy the color space over the redshift range \(0.0<z<1.55\).
In the following sections, we describe our procedure for target selection (Sec. 3.1) through the three major phases of the DC3R2 program: the dedicated tiles (Sec. 3.1.1) and the SV and Y1 spare fibers (Sec. 3.1.2). Additionally, we detail the observations (Sec. 3.2), redshift completeness (Sec. 3.2.2), and the weighting schema used to provide a representative sample of redshifts for calibration purposes (Sec. 3.2.3).
### Target Selection
The primary DC3R2 targets span the GAMA-9h, 12h, and 15h equatorial fields (Driver et al., 2011) for a total of 300 sq. degrees, where sufficiently deep \(ugriZYJHK_{s}\) color information was available. We have matched the GAMA fields as reported by KiDS-VIKING (KV) DR3 (Wright et al., 2019) to the Dark Energy Camera Legacy Survey (DECaLS; Dey et al., 2019) Data Release 9 photometry in order to constrain color alongside DESI Z-band fiber fluxes, using the closest match within 1". In order to calibrate the complete color-redshift relation, DC3R2 aimed to populate, with multiplicity, as much of the color space described by the Masters et al. (2017) Self Organizing Map (SOM) as is reasonably attainable with DESI. This map, trained on narrow-band COSMOS photometry, spans the approximate depth and breadth of color of future weak lensing surveys like Euclid and LSST; appropriately, it is the subject of several redshift search programs that also aim to span this color space with spectroscopy (Saglia et al., 2022; Masters et al., 2015), which further encourages our use of it. This SOM is then corrected to KiDS-VIKING colors according to the procedure described in Section 4.1, to allow for assignment of our alternate photometric bands. The abundances of galaxies observed across the color space for these KiDS-450 fields (G09, G12, G15) can be found in Fig. 2a.
Galaxies are selected by color, determined by their assignment to cells in this map, as well as by fiber magnitude. We select these colors on mean cell magnitude, to take into account the visibility of the targets to the fiber, and on a high probability of redshift \(<1.6\) (i.e. the [OII] feature lies within the DESI wavelength window). This allows us to target 3,692 cells from the C3R2 SOM (\(\approx\)33%) with a minimum of 3 galaxies per cell, spread in magnitude. These targets were chosen to enable a first quantification and rejection of faulty photometry and redshifts, and to examine the trend of redshift with magnitude at fixed color, sorely required for accurate calibration efforts in the future (Masters et al., 2019). Of the 1800 targets per sq. deg., DC3R2 required only a small random subset (36 per sq. deg., or 2%, for our initial request of 3 galaxies per cell), making it an ideal candidate for a secondary target program with very flexible fiber assignment across a high density of targets.
Explicit choices for the DC3R2 target selection procedure follow from the joint KiDS-VIKING-DECaLS matched catalog:
* To boost redshift completeness of our targets, individual galaxies that would take four visits or fewer (for an SNR of one in the optical) were selected from the catalog as MAG_FIBER_Z \(<\) 22.10.
* Potential targets were assigned to the Masters et al. (2017) SOM, after transforming the latter to the KiDS-VIKING \(ugriZYJHK_{s}\) color-space (see Sec. 4.1).
* An envelope of SOM cells (not contiguous) that contain galaxies bright enough to achieve redshift success with two visits (mean(MAG_FIBER_Z) \(<\) 21.88 and \(>\)95% probability of redshift \(<\) 1.6) was selected. The probability of redshift \(<1.6\) for a given cell was obtained from the COSMOS15 (Laigle et al., 2016) photometric redshifts for each cell as in Masters et al. (2015).
* The full target catalog was defined as the concatenation of all cells within the above envelope that each had at minimum 3 viable detections (believed to have observable redshifts in four or fewer DESI visits). From this we prioritized the brightest in each cell (as a lever arm for the measurement in Sec. 5.2) and multiplicity as described for each component of our survey in the respective sections below; a sketch of this selection logic follows this list.
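To make the cuts above concrete, here is a minimal, hypothetical pandas sketch of the cell-selection logic; the column names (`SOM_CELL`, `MAG_FIBER_Z`, `P_Z_LT_1P6`) are illustrative stand-ins, with the per-cell probability taken from COSMOS15 photometric redshifts as described above:

```python
import pandas as pd

def build_target_catalog(cat: pd.DataFrame, min_per_cell: int = 3) -> pd.DataFrame:
    """cat: KiDS-VIKING x DECaLS matched catalog with a SOM cell assignment,
    a Z-band fiber magnitude, and a per-cell probability of z < 1.6."""
    # Individual galaxies reachable in <= 4 DESI visits.
    viable = cat[cat["MAG_FIBER_Z"] < 22.10].copy()

    # Cells bright enough for two visits, with a secure z < 1.6 population
    # and enough viable detections.
    per_cell = viable.groupby("SOM_CELL").agg(
        mean_mag=("MAG_FIBER_Z", "mean"),
        n=("MAG_FIBER_Z", "size"),
        p_lowz=("P_Z_LT_1P6", "first"),
    )
    good_cells = per_cell[
        (per_cell["mean_mag"] < 21.88)
        & (per_cell["p_lowz"] > 0.95)
        & (per_cell["n"] >= min_per_cell)
    ].index

    targets = viable[viable["SOM_CELL"].isin(good_cells)]
    # Keep the brightest galaxies per cell, as a lever arm on dz/dm.
    return targets.sort_values("MAG_FIBER_Z").groupby("SOM_CELL", sort=False).head(min_per_cell)
```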
The initial spare fiber catalog for the SV stage had approximately 970k targets available that met the above criteria. This target list was modified for Y1 to include the larger KiDS-1000 footprint (Kuijken, K. et al., 2019) (see 3.1.2).
#### 3.1.1 Dedicated Tiles
DC3R2 has observed two dedicated tiles, i.e. instrument pointings, at the end of the One Percent Survey within the survey validation period. We chose the two pointings to be centered at 217.5 degrees and 221.0 degrees RA on the celestial equator, within the GAMA 15h field (Driver et al., 2011) and also within the Hyper Suprime-Cam Subaru Strategic Program (HSC) survey area (Aihara et al., 2018). Each tile was observed with two different fiber configurations, one for a single 30 minute exposure aiming at targets down to \(Z_{\rm fiber}<21.5\), and one for a total of 90 minutes of exposure time with targets down to \(Z_{\rm fiber}<22.10\). A description of how targets were chosen for each of these pointings can be found in Appendix C. The dedicated tiles are identified in Fig. 1 by black circles.
#### 3.1.2 Spare Fibers
The DC3R2 targets outside of the dedicated tiles were observed in two phases, within the SV and Y1 periods. During SV, DC3R2 took observations for 44,272 targets, while during the onset of Y1, 6,905 DC3R2 objects were observed.
The initial spare fiber (_secondary target_) observations are described by the selections made in Sec. 3.1 on the KV DR3 (KiDS-450) photometry released at the time.
The spare fiber target selection and prioritization for the Y1 observing period was modified from the initial SV phase to draw targets from the then publicly available larger area KiDS-1000 (DR4) photometric catalog (Kuijken, K. et al., 2019), as seen in the shaded blue regions of Fig. 1. The strategy was altered to prioritize the brightest galaxy in each cell alongside a randomly drawn target within a Z-magnitude of 21.88. This lower magnitude cut ensures a redshift is obtainable from two visits, which differs from the maximum of four visits in SV, seldom available for spare targets. The random draw traces the magnitude distribution of the cells overall. This pair-wise selection per cell has the benefit of extending the magnitude leverage on \(dz/dm\), as more bright galaxies are available in the larger footprint, and achieving high redshift success in survey spare fiber mode. The same criterion as in Sec. 3.1 allowed us to select from 5074 cells in the C3R2 SOM for the Y1 fiber proposal. The blue objects in Fig. 1 are successful matches back to this wider catalog during Y1, sometimes retroactively from main survey targets in the SV phase. The DC3R2 spare fiber program extended into Y1, and while some of that data is analyzed here, we expect an additional 315k targets within our footprint to become available with the Y1 data release, with around 28k of those objects being DC3R2 exclusive targets and the remainder coming from overlap with DESI main survey target classes.
### Observations
The positions of all observed DC3R2 targets on the sky are depicted in Fig. 1, including the dedicated tiles.
#### 3.2.1 Redshift determination
Following pixel level calibration and extraction of a one-dimensional spectrum, the DESI redshifting pipeline Redrock forward models the observed data from a basis consisting of a set of template SEDs for each target class (Guy et al., 2023; Bailey et al., 2023). This decomposition is done at each point in a fine grid of redshift values \(z\), and the \(\chi^{2}\) of the difference between the observed data and the best linear combination of templates is determined. A minimum in \(\chi^{2}(z)\) indicates a potentially optimal redshift-template solution. The \(z\) that globally minimizes \(\chi^{2}\) is taken as the redshift of the observed object.
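The grid fit can be illustrated schematically. The sketch below is not Redrock itself: the template basis, wavelength grid, and noise model are placeholder assumptions, but the \(\chi^{2}(z)\) minimization follows the logic described above.

```python
import numpy as np

def fit_redshift(wave_obs, flux_obs, ivar, rest_templates, z_grid):
    """rest_templates(wave_rest) -> (n_templates, n_wave) basis evaluated at
    rest-frame wavelengths; returns (z_best, chi2 as a function of z)."""
    w = np.sqrt(ivar)
    chi2 = np.empty(z_grid.size)
    for i, z in enumerate(z_grid):
        basis = rest_templates(wave_obs / (1.0 + z))   # redshift the basis
        A = (basis * w).T                              # weighted design matrix
        b = flux_obs * w
        coeff, *_ = np.linalg.lstsq(A, b, rcond=None)  # best linear combination
        chi2[i] = np.sum((b - A @ coeff) ** 2)
    return z_grid[np.argmin(chi2)], chi2
```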
#### 3.2.2 Completeness
The primary metric of redshift confidence in the DESI survey is \(\Delta\chi^{2}\), the difference between the \(\chi^{2}\) of the best fit redshift and template combination and the second lowest local minimum of \(\chi^{2}\) as a function of redshift. The larger the \(\Delta\chi^{2}\), the more confident one can be that the first redshift is correct (Bailey et al., 2023). As each main survey sample selection (Hahn et al., 2022; Zhou et al., 2023; Raichoor et al., 2023) has different characteristic SED features, the ability of Redrock to fit a given spectrum depends on the type of galaxy. The DESI visual inspection efforts have determined minimum \(\Delta\chi^{2}\) selections and additional quality cuts that maximize purity and completeness for each main survey sample (Lan et al., 2023; Guy et al., 2023).
For this analysis, DC3R2 galaxies that we consider to have high confidence redshifts must pass the same completeness criteria as the BGS sample, i.e. \(\Delta\chi^{2}>40\). This is the strictest among the \(\Delta\chi^{2}\) cuts for DESI main survey samples and was chosen to account for the broad variation of SED types in our target selection, informed by the visual inspection done for the BGS sample in Lan et al. (2023). ELGs and LRGs have significantly lower \(\Delta\chi^{2}\) requirements in the DESI main survey, which combined with further quality cuts give high fractions of confident redshifts, as Redrock and its template sets have been optimized to obtain good fits to these SEDs. Galaxies with different SEDs may need more conservative metrics to minimize outliers, hence our adoption of the \(\Delta\chi^{2}>40\) cut.
For the sake of future survey efforts to calibrate the color-redshift relation, in Fig. 4 we report the exposure times required by DESI, for targets of given eight-band colors in the SOM, to achieve \(\Delta\chi^{2}>40\) for objects scaled to a magnitude of MAG_GAAP_Z = 21.0, following the discussion in Appendix D.
For the fiber configurations with 90-minute exposure time over the DC3R2 dedicated tiles, 98.1% of targets achieved the DC3R2 criterion for success with no flagged warnings (ZWARN \(==0\)). For the galaxies in the two single 30-minute exposures, 93.4% meet this redshift success criterion. The overall redshift success rate for the dedicated tile targets is 95.8%. For the entire DC3R2 ancillary program that also made use of spare fibers (footprint depicted in Fig. 1), the success rate is 93.4% for unique DC3R2 targets and 94.7% for shared targets, with a total of 13,270 targets observed during the dedicated program. While many of these overlap with main survey targets, the higher DC3R2 priority in fiber selection ensures that the selection of these objects is less biased in color towards the main survey selections than similar overlaps in the spare fiber fields.
#### 3.2.3 Sample Weighting Scheme
One of the primary methods of photometric redshift calibration in weak lensing is to appropriately reweight a sample of known redshifts to accurately approximate the redshift distribution of a much larger source galaxy sample. This requires a full understanding of the selection acting on both samples. For the true redshift sample this selection can be intentional, in the definition of a spectroscopic target catalog, or unintentional, due to incomplete redshift recovery among the targeted galaxies. Only with this selection accounted for, and with the weak lensing source galaxy sample selection applied in addition, can the reweighted spectroscopic sample be representative and the estimated redshift distribution therefore be unbiased.
For this survey, we must take into account the overlap of DC3R2 targets with DESI main survey targets. The selection of the latter is not based on KiDS-VIKING colors but on DESI Legacy Survey \(g,r,z,W1,W2\) photometry (Myers et al., 2023). The observation prioritization of the DESI main survey targets may affect relative abundances of certain redshifts within a given SOM cell. This is accounted for by reweighting according to different subselections of our targets, namely:
1. **Dedicated tiles**: Our dedicated tile sampling method over a small area will be unaffected by DESI main survey oversampling. It is our _fiducial_ sample for this reason.
2. **DC3R2 exclusive targets**: These are targets observed during regular DESI operations that are only observed as targets of our dedicated program, i.e. that do not meet DESI main survey target selection criteria. This sample will be potentially biased _away_ from the types and redshifts of galaxies observed by DESI's main surveys in SOM cells that contain a mix of both.
3. **DESI main survey targets**: DESI main survey targets that overlap with our selection will be more likely to be observed, due to their high priority in comparison with DC3R2 exclusive targets, in SOM cells that contain a mix of both. Each class of main survey targets has different magnitude and color cuts than the DC3R2 targets. SV and Y1 main survey targets have different color selections that have evolved over the course of the survey and are weighted accordingly.

Figure 1: Footprint of spectroscopic redshifts used in this analysis, including main survey targets, depicting objects observed in SV (green points) and through the first 56 days of Y1 operations (blue points). The combined bright and faint footprint for the DC3R2 dedicated tiles is outlined in black. The broad KiDS-VIKING-N field provides the \(ugriZYJHK_{s}\) photometry to match to DESI data, which is inclusive of KV DR4 (shaded blue) and DR3 (shaded green) (Kuijken, K. et al., 2019). Particularly relevant for SV, prior to the KiDS-1000 release, the majority of our targets lie in the GAMA fields (red).

Figure 2: The distributions of targeting, spectroscopy, and redshift completeness across the color-space.
We can re-weight these subselections to leverage the full DESI survey beyond our fiducial sample, (1). The basic ruleset for usage and weighting is described here, with the aim of providing a reliable and unbiased sample of spectroscopic galaxies over as many cells as possible, with a large magnitude span within available cells. We want each cell to have a collection of representative galaxies that are corrected for overabundances caused by preferential observations of galaxies of a particular SED type. For all DESI-observed galaxies that have KiDS-VIKING photometry in a given cell:
* We split the sample in the cell into categories of (2) and (3) by observation flags DESI_TARGET and SCND_TARGET.
* For those in (2), we apply a cut on redshift confidence and warning flags (DELTACHI2 \(>\) 40, ZWARN = 0).
* For those in (3), we check how complete the _targeting_ is for main survey targets in color space by comparing the full main survey target catalog occupation for the cell (with only color cuts applied) to the full KiDS-VIKING occupation of that cell. If the targeting completeness (the fraction of photometric targets that are also main survey targets) is below 90 per cent for a given color, we exclude the main targets from this cell and skip the following steps. This effectively ensures that the color cut on the main survey sample does not significantly bisect a SOM cell. As the period covered by the observations spanned several iterations of LRG/ELG cuts (see Myers et al., 2023), only cells where both the stricter Y1 cuts and the looser SV cuts did not bisect the cell were retained.
* For those in (1), (2) and (3), we check the redshift completeness of the sample type in the given cell. This is done on the subset of galaxies in a cell that pass the specific survey target selection (e.g. the DECaLS based magnitude and color cuts, or the DC3R2 magnitude selection). If among this subset within a cell the redshift completeness is less than 90 per cent, we exclude all of these galaxies, for both DC3R2 and main survey samples, from the weighting scheme due to the risk of a significant redshift dependent selection bias, and do not include low confidence targets in the next step. See a simple visual description of this in Fig. 2c.
* We weight the samples of (2) and (3) high confidence redshifts so that both samples contribute proportionally to their abundances in the targeting catalog, within a given cell and bin of \(z_{\rm fiber}\). If only one sample type exists in a given magnitude/cell bin, no such re-weighting is done.

Figure 3: Median DESI redshifts compared to previously measured and curated spectroscopy in the COSMOS field reveals substantial gains in the low redshift regions and overall multiplicity. Note that these color-spaces are not identical and this comparison is a generalized one, as per the discussions in Sec. 4.1 and Appendix B.

Figure 4: Median DESI exposure times (in minutes) necessary for each SOM cell, for targets with a DECam fiber magnitude of \(z_{\rm fiber}=21\), to reach a high redshift confidence \(\Delta\chi^{2}=40\), as per Appendix D. Full exposure time quantiles and median colors for each cell are reported in the attached data products. BGS targets are excluded from this plot, due to their typically much larger sky brightness. The color bar is chosen to approximately separate passive (red/yellow) from emission line galaxies (blue).
This scheme can be written explicitly as follows. For a given magnitude/color bin \(b\) within the \(g,r,z\) magnitude cube of a given SOM cell \(c\), let \(N^{a}\) denote the number of actual (or _observed_) targets and \(N^{p}\) the number of potential targets. Further, denote the unique sample (ELG, LRG, BGS (B), BGS (F), DC3R2 spare fibers, DC3R2 dedicated tiles) of a target by the index \(s\). Where each class of galaxy lives in the SOM is delineated by color in Fig. 6, where we can see that the samples are highly complementary. The weight of the redshift of an object towards the estimated redshift distribution of a cell \(c\) depends on \(s\) and \(b\) and can be factorized as
\[w(b,c,s)=w_{b}(b,c)\times w_{s}(b,c,s) \tag{1}\]
where \(w_{b}(b,c)\) is the weight of the magnitude bin towards the full sample in the cell and \(w_{s}(b,c,s)\) is the weight of the sample that the galaxy belongs to for the given cell and magnitude bin. Individual samples in magnitude-color bins require these weights to properly reproduce relevant target abundances in our final reported redshift distributions. An illustration of how samples populate a choice of magnitude bins is visible for the SV3 selections in Appendix A, Fig. A2 where a more thorough description of magnitude bins can be found. For objects that are in more than one sample (typically only true for some DC3R2 targets with main survey samples), the largest of its sample weights is used.
\[w_{b}(b,c)=\frac{N_{b}^{p}/N_{c}^{p}}{N_{b}^{a}/N_{c}^{a}} \tag{2}\]
Here \(N_{c}^{a,p}\) and \(N_{b}^{a,p}\) refer to the number of actual or potential targets in cell \(c\) or in magnitude bin \(b\) of that cell, respectively. Similarly, the reweighting of samples among that bin and cell is given by
\[w_{s}(b,c,s)=\frac{N_{s,b}^{a}/N_{b}^{a}}{N_{s,b}(\mathrm{spec})/N_{b}(\mathrm{spec})} \tag{3}\]
No reweighting is done for objects in (1) aside from the redshift completeness check, as these targets are not expected to be biased within a cell. The distribution of weights, normalized for each SOM cell such that \(\sum_{s\in c}\sum_{b\in c}w(b,c,s)=N_{c}(\mathrm{spec})\), is depicted in Fig. 5 for the main survey samples. The purpose of these weights, to construct less biased \(p(z|c)\) distributions, is demonstrated in Sec. 5.1, where they produce noticeable shifts in the inferred redshift distributions.
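As an illustration, the weights of Eqs. (2) and (3) can be computed per (cell, bin, sample) with simple counting. This is a minimal numpy sketch with assumed integer-coded arrays (`_p`: potential targets, `_a`: actual targets, `_z`: confident redshifts); the actual pipeline bookkeeping (flags, magnitude binning) is omitted.

```python
import numpy as np

def w_b(cell_p, bin_p, cell_a, bin_a, c, b):
    """Eq. (2): potential vs. actual bin fractions within cell c."""
    frac_p = np.sum((cell_p == c) & (bin_p == b)) / np.sum(cell_p == c)
    frac_a = np.sum((cell_a == c) & (bin_a == b)) / np.sum(cell_a == c)
    return frac_p / frac_a

def w_s(cell_a, bin_a, samp_a, cell_z, bin_z, samp_z, c, b, s):
    """Eq. (3): sample fraction among actual targets vs. among spectra."""
    frac_a = (np.sum((cell_a == c) & (bin_a == b) & (samp_a == s))
              / np.sum((cell_a == c) & (bin_a == b)))
    frac_z = (np.sum((cell_z == c) & (bin_z == b) & (samp_z == s))
              / np.sum((cell_z == c) & (bin_z == b)))
    return frac_a / frac_z
```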
## 4 The Color-Redshift Relation
In this section we describe the color-space used in this analysis (Sec. 4.1) and discuss the characterization of redshift distributions in that space given DESI observations (Sec. 4.2).
### Description of Color-Space
This analysis makes use of the SOM first developed in Masters et al. (2015), which includes galaxies to Euclid-like depth, \(i<24.5\) (AB), or approximately 98% complete at \(i=25.3\) from Laigle et al. (2016), in _ugrizYJH_, with the modifications and added \(K_{s}\)-band from Masters et al. (2017). Though our map exists in KiDS-VIKING photometric colors (evaluated on best-fit narrowband SEDs in the original SOM) and not the photometry that Masters et al. (2015) was trained on, making use of a transformed version of this SOM has the advantage that individual cells will be populated by galaxy populations of approximately the same true colors. Thus the deep samples collected by the C3R2 and DC3R2 surveys will have similar redshifts for similar cells and can help inform future redshift searches. We can conclude that our map, though photometrically different, will span a similar color space to Masters et al. (2017), and therefore Euclid and LSST, by construction. For more elaboration on the differences between the SOM in this analysis and that of Masters et al. (2017), see Appendix B, and for the exact colors of our map see Appendix A.
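For reference, assigning a galaxy to its best-matching cell in such a map reduces to a nearest-neighbor search in color space. The sketch below assumes a precomputed table of mean cell colors and uses an inverse-variance-weighted distance; it is illustrative rather than the exact C3R2 assignment procedure.

```python
import numpy as np

def assign_cell(colors, color_err, cell_colors):
    """colors, color_err: (8,) observed colors and their errors
    (e.g. u-g ... H-Ks); cell_colors: (n_cells, 8) mean SOM cell colors.
    Returns the index of the best-matching cell."""
    chi2 = np.sum(((cell_colors - colors) / color_err) ** 2, axis=1)
    return int(np.argmin(chi2))
```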
### Characterization of Color-Space Using DESI
With DESI targets mapped to \(ugrizYJHK_{s}\) color-space, we have a powerful statistical sample that constrains the high-dimensional color-redshift relation for future cosmological analyses. With more than 230k galaxies covering 56% of the map, we present redshifts that span \(0<z<1.5\) with high multiplicity. DESI provides both secure spectroscopic redshifts and _Redrock_ SED models fitted to each galaxy, which allows us to explore the evolution of galaxy type across the map, as seen in Fig. 6. The colors of the SOM broadly have smooth transitions, as do the redshifts, as seen in Fig. 3a, which depicts the median redshift across this map. Galaxies with alike colors tend to have alike redshifts, except where sharp delineating features express an innate degeneracy, where small shifts in color can have large consequences for redshift inference.
The gains that have been made in constraining the \(p(z|c)\) can be seen in Fig. 3, where we examine the additional color coverage of bright, low-redshift cells that were under-observed in the COSMOS field in the past. The red and magenta regions in Fig. 3c depict regions where DESI dominates the redshift information in the map by spectroscopic count. Cosmic variance and simple under-sampling can distort the broader color-redshift relation. DC3R2 and DESI together provide much higher confidence in the distributions of redshifts at these magnitudes and colors. This is exemplified in Fig. 7, which depicts the number of cells in the map at each redshift with spectroscopic coverage. We see a dominant DESI + KV contribution in the BGS and ELG redshift regimes (the two peaks), which sharply drops off at \(z=1.55\) (becoming subdominant to COSMOS at \(z>1.375\)), where the [O II] line passes out of the optical range. DESI jointly with KiDS-VIKING calibrates 87% of the LSST / Euclid color-space (as a fraction of cells) at \(z<0.35\), 84% from \(0.8<z<1.2\), and 77% overall for \(z<1.65\). While this is the overarching calibration extent of early DESI data in the KV fields, the higher coverage where DESI main survey targets supply galaxy spectra reveals that future efforts to span deeper magnitudes, and DC3R2-like efforts to bridge the spaces between main target categories, could make real gains in filling more of the space approximately between \(0.4<z<0.8\). For the full color space independent of redshift, the DESI-KV sample, inclusive of DC3R2, calibrates 56% of the space.

Figure 5: Distribution of the weights for main target classes. Over-represented spectroscopic targets will have lower weight by construction, in order to generate a more representative \(p(z)\).
Similarly, DESI will produce better constraints on the \(p(z|c)\) in the future, jointly with photometry of improved depth and with the full extent of the five year survey. With currently available KiDS-VIKING photometry, we can see in Fig. 8 that the redshifts for a given color are broader than in COSMOS, by a factor of \(\sim\)1.7, likely due to photometric scatter affecting cell assignment. The 68%-region for the span of redshifts in a cell in aggregate, \((z_{c}-\bar{z}_{c})/(1+\bar{z}_{c})\), with the DESI-KV sample is [-0.0392, 0.0361], yielding an approximate \(\sigma_{z}=0.0376\). In the bottom panel we can see how the distributions vary in individual cells, with the median of cells falling at approximately this aggregate value (marked by the dashed lines). DESI provides a powerful data set, and if photometry similar to that of the COSMOS field were supplied for our spectra, we might anticipate our estimate of the variance in our overall redshift distributions statistically decreasing by a factor of 0.317 relative to our current reported, unweighted uncertainty (i.e. \(\sigma_{z,\ \mathrm{COSMOS}}^{2}/\sigma_{z,\ \mathrm{DESI-KV}}^{2}\)). With an already substantial increase in the number of spectra, deeper photometry would provide strong advantages. While the depth of the sample can be increased, the bounds on redshift range are fixed by the current instrument.
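The aggregate scatter statistic quoted here can be reproduced from a catalog of redshifts and cell assignments; a minimal sketch follows, assuming numpy arrays `z` and `cell`.

```python
import numpy as np

def aggregate_sigma_z(z, cell):
    """Pool (z - zbar_c)/(1 + zbar_c) over all cells; return the 68% region."""
    devs = []
    for c in np.unique(cell):
        zc = z[cell == c]
        devs.append((zc - zc.mean()) / (1.0 + zc.mean()))
    devs = np.concatenate(devs)
    lo, hi = np.percentile(devs, [16, 84])
    # e.g. [-0.0392, 0.0361] gives an approximate sigma_z of 0.038
    return lo, hi, 0.5 * (hi - lo)
```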
## 5 Characterization of spectroscopic selection biases on redshift calibration
Ideal photometric redshift calibration for weak lensing surveys has to account for more than just the dependence of redshift on observed color. Even at fixed color, the magnitude and the explicit and implicit selections of the spectroscopic or the weak lensing source galaxy sample can have an impact on the resulting redshift distribution (e.g. Gruen and Brimioulle, 2017; Newman and Gruen, 2022). Testing or accounting for this requires a spectroscopic sample that simultaneously spans the color-space and the depth of the wide sample, and a full understanding of the selection process. Accounting for selection effects is difficult, but may be achieved via increasingly realistic simulations of the survey transfer function (see e.g. Myles et al. 2021 and Everett et al. 2022 for weak lensing source galaxy selection accounted for in redshift calibration). For future surveys, spanning the depth will no longer be possible across all colors on account of the required spectroscopic exposure times. A sub-cell calibration that accounts for magnitude variation may become necessary at this stage, and the accuracy of this calibration will heavily depend on the depth and multiplicity of the available spectroscopic redshifts.

Figure 6: Depiction of the SOM with four complementary samples, post completeness cuts: Emission Line Galaxies (green), Luminous Red Galaxies (red), the Bright Galaxy Survey (blue) and DC3R2 selected targets (orange), where the strength of each color channel is directly proportional to the fraction of galaxies in that cell that come from the sample noted. The distributions of normalized _Redrock_ SED fits are shown for three choices of SOM cell, with a colored envelope for the 68% quantile region about the median of all templates, shown in black. The shaded regions denote rest-frame wavelengths not observed by DESI according to the best fit redshift. Normalization is done at 3000 Angstroms for star forming galaxies and at 8000 Angstroms for mostly quiescent galaxies.

Figure 7: Existing spectroscopic coverage of the Rubin/Euclid color space, shown as a histogram of SOM cell counts against their median redshifts. Contributions from this work (DESI + KV photometry, blue) are combined with data taken in the COSMOS field for reference (black). The contribution of DC3R2 populated cells, weighted by the count of DESI spectra per combined spectroscopic count (red), demonstrates that DC3R2 alongside DESI now provides the majority of spectroscopic calibration at redshifts below \(z\approx 1.2\).
Here we investigate the impact of these systematics on redshift calibration with the DC3R2 spectroscopic sample. Section 5.1 tests the effects of magnitude cuts and spectroscopic selections on the redshift distributions of realistic Stage-III lensing survey redshift bins. Sections 5.2 and 5.3 explicitly check for trends in redshift as a function of magnitude at fixed color, and the impact of noise in observed colors on those. Finally, Sections 5.4 and 5.5 compare the systematic effects found to the requirements on future surveys.
### Impact on Calibration of Redshift Bins
Selection effects in spectroscopic samples will distort crucial estimates for weak lensing surveys, like that of the mean redshift per tomographic bin. To explore these selection effects we use the KiDS-450 galaxy abundances in our SOM cells to infer redshift distributions for five KiDS-like tomographic bins. We create these by sorting cells covered by DESI spectroscopy by their median redshift, associating roughly equal numbers of galaxies with each bin, \(b\). The estimated distribution for each bin follows
\[p(z|b,\mathrm{sel})=\sum_{\mathrm{cell}\in b}p(z|c)\ p(c|\mathrm{sel}) \tag{4}\]
for a given selection, sel, and bins comprised of cells, \(c\). The first term amounts to the distribution of spectroscopic redshifts in a given cell and the second amounts to a weighting factor that relies on the abundances of galaxies in the calibrated sample (here, KiDS-450). The resulting \(n(z)\)s are depicted in Fig. 9a.
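A minimal sketch of this estimator follows, assuming arrays of spectroscopic redshifts and cell assignments plus per-cell KiDS-450 abundances; histogram bins stand in for the continuous \(p(z|c)\).

```python
import numpy as np

def bin_nz(z_spec, cell_spec, cells_in_bin, abundance, z_edges):
    """Eq. (4): abundance-weighted sum of per-cell redshift histograms.
    abundance maps cell id -> number of photometric galaxies, i.e. p(c|sel)."""
    nz = np.zeros(len(z_edges) - 1)
    norm = sum(abundance[c] for c in cells_in_bin)
    for c in cells_in_bin:
        zc = z_spec[cell_spec == c]
        if zc.size == 0:
            continue
        pz_c, _ = np.histogram(zc, bins=z_edges, density=True)
        nz += (abundance[c] / norm) * pz_c
    return nz
```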
We then perform a set of tests on the impact of selection biases by applying further selections to the spectroscopic sample and determining how that changes the estimated \(n(z)\)s. Table 2 shows the resulting change in mean redshift for the tomographic bins.
Every comparison ensures that the color selection for each bin does not change as the spectroscopic selection effect is applied, i.e. is made for bins defined by the same cell envelope both before and after the additional spectroscopic selection. This means that if the selection eliminates certain SOM cells from spectroscopic coverage, we use the estimated mean redshift of the same reduced set of cells even for the fiducial sample in order to isolate the effect of selection bias at fixed color.
The spectroscopic selection effects we examine are:
* **Magnitude Selection (MAG_GAAP_Z \(<\) 21.0)**: For \(dz/dm=0\) and a well populated color space, a magnitude cut would introduce no change in mean redshift. In the presence of significant \(dz/dm>0\), however, a magnitude cut would bias the inferred redshift towards a lower mean value. This operation and the result are demonstrated in Fig. 9b. We indeed find a small (\(|\Delta z|<0.01\)) impact of the magnitude cut on the mean redshift inferred for the two lowest redshift bins, but an impact that exceeds current calibration requirements on the mean redshift inferred for the higher redshift bins, up to almost \(\Delta z=0.1\). This cut changes the mean magnitude in each bin, and the spectroscopic redshift counts, according to Table 1. As this cut was chosen to be moderately bright for demonstration, we also explore how less severe selections affect this metric in Fig. 10, as half magnitude steps down from the KV limiting magnitude in the i-band. This demonstrates the importance of representative spectroscopic redshifts in our toy model, which is constructed in a way that fundamentally differs from DES, KiDS, and HSC, as our SOM is not trained on a magnitude in addition to its colors.
* **Stricter Redshift Confidence (\(\Delta\chi^{2}>40\))**: As a consequence of enforcing a more severe confidence in the fitted model, this will remove spectra in an SED-dependent way from the ELG and LRG samples where, by default, some objects with smaller \(\Delta\chi^{2}\) are considered confident redshifts. When implemented, less than 2% of spectroscopic galaxies are removed. The impact of this stricter selection is small (\(|\Delta z|<0.005\)) but coherent across all redshift bins, i.e. it biases all redshift distributions towards a lower mean by preferentially removing higher redshift galaxies from the sample within a given cell. In the highest redshift bin, where the fraction of ELG targets contributing to the calibration is large, the effect exceeds the redshift calibration requirements of future weak lensing experiments (cf. also Hartley et al. 2020 for the impact of selection based on conventional redshift confidence flags).

\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Bin & 0 & 1 & 2 & 3 & 4 \\ \hline \((\bar{Z}_{\mathrm{mag}})_{\mathrm{fid}}\) & 19.54 & 19.80 & 20.20 & 20.78 & 21.45 \\ \(\Delta\bar{Z}_{\mathrm{mag}}\) & 0.163 & 0.197 & 0.308 & 0.499 & 0.949 \\ \(\Delta N_{\mathrm{spec}}/N_{\mathrm{spec,\,fid}}\) & 0.020 & 0.033 & 0.113 & 0.273 & 0.821 \\ \hline \end{tabular}
\end{table}
Table 1: Changes in properties of a given tomographic bin when subject to MAG_GAAP_Z \(<\) 21, describing the decrease in mean magnitude and the fraction of spectroscopic calibrating galaxies cut.

Figure 8: (Top) The distribution of redshifts across all cells as separation from the mean redshift in a given cell, with the respective \(1\sigma\) region for DESI-KV and (for comparison) COSMOS. (Bottom) Distribution of the unbiased estimator for the \(\sigma_{c}\) cell widths across the SOM. The broader peak observed in this work is likely due to the shallower KiDS-VIKING wide field photometry that DESI spectroscopic redshifts are matched to. Noted in the legend are the shaded regions from the top panel.
* **Removal of Cell Outliers** - Galaxies with large deviation from the median redshift of an ensemble of similar color are more likely to be outliers of various types, e.g. due to blending, AGN light, or redshift determination errors. We define a sample of outliers based on the criterion \[dz=\frac{|z-\mathrm{median}(z)_{\mathrm{cell}}|}{1+\mathrm{median}(z)_{\mathrm{cell}}}<3\sigma_{z,\mathrm{all}}\,\] (5) where \(\sigma_{z,\mathrm{all}}\) is the aggregate standard deviation in cells found in Sec. 4.2, from which \(3\sigma_{z,\mathrm{all}}=0.113\) (a minimal sketch of this cut is given after this list). The selection corresponds to the central 99.7% of redshifts if these were to follow Gaussian distributions of color-independent width at any fixed color, and thus would reject only the most egregious outliers. In practice, these distributions are not Gaussian and this selection cuts 5.1% of objects. The selection is relative to the median rather than the mean to robustly deal with cells that are undersampled or heavily affected in their mean redshift by outliers. The impact of removing the outlier population defined this way is maximally of order \(\Delta z\approx 0.02\), and has reduced impact in the highest redshift bins. While a number of these outliers can be attributed to photometric scatter across degenerate regions in the SOM (neighboring cells with large separation in redshift), others may be real examples of broad or bimodal cell distributions and ought to be examined with visual inspection.
* **Applying Weights, \(w\)** - Spectroscopic redshifts are weighted according to the scheme described in Section 3.2.3, to account for prioritization of spectroscopy for targets of certain morphologies, colors, and SED-types that are not necessarily representative. As the weights can be somewhat noisy in under-sampled regions of color space (in targeting), this comparison is additionally restricted to cells where the resulting shift in \(\bar{z}_{c}\) from applying the weights is small. While this will result in an underestimation of the true weighting effect for all galaxies, these are the regions where weights can be applied confidently given the spectroscopic and targeting counts. We apply a cell envelope selection where \(|\bar{z}_{c}-\bar{z}_{c,w}|<0.08\), which retains \(>90\%\) of cells. Note that the effect of this scheme in Table 2 is always to lower the mean redshift in a given bin. The impact is maximally \(\Delta z\approx 0.01\) in the most sparsely populated spectroscopic bins. Additionally, if these weights are applied in the same way to the magnitude selection (\(Z<21\)), we see in Table 2 that they mitigate the bright spectroscopic bias in high bins, but do not eliminate it.
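The outlier rejection of Eq. (5), referenced in the list above, amounts to a per-cell clip around the median; a minimal sketch follows, with the threshold fixed to \(3\sigma_{z,\mathrm{all}}=0.113\) from Sec. 4.2.

```python
import numpy as np

def outlier_mask(z, cell, threshold=0.113):
    """True for galaxies within 3*sigma_z,all of their cell's median."""
    keep = np.zeros(z.size, dtype=bool)
    for c in np.unique(cell):
        idx = cell == c
        med = np.median(z[idx])
        keep[idx] = np.abs(z[idx] - med) / (1.0 + med) < threshold
    return keep
```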
### Magnitude Dependence of Redshift at Fixed Color
Past analyses of photometry and spectroscopy in the COSMOS field have shown the magnitude dependence of redshift at fixed color cell to be small and well described by a linear behavior with a slope of order \(3\times 10^{-3}\) (Masters et al., 2017). The equivalent measurement made in the KV data with DESI redshifts is depicted in Fig. 11b. Here we examine the relation of differences in \(Z\)-band magnitude and redshift between pairs of galaxies that occupy the same color-cell in the SOM. A cell with \(n\) galaxies contributes \(n(n-1)/2\) data points to Fig. 11b. We see a linear relationship, with a large amount of scatter. The raw slope measured, \(dz/dm=0.0250\pm 0.0009\), is an order of magnitude larger than that reported in previous studies (see App. E for methodology). This ought not to be taken at face value: Section 5.3 (Fig. 11a) explores photometric scatter as the largest source of bias in this measurement, as well as the chief difference between the data set used in this study and that of previous ones. Furthermore, comparisons of \(dz/dm\) will be affected to lesser degrees by the other second order effects discussed in Section 4.1.

Figure 9: A simple redshift calibration scheme for the G09, G12, G15 fields of KiDS-450 using DESI + DC3R2 spectroscopy reveals that applying selection effects on the spectroscopic catalog can significantly underestimate the mean redshift of a photometric sample of the same observed color.

Figure 10: Lack of representative spectra for an inference biases the mean redshift of a bin for our sample matched to KV. Depicted are the shifts in mean redshift for our tomographic bins should the calibrating spectroscopic galaxies be preferentially brighter than the source galaxies. We show several choices of magnitude cut incrementally brighter than the KiDS limiting magnitude of \(i=23.63\), and compare that to the Rubin science requirement (LSST Dark Energy Science Collaboration et al., 2018). Small offsets have been applied to the mean redshift per bin, \((\bar{z})_{\rm bin}\), for visualization.
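The pairwise construction underlying Fig. 11b can be sketched as follows. Note that this toy version fits a through-origin least-squares slope directly to the pairs, whereas the measurement in the text fits to medians in magnitude slices (App. E).

```python
import numpy as np
from itertools import combinations

def pair_differences(z, mag, cell):
    """All unique within-cell galaxy pairs: returns (dm, dz) arrays."""
    dms, dzs = [], []
    for c in np.unique(cell):
        idx = np.flatnonzero(cell == c)
        for i, j in combinations(idx, 2):
            dms.append(mag[j] - mag[i])
            dzs.append(z[j] - z[i])
    return np.asarray(dms), np.asarray(dzs)

def slope_through_origin(dm, dz):
    """dz/dm estimate; pair differences are antisymmetric, so no intercept."""
    return np.sum(dm * dz) / np.sum(dm * dm)
```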
### Systematic Uncertainty due to Photometric Scatter
With the depth of our SOM resembling that of future deep surveys, it could be problematic that we use relatively shallow photometry in KiDS-VIKING to assign our spectroscopic galaxies. Photometric scatter enters our calculation through perturbations in the cell assignment, which comprises our definition of fixed color. A potential systematic effect of this photometric scatter on the measured slope can be explored with the test described here. We can perform a direct measurement of the slope introduced by photometric noise by (1) removing any existing \(dz/dm\) from our data by randomly shuffling all spectroscopic redshifts in a given SOM cell, thus nulling \(dz/dm\) while preserving the \(p(z|c)\), (2) applying a random Gaussian draw of the width of the flux error reported in the catalog for each band, thus perturbing measured colors, and (3) reassigning galaxies to the SOM based on these perturbed fluxes and repeating our measurement of \(dz/dm\). Additionally, we can (4) iteratively reshuffle and perturb many times to reduce statistical uncertainty in the measurement from the limited sample size. The only contributor to this final, measured \(dz/dm\) will be photometric scatter, making it a measurement of this systematic that can be compared across photometric surveys of different depth. Using (4) to quintuple the number of points available to us for this measurement, we observe in Fig. 11a that \((dz/dm)_{\rm sc}=0.0132\pm 0.0007\) for the reported KV errors associated with our DESI targets in this SOM. The slope introduced this way can be measured across color space, i.e. across the SOM. This color-dependent effect is subtracted off in the second panel of Fig. 12, leaving an estimate of an intrinsic \(dz/dm\) that is corrected at first order, at least under the assumption that the reported flux measurement errors are accurate. While individual cells carry significantly large intrinsic \((dz/dm)_{\rm c}\) (of order \(\approx 0.01\)), the average across all cells is much lower, at \((dz/dm)_{\rm c,avg.}=-7\times 10^{-4}\).
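One iteration of steps (1)-(3) can be sketched as below, reusing the `assign_cell` and `pair_differences` helpers from the earlier sketches; the fluxes, errors, and assignment function are placeholders, and step (4) simply repeats the call and averages the remeasured slopes.

```python
import numpy as np

rng = np.random.default_rng(1)

def null_and_perturb(z, flux, flux_err, cell, assign_fn):
    """Returns (z_shuffled, reassigned_cells); any dz/dm remeasured on these
    is attributable to photometric scatter alone."""
    z_shuf = z.copy()
    for c in np.unique(cell):                     # (1) null intrinsic dz/dm
        idx = np.flatnonzero(cell == c)
        z_shuf[idx] = rng.permutation(z_shuf[idx])
    flux_pert = flux + rng.normal(0.0, flux_err)  # (2) perturb by reported errors
    # (3) reassign: assign_fn is assumed to convert a row of fluxes to colors
    # and return the best-matching SOM cell
    new_cell = np.array([assign_fn(f) for f in flux_pert])
    return z_shuf, new_cell
```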
#### 5.3.1 Impact of photometric noise levels
We can see in Fig. 11a that the systematic measured for KiDS-VIKING-like error in a SOM of Euclid-like resolution is significant, \((dz/dm)_{\rm sc}\approx 0.013\), and the likely dominant contributor to our overall measured slope. With this effect in mind, we revisit the original measurement performed in Masters et al. (2019) using the same field and similar photometric filters. Repeating the test of Section 5.3 used to measure the impact of photometric scatter, but with the lower noise levels of the COSMOS photometry, we find that the original measurement had a contributing noise bias of \((dz/dm)_{\rm sc}\approx 0.003\pm 0.0004\). This is similar to the measured slope in Masters et al. (2019), and means that the intrinsic \(dz/dm\) in that sample is roughly consistent with zero. With the limitations of the coarse photometric scatter test we have applied, we do not claim to have a measurement of the _intrinsic_ \(dz/dm\) in COSMOS to better than 0.003 as a result.
Also worthy of note is that doubling the photometric error for KV-associated DESI redshifts produces \((dz/dm)_{\rm sc,\times 2}=0.0256\pm 0.0010\), which is larger than the observed raw slope in Sec. 5.2. Thus, in case the photometric error in the KV catalogs should be underestimated at levels that have been reported in other studies doing source injection into survey images (e.g. Everett et al., 2022), it could be that the average intrinsic slope of the DC3R2 sample is indeed consistent with zero as well (across the full sample, not simply cell-by-cell).
As noise in the KiDS-VIKING photometry introduces a large systematic effect on our desired measurement, it is worth exploring what photometry is needed to drive down the error and improve constraints on \(dz/dm\). Given the uncertainty on our estimation of the systematic effect of photometric noise, the hypothesis that \(dz/dm\approx 0\) seems to be consistent with current data. Yet improved photometry such as that from HSC (Aihara et al., 2018) could allow DESI spectra to be used to better effect. With a background limited, simple error model, and HSC-like flux errors for _grizy_ to augment KiDS-VIKING _uJHKs_, we find that \((dz/dm)_{\rm sc}\approx 0.008\), which halves the effect, though does not eliminate it. In the limit of noiseless optical (_ugriz_) photometry, we still find \((dz/dm)_{\rm sc}\approx 0.004\). We can imagine the LSST scatter with these objects would be comparable to this value, without improved NIR or IR follow up. Note that this is still larger than the slope found with the COSMOS photometry that includes the very deep UltraVISTA NIR data, demonstrating both that such small systematic errors are achievable in principle, and that they rely on deep NIR that is difficult to achieve with currently operating instruments.
### Future survey requirements
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Selection & \(\Delta\bar{z}_{0}\) & \(\Delta\bar{z}_{1}\) & \(\Delta\bar{z}_{2}\) & \(\Delta\bar{z}_{3}\) & \(\Delta\bar{z}_{4}\) \\ \hline \(Z<21.0\) & 0.0097 \(\pm\)0.0057 & 0.0067 \(\pm\)0.0027 & 0.0160 \(\pm\)0.0041 & 0.0317 \(\pm\)0.0058 & 0.0916 \(\pm\)0.0068 \\ \(\Delta\chi^{2}>40\) & 0.0007 \(\pm\)0.0057 & 0.0010 \(\pm\)0.0027 & 0.0005 \(\pm\)0.0040 & 0.0015 \(\pm\)0.0058 & 0.0045 \(\pm\)0.0119 \\ \(dz<0.113\) & 0.0182 \(\pm\)0.0064 & 0.0086 \(\pm\)0.0030 & -0.0115 \(\pm\)0.0040 & -0.0036 \(\pm\)0.0057 & 0.0034 \(\pm\)0.0124 \\ \(w(b,c,s)\) scheme & -0.0041 \(\pm\)0.0055 & -0.0062 \(\pm\)0.0031 & -0.0152 \(\pm\)0.0042 & -0.0155 \(\pm\)0.0055 & -0.0033 \(\pm\)0.0130 \\ \(Z<21.0,w\) & 0.0016 \(\pm\)0.0054 & 0.0014 \(\pm\)0.0031 & 0.0031 \(\pm\)0.0042 & 0.0323 \(\pm\)0.0055 & 0.0642 \(\pm\)0.0071 \\ \hline \end{tabular}
\end{table}
Table 2: Consolidation of the shifts in mean redshift inferred for a tomographic bin with KiDS-VIKING-like cell abundances if different selections are made on the spectroscopic sample. \(\Delta\bar{z}_{i}\) is always the mean redshift of the new selection subtracted from the fiducial mean. The final row is a repeat of the first magnitude cut with the weights applied both before and after the selection. Error bars are produced by bootstrapping cells with replacement in the selection.

Figure 11: Plots depicting the systematic from photometric scatter induced in \(dz/dm\) (left) and the raw measurement from the joint DC3R2-KV sample (right), where each data point is the difference between a unique pair of galaxies occupying the same SOM color-cell in magnitude and redshift. The best-fit slope (dashed red) is fit from a collection of medians and \(\sigma\) (pink) in magnitude slices. If reported photometric errors are accurate, this systematic dominates our measurement.

Figure 12: Color dependence of \(dz/dm\) in the raw data (left) and corrected for the contribution of photometric scatter to the overall slope (right).

The tests presented above have shown limitations to how well we can constrain the trend of mean redshift with magnitude at fixed color in the presence of noisy photometry. Here, we connect this to requirements on how well we need to know \(dz/dm\) for future, Stage-IV weak lensing surveys. We can approximate the redshift calibration error due to imperfectly known magnitude dependence as
\[\frac{\Delta z}{(1+\bar{z})}\approx\Delta(dz/dm)\times(\bar{m}_{\rm wide}-\bar{m} _{\rm spec}), \tag{6}\]
where \(\Delta(dz/dm)\) is our error in the magnitude dependence of redshift at fixed color, \(\bar{z}\) is the mean redshift of the sample being calibrated, and \((\bar{m}_{\rm wide}-\bar{m}_{\rm spec})\) is the offset between the mean magnitudes of wide field observed galaxies and the spectroscopic sample used to calibrate them.
The requirement for final Rubin analyses is expected to be \(|\Delta z|/(1+\bar{z})\approx 0.001\) (LSST Dark Energy Science Collaboration et al., 2018). With the limiting \(i\) band magnitude of final Rubin data and the mean magnitude of the current DESI sample for \(\bar{m}_{\rm wide}=26.3\) and \(\bar{m}_{\rm spec}=21.1\), respectively, we find that we need to determine \(|\Delta(dz/dm)|\leq 0.0002\). If we assume the \(\Delta(dz/dm)\approx 0.004\) that was determined in Section 5.3.1 to be the limit obtainable with current NIR photometry and deep Rubin observations, our needs and capabilities are at odds.
If instead of considering the faintest galaxies used by Rubin we estimate the sample's mean magnitude, using the slope of the luminosity function from Gruen and Brimioulle (2017), observed to the limiting i-band magnitude for LSST, we obtain \(\bar{m}_{\rm wide}=\bar{i}_{\rm Rubin}\approx 24.90\). This provides a more realistic, less conservative requirement of \(\Delta(dz/dm)\leq 0.0006\), which is still stricter than all our estimates for the effect of photometric scatter on this slope by almost an order of magnitude.
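Rearranging Eq. (6) for the allowed slope error gives \(\Delta(dz/dm)\leq|\Delta z|/(\bar{m}_{\rm wide}-\bar{m}_{\rm spec})\); the quoted bounds are of this order. A quick numerical check, with the \((1+\bar{z})\) factor absorbed into the requirement (an assumption of this illustration):

```python
req = 0.001            # |Delta z| / (1 + zbar) requirement for final Rubin
m_spec = 21.1          # mean magnitude of the current DESI sample
for m_wide in (26.3, 24.90):   # limiting vs. estimated mean Rubin i-band mag
    # allowed slope error per unit (1 + zbar):
    print(m_wide, req / (m_wide - m_spec))   # ~1.9e-4 and ~2.6e-4
# The quoted 0.0002 and 0.0006 bounds additionally reflect the (1 + zbar)
# factor of the relevant tomographic bins.
```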
### Interpretation
The previous sections studied how selections on the spectroscopic calibration sample provided here may impact the redshift distributions estimated using that sample. We performed these tests using a simple redshift calibration scheme that assumes that, at a fixed observed color, the distributions of true redshifts of a photometric and a spectroscopically selected galaxy sample are identical. We note that none of the recent, Stage-III analyses have relied on a redshift calibration scheme that was quite this simple, and hence the biases we identify are not expected to be present at the level found here in recent analyses. For example, Fig. 10 depicts a case where the calibrating spectra are up to two magnitudes brighter than the limiting magnitude of the sample, whereas the formal KiDS analysis made use of a variety of spectroscopic sources that spanned the depth of their photometric sample (van den Busch et al., 2022). Similarly, DES did not strictly apply a spectroscopic selection and made use of narrowband photometric redshifts where there were no representative spectroscopic redshifts (Myles et al., 2021). Despite this, extrapolation of redshift calibration samples to photometric objects of similar color but fainter magnitude is likely required for future, deeper photometric surveys, and hence it is useful to study the potential pathways for bias in such an application of our sample. Both the DES and KiDS SOMs (Myles et al., 2021; Wright et al., 2019) are trained on colors _and_ a magnitude (or luptitude), which DESC will be unable to do without throwing away a substantial part of their sample. Hence the SOM for this analysis matches spectroscopic redshifts to galaxies based on their colors alone.
A necessary condition for a non-zero bias in such a redshift calibration scheme is that the expectation value of redshift depends on properties relevant for the spectroscopic target selection and measurement success other than just a galaxy's observed color. The most salient such effect we identify is via a trend of mean redshift with observed magnitude at given observed color, \(dz/dm\). We find this slope to be much steeper than reported in previous studies. The cause for this steepness is an effect of the larger photometric noise in the colors measured for our target sample, rather than a large dependence of redshift on magnitude at fixed _true_ color. Most of the imaginable selection effects are related to a galaxy's magnitude, and hence the non-zero \(dz/dm\) propagates into several of the bias tests we perform and merits further interpretation.
The bias imposed by photometric scatter on \(dz/dm\) varies with color, as seen in Fig. 12, and potentially also with other selection choices. The scatter itself is asymmetric as it does not shift objects into neighboring cells isotropically in the map, and will induce a larger \(dz/dm\) in areas with large color-redshift degeneracies (i.e. a small error in color could lead to a large offset in the mean local redshift). Our ability to understand the intrinsic \(dz/dm\) present in the data depends heavily on the quality of our photometry and/or our ability to correctly model photometric measurement errors. As seen in the right panel of the same figure, the intrinsic \(dz/dm\) is similarly color dependent, but just as likely to be negative as positive (indicating that it is likely a noisy measurement).
Our method to constrain this bias relies on the reported photometric errors. However, if these errors were misestimated by factors of order unity, potentially in a color dependent way, the entire measured slope could be the result of this systematic, as discussed in Sec. 5.3.1. Literature analyses have demonstrated that estimating the deviation between observed photometry and the true fluxes of a galaxy requires realistic image simulations processed by photometric pipelines, and reported errors are frequently underestimated at levels that indeed reach factors of two (e.g. Huang et al., 2017; Everett et al., 2022). No literature study exists on the accuracy of reported KV GAAP flux errors specifically, but the relative linearity of the algorithm and preliminary studies on image simulations (Li et al., 2023b) imply that the misestimation of the error in GAAP is non-zero but not a factor of two.
Consequently, Sec. 5.1 demonstrates that selection effects in magnitude suffer significantly from the \(dz/dm\) dependence in higher redshift bins, where objects tend to be fainter, and this emphasizes the role that deep photometry, in contrast to merely multi-band coverage, plays when making maximal use of spectroscopic redshifts. We can see with the \(\sigma_{z}\) cut, on the narrowness of the redshift distribution for a given cell, that potential outliers in a given cell that arise either from photometric scatter or a misattributed redshift can have dramatic effects on the redshift distribution. Since this effect is largest in the lowest redshift bin, where objects are brightest, we might suspect that these outliers are dominated by cell-to-cell photometric scatter. Cutting these cells does further limit the color space of a given weak lensing analysis, and ought to be avoided where possible. For this reason, visual inspection works like Lan et al. (2023) and careful selection of the redshift sample are crucial to future analyses.
It has been established (a) that LSST will require a very accurate measurement of \(dz/dm\) if calibrated by DESI alone (Sec. 5.4), and (b) that even with perfect optical photometry the photometric scatter contribution to \(dz/dm\) with existing DESI spectra is larger than the acceptable error by almost an order of magnitude (Sec. 5.3.1). The strategies to account for this in future weak lensing redshift calibration efforts can be threefold:
* **Deeper Photometry**: Despite (b), specifically improving NIR/IR measurements will decrease the systematic contributor to \(dz/dm\) and potentially allow for a measurement that meets photo-\(z\) requirements.
* **Deeper Spectroscopy**: Deeper spectra will reduce a given survey calibration's dependence on \(dz/dm\), mitigating how well known this slope has to be in (a). After DESI's current survey, it will be uniquely situated to push for deeper spectra.
* **Modeling**: Perhaps the most viable approach is to take into account the effect of photometric scatter with appropriate modeling of the data. Future analyses could constrain the bias via forward modeling from a color space of true photometry. In this approach, the intrinsic and observed \(dz/dm\) are folded into the inference alongside the systematic in a methodologically appropriate way, and the size of the systematic effect is no longer a limiting factor.
## 6 Conclusions
Photometric redshift calibration, i.e. estimating the redshifts for galaxies that we only observe through a collection of filters, requires a thorough understanding of the color-redshift relation, which is very non-linear across the full galaxy population. This paper presents a catalog of spectra, the Dark Energy Spectroscopic Instrument (DESI) Complete Calibration of the Color-Redshift Relation (DC3R2) secondary target survey, designed to aid in the redshift calibration for a large fraction of the weak lensing source galaxy samples of future photometric surveys. The data include associated weights that turn the survey samples (ELGs, LRGs, BGS) into one that is representative at a given \(ugrizYJHK_{s}\) color and limiting magnitude of 23.68 (5\(\sigma\), \(i\)-band), using KiDS-VIKING as a test-bed. We chose to select targets on this 9-band photometry in order to break redshift-type degeneracies, and KiDS-VIKING provided the most constraining data over this area. With this unprecedented quantity of DESI spectroscopic redshifts, we examine how it benefits future surveys and allows us to examine spectroscopic selection effects.
* The DESI sample reported here calibrates the redshift distribution of roughly 56% of the galaxies in COSMOS via 230k spectroscopic redshifts. For the photometric color space that will be visible to DESC and Euclid (approx. 98% complete at \(i=25.3\)), this sample corresponds to coverage of 6248 cells out of 11250. Approximately 41% of the full COSMOS color-space and galaxy population is calibrated from spectroscopic galaxies that are classed as DC3R2 targets (inclusive of overlap with main survey targets), and 4% from uniquely DC3R2 objects.
* However, even though this sample provides an incredible quantity of high-quality spectra, we find that the combination of uncertain photometry and a variety of spectroscopic selection effects can produce substantial biases on the redshift inference for a given lensing redshift bin. We demonstrate that introducing a preference for brighter-magnitude calibration spec-\(z\)s in the presence of photometric scatter effects or intrinsically high \(dz/dm\) biases the mean redshift of the bin at the level of \(\Delta z\approx 0.01\), especially for higher redshift bins (see Fig. 10 for a breakdown). This effect is present even for the shallower test-bed data used here and will be exacerbated for fainter samples, as it enters the inference as an induced magnitude dependence of redshift at fixed color. Fewer photometric bands further worsen the effect, as breaking degeneracies becomes more difficult (e.g. the uncertainty in mean redshift for the year 3 HSC \(grizy\) analysis, Rau et al. 2023, was larger than that of the KiDS-450 \(ugrizYJHK_{s}\) analysis, Wright et al. 2020, by a factor of 1.25, even with KiDS having an additional bin).
* Results for this work are expressed in a color-space that is similar to past work on the Masters et al. (2017) SOM, as our map is a transformation of this space into KiDS-VIKING colors. As demonstrated in Fig. 11, we recover very similar galaxy SEDs and redshifts per cell, but comparisons like that of Fig. 3c ought to be taken with the knowledge that the colors and photometric noise levels between different surveys are not identical.
* Our analysis reveals a general agreement in the color-redshift relation with previous spectroscopic surveys that have explored this color-space. When accounting for effects induced by photometric noise, we also find agreement in the magnitude dependence of redshift at fixed color with the result of Masters et al. (2017).
* Photometry quality has an important role in redshift calibration. This study has found that photometric errors need to be well understood for modeling the color-redshift relation and especially magnitude dependent effects (\(dz/dm\)). In order to constrain this slope for future surveys we require either better photometry than expected, deeper spectroscopy, or improved analysis methodology (discussed in Sec. 5.5). We find that the use of KV photometry in this map does not supply as strong a constraint as past data sets do on the slope \(dz/dm\). However, close examination of the systematic \(dz/dm\) induced by photometric scatter in a high resolution SOM has strengthened the case for the null hypothesis in this work and in past survey data sets. While \(dz/dm\approx 0\) is potentially consistent with our data, deeper photometry or survey simulation will be needed to constrain the slope sufficiently for future weak lensing efforts.
The spectroscopic redshifts measured by the complete 5-year DESI survey will provide unparalleled support for future redshift calibration in weak lensing surveys. To fully leverage the powerful quantity of data discussed in this paper, more accurate photometry and deeper spectroscopic redshifts will be necessary to constrain the magnitude dependence of redshift at fixed color. Exploration into the capability of massively multiplexed spectroscopic instruments, like DESI, to attain redshifts of fainter sources than currently targeted is important. Looking ahead, future surveys will provide additional wavelength and sky area coverage that can improve photometric redshift calibration. Among these is a follow-on program to DC3R2 using the 4-metre Multi-Object Spectroscopic Telescope (4MOST, de Jong et al. 2019) that will observe targets across the same SOM used in this work, though more uniformly, out to redshift \(\sim\)1.55 (Gruen & McCullough 2023). Together with deeper spectroscopic campaigns (e.g. DESI-II, Schlegel et al. 2022) and campaigns including high-quality infrared spectroscopy, these data will form the basis for constraining a model of the galaxy population seen by deep photometric surveys, including its redshift distribution.
## Data Availability
The data and code used to generate the figures in this paper will be made available with its publication1. The redshifts for objects taken during SV are available in the DESI Early Data Release2 (see DESI Collaboration et al. 2023). Visualization tools to explore the data across the color-space in the browser are available via GitHub3.
Footnote 1: [https://zenodo.org/record/8328495](https://zenodo.org/record/8328495)
Footnote 2: [https://data.desi.lbl.gov/doc/releases/edr](https://data.desi.lbl.gov/doc/releases/edr)
Footnote 3: [https://jmccull.github.io/DC3R2_Overview](https://jmccull.github.io/DC3R2_Overview)
## Acknowledgements
The authors thank several people for enabling this work, among them Mara Salvato, for providing the COSMOS collaboration spectroscopic catalog, and Hendrik Hildebrandt, for providing an updated KiDS catalog covering the COSMOS region which allowed us to leverage spectroscopic measurements as well as perform crucial tests on our methodology. Other thanks go towards the wider DESI collaboration and the C3 working group for providing feedback and guidance at several stages of the process.
JM received a Deutscher Akademischer Austauschdienst (DAAD, German Academic Exchange Service) fellowship and funding from the Bavaria California Technology Center (BaCaCTeC) in support of this work. This research was also supported by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311.
This research is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; additional support for DESI is provided by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF's National Optical-Infrared Astronomy Research Laboratory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico; the Ministry of Economy of Spain, and by the DESI Member Institutions.
The authors are honored to be permitted to conduct scientific research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall z-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy. The complete acknowledgments can be found at [https://www.legacysurvey.org/acknowledgment/](https://www.legacysurvey.org/acknowledgment/).
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAP0020828.
Based in part on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017, 177.A-3018 and 179.A-2004, and on data products produced by the KiDS consortium. The KiDS production team acknowledges support from: Deutsche Forschungsgemeinschaft, ERC, NOVA and NWO-M grants; Target; the University of Padova, and the University Federico II (Naples).
|
2309.12217 | A Multi-label Classification Approach to Increase Expressivity of
EMG-based Gesture Recognition | Objective: The objective of the study is to efficiently increase the
expressivity of surface electromyography-based (sEMG) gesture recognition
systems. Approach: We use a problem transformation approach, in which actions
were subset into two biomechanically independent components - a set of wrist
directions and a set of finger modifiers. To maintain fast calibration time, we
train models for each component using only individual gestures, and extrapolate
to the full product space of combination gestures by generating synthetic data.
We collected a supervised dataset with high-confidence ground truth labels in
which subjects performed combination gestures while holding a joystick, and
conducted experiments to analyze the impact of model architectures, classifier
algorithms, and synthetic data generation strategies on the performance of the
proposed approach. Main Results: We found that a problem transformation
approach using a parallel model architecture in combination with a non-linear
classifier, along with restricted synthetic data generation, shows promise in
increasing the expressivity of sEMG-based gestures with a short calibration
time. Significance: sEMG-based gesture recognition has applications in
human-computer interaction, virtual reality, and the control of robotic and
prosthetic devices. Existing approaches require exhaustive model calibration.
The proposed approach increases expressivity without requiring users to
demonstrate all combination gesture classes. Our results may be extended to
larger gesture vocabularies and more complicated model architectures. | Niklas Smedemark-Margulies, Yunus Bicer, Elifnur Sunger, Stephanie Naufel, Tales Imbiriba, Eugene Tunik, Deniz Erdoğmuş, Mathew Yarossi | 2023-09-13T20:21:41Z | http://arxiv.org/abs/2309.12217v1 | # A Multi-label Classification Approach to Increase Expressivity of EMG-based Gesture Recognition
###### Abstract
_Objective._ The objective of the study is to efficiently increase the expressivity of surface electromyography-based (sEMG) gesture recognition systems. _Approach._ We use a problem transformation approach, in which actions were subset into two biomechanically independent components - a set of wrist directions and a set of finger modifiers. To maintain fast calibration time, we train models for each component using only individual gestures, and extrapolate to the full product space of combination gestures by generating synthetic data. We collected a supervised dataset with high-confidence ground truth labels in which subjects performed combination gestures while holding a joystick, and conducted experiments to analyze the impact of model architectures, classifier algorithms, and synthetic data generation strategies on the performance of the proposed approach. _Main Results._ We found that a problem transformation approach using a parallel model architecture in combination with a non-linear classifier, along with restricted synthetic data generation, shows promise in increasing the expressivity of sEMG-based gestures with a short calibration time. _Significance._ sEMG-based gesture recognition has applications in human-computer interaction, virtual reality, and the control of robotic and prosthetic devices. Existing approaches require exhaustive model calibration. The proposed approach increases expressivity without requiring users to demonstrate all combination gesture classes. Our results may be extended to larger gesture vocabularies and more complicated model architectures.
_Keywords_: Multi-Label Classification, Myoelectric Control, Gesture Recognition, Expressivity, Human-Computer Interface, Partially-Supervised Learning, Data Augmentation, Synthetic Data Generation, Surface Electromyography (sEMG)
## 1 Introduction
Surface electromyography (sEMG) provides a convenient sensor modality for human-computer interaction (HCI) applications [1]. In the past two decades, research efforts have sought to translate the electrical activity associated with muscle contraction into control commands for general use computing, prosthetic control, and motor rehabilitation [2, 3]. sEMG-based gesture recognition describes the task of classifying hand gestures from an sEMG signal. To date, nearly all research efforts in this task have concentrated on the classification of single gestures. The expressivity of the gesture set can be greatly increased by allowing the combination of gestures; for example, combining a wrist movement to indicate a cursor direction, with a finger movement to indicate a mouse "click." Combining gestures in this way results in a multi-label classification problem.
Algorithms for multi-label classification can be grouped broadly into _algorithm adaptation_ methods and _problem transformation_ methods [4]. The key difference is that algorithm adaptation focuses on designing a training objective to specifically address the multi-label problem, whereas problem transformation focuses on ways of combining multiple instances of existing single-label classifiers. The choice of algorithm adaptation or problem transformation is highly dependent upon the details of the classification task. Research on gesture composition via multi-label classification is emerging but, to date, remains relatively limited.
High accuracy in classifying movement along multiple axes with application to prosthetic control has been achieved using fully supervised data that included all possible movement combinations [5, 6, 7]. However, the requirement that all possible combinations of gestures be included in the training dataset limits the creation of expressive combinations of gestures, due to the time and effort needed to collect such exhaustive data. Therefore, an ideal combination gesture classification scheme would use only single gestures as training data. Few attempts have been made to classify combinations of gestures from single gestures. Several algorithm adaptation based approaches have utilized methods in which all classes that exceed a probability threshold are combined to form a composite gesture [8, 9]. These approaches are not well suited for mutually exclusive combinations of gestures, as they may predict gesture combinations that are not biomechanically possible (i.e. simultaneous finger extension and flexion).
This issue can be avoided by delineating labels into mutually exclusive groups and training a multi-label classifier. Problem transformation approaches utilizing separate classifiers for gesture sets are potentially well-suited to the multi-label classification of mutually exclusive gestures. To our knowledge, only a single study has investigated multi-label classification using a problem transformation approach. In that study, separate classifiers for flexion and extension of each digit were employed to construct multi-digit movements [10]. While novel, this approach has limited utility given the large number of degrees of freedom of the hand.
In this work, we construct a vocabulary of gestures with two-part labels consisting of a direction component derived from wrist movement and a modifier component derived from finger movement. We then apply a problem transformation approach - in particular, we require that each prediction consists of exactly one direction and one modifier component. To limit calibration time we restrict training data to consist of only single direction and single modifier gestures. To accommodate this label structure, we use a model architecture in which an incoming gesture is simultaneously classified by two models; one model estimates the direction component and another model estimates the modifier component. Critically, we explore both a parallel and hierarchical implementation of the model architecture.
We hypothesized that using a standard supervised learning approach with this model architecture and single gesture calibration data would unavoidably result in poor predictive accuracy for combination gestures. We, therefore, explore a strategy to address this challenge using synthetic data augmentation; we collect a training set with only real single gestures and extrapolate to the set of combination gestures. In particular, given examples of two real single gestures, such as Up and Pinch, we can construct a synthetic combination (Up, Pinch) gesture by blending their feature vectors.
We collect wrist worn sEMG data from healthy participants following instructions given via a custom user interface. Critically, we utilize a set of movements completed using a joystick to obtain ground truth labels for all actions and eliminate
task error as a source of label noise. We perform three computational experiments in which we examine the model architecture and classification approach (Experiment 1), the selection of subsets of synthetic gesture combinations (Experiment 2), and the augmentation of single gestures to resolve class imbalances created by the creation of synthetic data (Experiment 3). We report results with respect to a lower bound model trained only on single gestures, and an upper bound model trained on both single gestures and real combination gestures. Through these experiments, we demonstrate the feasibility of using the problem transformation approach to multi-label classification of disjoint gesture sets from single label gesture training data with synthetic combinations.
## 2 Methods
### Subjects
A total of 11 individuals (6 male / 5 female, 25.18\(\pm\)3.86 years of age) participated in the study. All participants were right-handed, as confirmed via self-report. Prior to participating, subjects confirmed that they did not have any muscular, orthopedic, or neurological health issues that could affect their ability to perform the experiment.
### Gesture Vocabulary
We defined an action space in which subjects may perform one of four possible direction gestures (Up, Down, Left, Right), one of two possible modifier gestures (Pinch, Thumb), and a rest gesture. We explicitly increase expressivity by allowing combinations of one direction and one modifier, such that the total set of possible actions is 15 (4 direction only, 2 modifier only, 8 combinations, and rest). We refer to combination gestures as "doubles."
### Sensors and Measurement
Subjects were seated comfortably in front of an LCD monitor with their arms supported on armrests and their right hand positioned on a joystick. Surface electromyography (sEMG, Trigno, Delsys Inc., 99.99% Ag electrodes, 1926 Hz sampling frequency, common mode rejection ratio: \(>80\) dB, built-in 20-450 Hz bandpass filter) was recorded from eight electrodes attached to the right forearm with adhesive tape. The eight electrodes were positioned with equidistant spacing around the circumference of the forearm at a four-finger-width distance from the ulnar styloid (the subject's left hand was wrapped around the right forearm at the ulnar styloid to determine the electrode placement); the first electrode was placed mid-line on the dorsal aspect of the forearm, mid-line between the ulnar and radial styloid (see Figure 1).
Ground-truth labels were collected by simultaneously recording from the joystick (Logitech Extreme 3D Pro). The pitch axis (continuous) was used for Up (radial deviation) and Down (ulnar deviation) gestures, while the yaw axis (continuous) was used for Left (wrist flexion) and Right (wrist extension) gestures. We selected the direction with the greatest activation at any one time; joystick movement had to pass a threshold of 0.25 on any given axis to be considered active. The trigger button (index finger flexion) was used for Pinch gestures and the side thumb button (thumb adduction) for Thumb gestures. To reduce system jitter, we smoothed the output of the joystick using an exponential moving average with a momentum of 0.5.
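As a concrete illustration, the smoothing step can be written in a few lines of Python (a minimal sketch, assuming the momentum weights the previous smoothed value; variable names are ours, not from the study's codebase):

```python
def smooth_joystick(samples, momentum=0.5):
    """Exponential moving average used to reduce joystick jitter."""
    smoothed, state = [], 0.0
    for x in samples:
        # Blend the previous smoothed state with the new raw sample.
        state = momentum * state + (1.0 - momentum) * x
        smoothed.append(state)
    return smoothed

# Example: one noisy joystick axis; a value must exceed 0.25 to count as active.
axis = [0.0, 0.1, 0.9, 0.7, 0.95, 0.8]
print([round(v, 3) for v in smooth_joystick(axis)])
```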
Figure 1: sEMG Recording. Electrodes were placed on the mid-forearm of the subject starting from the mid-line on the dorsal aspect and continuing towards the thumb.

Figure 2: Ground-truth Movement Labels. Subjects manipulated a Logitech Extreme 3D Pro joystick during all gestures, providing high-confidence ground-truth labels. Direction movement was measured using the pitch and yaw axes. Modifier movements used the trigger and thumb buttons.

### Feature Extraction

We extracted features from raw sEMG data using a sliding window with a window size of 250 ms and a step size of 50 ms. From each of the 8 sensor channels of raw sEMG, we computed the Root-Mean-Square (RMS) and the Median Power Frequency after Fourier transform. Given a data vector \(x\) containing \(T\) samples, the RMS is defined as:
\[\mathrm{RMS}(x)=\sqrt{\frac{1}{T}\sum_{i=1}^{T}x_{i}^{2}}. \tag{1}\]
The Median Power Frequency is defined as the frequency value \(f_{\textsc{med}}\) that divides the integral of the Power Spectral Density (PSD) into two regions of equal area [11]:
\[\int_{0}^{f_{\textsc{med}}}\!\!\mathrm{PSD}(f)df=\int_{f_{\textsc{med}}}^{ \infty}\!\!\mathrm{PSD}(f)df=\frac{1}{2}\int_{0}^{\infty}\!\!\mathrm{PSD}(f)df. \tag{2}\]
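A minimal sketch of this per-window feature computation, assuming NumPy/SciPy (the window and step sizes in samples are our conversion of 250 ms and 50 ms at 1926 Hz; function and constant names are ours):

```python
import numpy as np
from scipy.signal import periodogram

FS = 1926               # sEMG sampling frequency (Hz)
WIN = int(0.250 * FS)   # 250 ms window, ~481 samples
STEP = int(0.050 * FS)  # 50 ms step, ~96 samples

def rms(x):
    # Eq. (1): root-mean-square of the windowed signal.
    return np.sqrt(np.mean(np.square(x)))

def median_power_frequency(x, fs=FS):
    # Eq. (2): the frequency splitting the power spectral density into equal-area halves.
    freqs, psd = periodogram(x, fs=fs)
    cumulative = np.cumsum(psd)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

def extract_features(emg):
    # emg: array of shape (n_samples, 8); returns one 16-dim feature vector per window.
    feats = []
    for start in range(0, emg.shape[0] - WIN + 1, STEP):
        window = emg[start:start + WIN]
        feats.append([f(window[:, ch])
                      for ch in range(window.shape[1])
                      for f in (rms, median_power_frequency)])
    return np.asarray(feats)
```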
### Experimental Task
Subjects performed an initial **Calibration** block, consisting of 8 repetitions of each of the 7 single gestures Up, Down, Left, Right, Pinch, Thumb, and Rest. On each calibration trial subjects were prompted to prepare for 3 seconds by a yellow screen border, then prompted to perform the gesture continuously for 2 seconds by a green border, and finally prompted to rest for 3 seconds by a red screen border. See Figure 3 for examples.
Subjects then performed alternating blocks of **Hold-Pulse** gestures (HP, one gesture component was held while the other was pulsed) and **Simultaneous-Pulse** gestures (SP, both gesture components started and stopped together).
HP blocks contained 28 trials, consisting of: \(4\cdot 2\) trials with held direction and pulsed modifier, \(2\cdot 4\) trials with held modifier and pulsed direction, 4 trials with only a held direction, 2 with only a held modifier, 4 with only a pulsed direction, and 2 with only a pulsed modifier. Thus a single HP block explored the full repertoire of combinations. Figure 4 shows an example of the user interface (UI) shown to subjects during HP blocks. A single segment on top represents a held gesture, and 4 shorter segments below represent another pulsed gesture. In some trials, only the "held" or only the "pulsed" segments were present. When a gesture trial begins, the subject's vertical cursor moves from left to right across the screen. The gesture trial can be broken into the following stages:
* The cursor starts 2 seconds away from the instruction segments; during this time, the window border is yellow to indicate that the subject should plan their actions.
* For the next 8 seconds, the cursor overlaps the active area. The window border is green to indicate that the subject should perform actions as instructed; when the cursor bar intersects a colored line segment, the subject must perform that gesture.
* For the final 2 seconds, the cursor travels past the active area, and the window border is red to indicate that the subject should rest.
Each Simultaneous-Pulse (SP) block contained 8 trials; each SP trial consisted of a pair of one direction and one modifier gesture. Figure 5 shows an example of the UI for SP blocks. Unlike the HP blocks, the horizontal instruction lines were arranged so that both gesture components are started and stopped together.
Only examples of single gestures during Calibration were used for model training; any instances where the subject accidentally performed a combination gesture during this time were excluded. SP blocks contributed only examples of combination gestures, while HP blocks contributed examples of both single and combination gestures (since there were moments when only one gesture was active).

Figure 3: UI during a calibration trial. The colored screen border instructs the subject to prepare (Yellow), then hold the gesture (Green), and then rest (Red).
Table 1 lists the order of experimental blocks performed and the number of trials in each block.
### Data Acquisition Framework
We constructed our experimental framework using the LabGraph [12] Python package. The software was developed by creating LabGraph nodes, which handle different tasks such as collecting raw data, extracting features, training a model, performing classification and smoothing, managing the user interface, governing the experimental timing, and logging as seen in Figure 6.
### Structured Labels
We defined gestures consisting of a direction component and a modifier component - this allowed us to treat these two components independently. Furthermore, we attempted to calibrate models using labeled data from only the 7 classes of single-gesture data and extrapolate to the remaining 8 classes of combination gestures.
In order to describe both single and combination gestures, we defined a structured label of the form (D, M) consisting of a direction component and a modifier component. The direction component took one of 5 values (Up, Down, Left, Right, and NoDir), while the modifier component took one of 3 values (Pinch, Thumb, and NoMod). All 15 classes of subject gesture behavior were described using these structured labels as follows:
* Gestures with only a direction component were labeled as (D, NoMod), where D was one of Up, Down, Left, or Right,
* Gestures with only a modifier component were labeled as (NoDir, M), where M was one of Pinch, or Thumb,
* Gestures with both components active simply took the form (D, M), where D was one of the four directions and M was one of the two modifiers,
* Resting data was labeled as (NoDir, NoMod).

Table 1: Experiment Structure. Subjects performed an initial calibration, demonstrating each single gesture type multiple times. Subjects then performed multiple blocks of combination gestures. HP - Hold-Pulse: one gesture was held while the other was pulsed intermittently (see Figure 4). SP - Simultaneous-Pulse: both the direction and modifier components start and end together (see Figure 5).

| Block Type | # Gestures |
| --- | --- |
| Calibration | 56 |
| HP | 28 |
| SP | 8 |
| HP | 28 |
| SP | 8 |

Figure 4: Subject UI during a Hold-Pulse trial. The gray vertical cursor scrolls from left to right; when it intersects a horizontal line segment, the subject performs that gesture. The screen border color also indicates when the subject should be active.

Figure 5: UI during a Simultaneous-Pulse (SP) trial. Similar to Hold-Pulse trials (Figure 4), but both gesture components onset together.

Figure 6: Flow diagram for the real-time data acquisition pipeline. Each circle represents a LabGraph node with a specific responsibility; arrows indicate the flow of information.
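As a small illustrative sketch of the structured label space described above (the constant names are ours, not from the study's codebase):

```python
from itertools import product

DIRECTIONS = ["Up", "Down", "Left", "Right", "NoDir"]
MODIFIERS = ["Pinch", "Thumb", "NoMod"]

# The full action space is the product of the two components: 5 * 3 = 15 classes.
LABELS = list(product(DIRECTIONS, MODIFIERS))
assert len(LABELS) == 15

# Combination ("double") gestures have both components active: 4 * 2 = 8 classes.
DOUBLES = [(d, m) for d, m in LABELS if d != "NoDir" and m != "NoMod"]
assert len(DOUBLES) == 8
```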
### Creating Synthetic Combination Gestures
In order to calibrate models using only single gesture examples, we derived a simple method of constructing synthetic combination gestures. Given a feature vector from a direction gesture \(z_{d}\) (such as (Up, NoMod)) and a feature vector from a modifier gesture \(z_{m}\) (such as (NoDir, Pinch)), we constructed a synthetic feature vector \(\tilde{z}=\frac{z_{d}+z_{m}}{2}\) representing an estimate of the features from the unseen combination gesture (such as (Up, Pinch)). This averaging strategy is one of the simplest possible approaches, and is similar to some approaches for data augmentation in the literature, such as Sample Pairing [13].
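A minimal sketch of this pairing-and-averaging construction (the array layout is our assumption about how features are stored):

```python
import numpy as np

def synthesize_doubles(direction_feats, modifier_feats):
    """Average all valid (direction, modifier) feature pairs into synthetic doubles.

    direction_feats: shape (n_d, n_features), e.g. items labeled (Up, NoMod).
    modifier_feats:  shape (n_m, n_features), e.g. items labeled (NoDir, Pinch).
    Returns shape (n_d * n_m, n_features), standing in for (Up, Pinch) items.
    """
    synthetic = []
    for z_d in direction_feats:
        for z_m in modifier_feats:
            synthetic.append((z_d + z_m) / 2.0)  # z_tilde = (z_d + z_m) / 2
    return np.asarray(synthetic)
```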
### Examining Feature Similarity
In order to explore the feature-space structure of our real and synthetic data, we constructed heatmaps showing the average similarity of items from various pairs of classes. In this analysis, we considered all 15 classes of real data and the 8 classes of synthetic double gestures, giving a heatmap of shape 23 by 23. We included all real single and combination gesture items, as well as a uniform random subset of 0.5% of all possible synthetic combination gesture items, in this analysis.
We used a radial basis function (RBF) kernel to compute similarities. Given a pair of feature vectors \(z_{1}\) from class \(C_{1}\) and \(z_{2}\) from class \(C_{2}\), the RBF kernel similarity was computed as
\[K_{\mathrm{RBF}}(z_{1},z_{2})=\exp(-\gamma\|z_{1}-z_{2}\|^{2}), \tag{3}\]
where the length scale \(\gamma\) determines how quickly similarity should decrease as items move farther apart. We set the length scale \(\gamma\) using the so-called "median heuristic": within each subject, we set \(\gamma=1/H\), where \(H\) is the median of squared Euclidean distances between any pair of feature vectors. This heuristic tends to select a length scale that gives a good contrast between similar and dissimilar items [14].
To compute the \((i,j)\) entry of the heatmap representing the similarity \(S_{\mathrm{RBF}}(C_{1},C_{2})\) between two classes \(C_{1}\) and \(C_{2}\), we averaged the RBF kernel similarity over all pairs of items in those classes,
\[S_{\mathrm{RBF}}(C_{1},C_{2})=\frac{1}{|C_{1}|}\frac{1}{|C_{2}|} \sum_{z_{1}\in C_{1}}\sum_{z_{2}\in C_{2}}K_{\mathrm{RBF}}(z_{1},z_{2}). \tag{4}\]
After computing the similarity heatmap for each subject, we then averaged across subjects to obtain a single final heatmap. Note that our similarity metric is closely related to commonly studied graph cut measures such as the normalized cut [15]. The results of this feature similarity analysis are discussed in Section 3.1.
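A minimal per-subject sketch of Eqs. (3)-(4) with the median heuristic, assuming Scikit-Learn (the input data structure is our assumption):

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances, rbf_kernel

def class_similarity(features_by_class):
    """Average pairwise RBF similarity between classes (Eqs. (3)-(4)), per subject."""
    all_feats = np.vstack(list(features_by_class.values()))
    # Median heuristic: gamma = 1 / median squared pairwise Euclidean distance.
    sq = euclidean_distances(all_feats, squared=True)
    gamma = 1.0 / np.median(sq[np.triu_indices_from(sq, k=1)])
    classes = list(features_by_class)
    heatmap = np.zeros((len(classes), len(classes)))
    for i, c1 in enumerate(classes):
        for j, c2 in enumerate(classes):
            # Mean kernel value over all item pairs from the two classes.
            heatmap[i, j] = rbf_kernel(features_by_class[c1],
                                       features_by_class[c2], gamma=gamma).mean()
    return classes, heatmap
```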
### Experiment 1: Model Selection
We performed a computational experiment to determine which choice of the model architecture and classifier algorithm would perform best in conjunction with our strategy for synthesizing combination gestures. The results of this experiment are presented in Section 3.2.
We designed all sEMG signal models to take one input vector (the features \(z\) of a combination gesture) and produce two output vectors (the estimated posterior probability distribution over directions \(p(y_{\textsc{dir}}|z)\), and the estimated posterior over modifiers \(p(y_{\textsc{mod}}|z)\)).
In each run of the experiment, we trained and tested a model using data from a single subject. The subject's data was divided into three parts; the Calibration data (which contained only single gestures), a test set (the final HP and SP blocks, which contained both single and combination gestures), and a special set (the other HP and SP blocks). For all models, the test set was kept constant, and we considered three cases for how to handle the training set; we trained a **lower bound** model using only the single gestures from Calibration, an **upper bound** model on the single gestures from Calibration plus additional real single and combination gestures from the special set, and an **augmented** model using the real single gestures from Calibration plus synthetic combination gestures as described in Section 2.8. We repeated this procedure 3 times for each subject using a different random seed.
We considered two basic model architectures, each designed around the principle that the two gesture components (direction and modifier) may be treated independently. The first approach, which we refer to as **Parallel**, is shown in Figure 7. Here, the incoming data item is simultaneously given as input to two independent classifiers: a 5-way classifier that was trained using only the direction component of the labels, and a 3-way classifier that was trained using only the modifier component of the labels. Consider the 3-way classifier; since it used only the modifier component of the labels, it observed the label values Pinch, Thumb, and NoMod. When it observed a NoMod value, this could have come from any of the four direction-only gestures, or the Rest gesture. Analogously, when the 5-way classifier observed a training item whose direction component was NoDir, this item could have come from one of the two modifier-only gestures, or the Rest gesture.
In Figure 8, we show the **Hierarchical** model architecture considered. Here, we also have two independent paths. The direction path begins with a binary classifier, predicting \(p(y_{\textsc{dir}}\neq\emptyset|x)\) to answer the question: "Is there a direction gesture present?" It then applies a 4-way classifier to obtain the conditional probabilities of each active gesture: \(p(y_{\textsc{dir}}|y_{\textsc{dir}}\neq\emptyset)\). The same two stages are used for modifiers, with a binary classifier predicting \(p(y_{\textsc{mod}}\neq\emptyset|x)\), followed by a 2-way classifier to obtain the conditional probabilities of each active gesture: \(p(y_{\textsc{mod}}|y_{\textsc{mod}}\neq\emptyset)\).
We considered three choices of classification algorithm: Logistic Regression (LogR) was used as a standard method that learns linear decision boundaries, Multi-layer Perceptron (MLP) was used as a standard method that learns non-linear decision boundaries, and Random Forest (RF) was used as a representative "rule-based" classifier. In each individual experiment, all model components used the same classifier algorithm. All classifiers were implemented in Python using Scikit-Learn [16].
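A minimal sketch of the Parallel architecture (classifier hyperparameters here are our assumptions; the study's exact settings are not stated in this section):

```python
from sklearn.neural_network import MLPClassifier

class ParallelGestureModel:
    """Two independent classifiers over the direction and modifier label components."""

    def __init__(self):
        self.dir_clf = MLPClassifier(max_iter=1000)  # 5-way: Up/Down/Left/Right/NoDir
        self.mod_clf = MLPClassifier(max_iter=1000)  # 3-way: Pinch/Thumb/NoMod

    def fit(self, z, y_dir, y_mod):
        # Each component is trained on the same features but its own label component.
        self.dir_clf.fit(z, y_dir)
        self.mod_clf.fit(z, y_mod)
        return self

    def predict(self, z):
        # A prediction is always exactly one direction plus one modifier component.
        return list(zip(self.dir_clf.predict(z), self.mod_clf.predict(z)))
```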
Models were evaluated using balanced accuracy, which is the average accuracy in each class. We further separated performance into the balanced accuracy on single gestures ("Singles Acc", i.e. the 7 classes for which real data was available from Calibration), the balanced accuracy on double gestures ("Doubles Acc", i.e. the 8 unseen gesture classes in which both label components are active), and the overall balanced accuracy ("Overall Acc").
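Balanced accuracy is the unweighted mean of per-class recall and is available directly in Scikit-Learn (a usage sketch with hypothetical labels; tuples are encoded as strings for the metric):

```python
from sklearn.metrics import balanced_accuracy_score

# Every gesture class contributes equally, regardless of its number of test items.
y_true = [("Up", "NoMod"), ("Up", "Pinch"), ("NoDir", "Pinch"), ("Up", "Pinch")]
y_pred = [("Up", "NoMod"), ("Up", "NoMod"), ("NoDir", "Pinch"), ("Up", "Pinch")]
labels_true = [f"{d}|{m}" for d, m in y_true]
labels_pred = [f"{d}|{m}" for d, m in y_pred]
print(balanced_accuracy_score(labels_true, labels_pred))
```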
### Experiment 2: Selecting Synthetic Combination Gestures
Based on the results of Experiment 1, we selected the "Parallel" model architecture with MLP classifier algorithm for further analysis. Our next experiment explored different ways to use subsets of all possible synthetic combination gestures. The results of this experiment are presented in Section 3.3.
As expected, we observed during the first experiment that adding synthetic combination gestures was necessary in order for models to successfully classify combination gestures at test time. In Experiment 1, we created synthetic double gestures by naively combining all possible valid pairs (consisting of features from one direction-only gesture and features from one modifier-only gesture); this resulted in a training dataset where the majority of data was synthetic. We observed that the augmented model, and even some of the upper-bound models, experienced a trade-off between the performance on single gestures and the performance on double gestures. We hypothesize that this trade-off might be partly due to the over-abundance of synthetic data; therefore, in our next experiment, we explored ways of reducing the amount of synthetic data to try to improve this trade-off.
Consider the case of constructing synthetic combination feature vectors \(\tilde{z}\) from two classes of single gestures \(C_{d}\) and \(C_{m}\) (e.g. (Up, NoMod) and (NoDir, Pinch)). In Experiment 1, when constructing all possible pairs, we create all \(|C_{d}|\,|C_{m}|\) possible items. Now in Experiment 2, we considered two main approaches: subsetting the two single gesture classes _first_, and then creating all possible pairs (methods prefixed by _subsetInput_), or creating all possible pairs and then selecting a subset _afterwards_ (methods prefixed by _subset_).

Figure 7: **Parallel** Model Architecture. Input data is processed by two independent components. The first component predicts probabilities among 5 classes representing directions: **Up**, **Down**, **Left**, **Right** and "NoDirection" (\(\emptyset\)). The second component predicts probabilities among 3 modifier classes representing: **Thumb**, **Pinch**, and "NoModifier" (\(\emptyset\)).

Figure 8: **Hierarchical** Model Architecture. Input data is processed in two independent components. In the first component, a binary classifier determines the probability that a direction is absent (\(\emptyset\)); then, a 4-way classifier is used to determine the conditional probability over directions: **Up**, **Down**, **Left**, **Right**, given that a direction is present. Analogous processing occurs in the modifier component.
For each of these approaches, we considered three ways of subsetting items: taking a representative sample by sampling items uniformly at random (suffixed by _uniform_), taking a tightly clustered sample by selecting items closest to the mean (suffixed by _near_mean_), and taking a diverse sample by selecting items whose quantiles of distance to the mean were evenly spaced (e.g. to take 4 points, we would take the points whose quantiles of distance to the mean were \(q=0.0,0.33,0.66,1.0\)) (suffixed by _spaced_quantiles_).
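A minimal sketch of the three subsetting strategies (function names mirror the suffixes above; `items` is assumed to be a NumPy array of feature vectors and `rng` a NumPy random generator):

```python
import numpy as np

def subset_uniform(items, frac, rng):
    # Representative sample: a uniform random subset.
    n = max(1, int(round(frac * len(items))))
    return items[rng.choice(len(items), size=n, replace=False)]

def subset_near_mean(items, frac):
    # Tightly clustered sample: the items closest to the class mean.
    n = max(1, int(round(frac * len(items))))
    dists = np.linalg.norm(items - items.mean(axis=0), axis=1)
    return items[np.argsort(dists)[:n]]

def subset_spaced_quantiles(items, frac):
    # Diverse sample: items at evenly spaced quantiles of distance to the mean.
    n = max(1, int(round(frac * len(items))))
    order = np.argsort(np.linalg.norm(items - items.mean(axis=0), axis=1))
    return items[order[np.round(np.linspace(0, len(items) - 1, n)).astype(int)]]
```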
We included the same lower bound and upper bound models in this experiment. The "baseline" for this experiment was the performance of an augmented model using all possible synthetic items. As with Experiment 1, models were evaluated for their balanced accuracy on single gesture classes, double gesture classes, or overall across all classes, and experimental runs were repeated for 3 random seeds for each subject.
### Experiment 3: Single Gesture Data Augmentation
In Experiment 2, we found that creating all possible synthetic combinations and then taking a uniform random subset of 10% nearly preserved performance while greatly reducing the amount of synthetic data used during training. However, we still observed a trade-off between the performance on single gestures and the performance on combination gestures. Thus, in Experiment 3, we explored methods for adding augmented single gesture data in the hope of improving this trade-off between accuracy on single gestures and double gestures. The results of this experiment are presented in Section 3.4.
Based on Experiment 2, we selected a single setting for further exploration: a uniform subset of 10% after creating all synthetic pairs. We then considered three basic methods for creating augmented single gesture features from real single gesture features. First, we considered adding Gaussian noise to real items (_add-gaussian-*_), where noise \(\epsilon\) was sampled \(\epsilon\sim\mathcal{N}(0,\sigma^{2}I)\) with \(\sigma\) values of 0.3, 0.4, or 0.5. Next, we considered fitting a Gaussian mixture model (GMM) to the data and sampling points from this estimated distribution (_fit-gmm-*_); we considered GMMs with 1, 5, or 10 components. Lastly, we considered fitting the distribution of features using a kernel density estimate (KDE), and sampling points from this estimated distribution (_fit-kde_). For this KDE approach, we used a Gaussian kernel with a fixed lengthscale of 0.01. Fitting a GMM and fitting a KDE were both done using Scikit-Learn [16].
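A minimal sketch of the three augmentation methods, assuming Scikit-Learn (function names are ours; `rng` is a NumPy random generator):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

def augment_gaussian(feats, n_new, sigma, rng):
    # "add-gaussian-*": add isotropic Gaussian noise to randomly drawn real items.
    base = feats[rng.choice(len(feats), size=n_new)]
    return base + rng.normal(scale=sigma, size=base.shape)

def augment_gmm(feats, n_new, n_components):
    # "fit-gmm-*": fit a Gaussian mixture model and sample new items from it.
    samples, _ = GaussianMixture(n_components=n_components).fit(feats).sample(n_new)
    return samples

def augment_kde(feats, n_new):
    # "fit-kde": Gaussian-kernel density estimate with bandwidth 0.01, then sample.
    return KernelDensity(kernel="gaussian", bandwidth=0.01).fit(feats).sample(n_new)
```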
We again considered the lower bound and upper bound models as described in the previous two experiments. The baseline for each part of Experiment 3 was the performance of a model trained with synthetic combination gestures and no augmented single gestures.
## 3 Results
### Examining Feature Similarity
In Figure 9, we show the average pairwise RBF kernel similarity between the features of different gesture classes. These similarities were computed using Eq. 4, as described in Section 2.9. The dotted guidelines on the heatmap indicate distinct regions of interest (comparison between real singles, real combinations, and synthetic combinations).
Note that, for an individual pair of feature vectors, the RBF kernel described in Eq. 3 takes values ranging from 0 when items have very dissimilar features, to 1 when items have identical features. The average pairwise similarity between classes described in Eq. 4 therefore also takes values in the same range. Values on the diagonal of the heatmap represent the similarity of items within a class and can be viewed as a measure of the spread of a certain class in feature space.
Before interpreting the results we observed, it is useful to consider what we expect to see in this similarity heatmap in an ideal case, where subjects performed gestures very precisely (i.e. every repetition of a certain gesture is performed the same way), and we chose an ideal feature space that accurately represents the structure of the raw data. In this case, we may hope to observe that the similarity heatmap has high values on the diagonal, indicating that items belonging to the same class have very similar features. For different classes of single gestures (such as Up and Pinch), if we believe the underlying movements are distinct, we may hope to observe that the feature similarity is low. When comparing a single gesture (such as Up) to a combination gesture it belongs to (such as (Up, Pinch)), it is difficult to make _a priori_ predictions about the similarity, but we may hope that these class pairs are closer than other class pairs that share no gesture components. Finally, if we chose a suitable method for creating synthetic combination gestures, then we may hope to observe that their features are highly similar to the matching real classes. This would appear as a diagonal line in the heatmap, in the region comparing real and synthetic items.
In Figure 9, we observed many of the expected trends. The main diagonal is stronger than the off-diagonal entries, indicating that each class is relatively tightly clustered. In the middle-left section, comparing real double gestures and real single gestures, we observe that (D, M) is highly similar to (D, NoMod) for each direction gesture D, but not very similar to the modifier component M; this
indicates that the feature vectors are more strongly influenced by their direction component. We observe a similar trend in the bottom-left section, comparing synthetic doubles to real singles. In the bottom-middle section, comparing synthetic doubles to real doubles, we see some similarity between the expected class pairings, such as synthetic (Up, M) and real (Up, M), but we also see a similarity between unexpected class pairings, such as synthetic (Up, M) and real (Left, M). In the bottom-right section, comparing synthetic doubles to synthetic doubles, we again see that the direction component drives the structure of the feature vectors; as a result, (D, M1) and (D, M2) are similar for any modifiers M1 and M2.
### Experiment 1: Model Selection
In Figure 10, we show the results of Experiment 1, in which we varied the choice of model architecture and classification algorithm. See Section 2.10 for a detailed explanation of this experiment. The top panel of Figure 10 shows balanced accuracy on single gestures only, while the middle panel shows balanced accuracy on double gestures only, and the bottom panel shows overall balanced accuracy for all gesture classes.
We observed the best trade-off between accuracy on single gestures and double gestures using the MLP classifier with the Parallel architecture. Although the Random Forest classifier performed best for double gesture classification, it performed significantly worse than the other methods on single gesture classification. Since the Parallel architecture performed better for double gesture classification with only a small trade-off in single gesture classification, we selected the MLP classifier with the Parallel architecture for further exploration in the next two experiments.

Figure 9: Feature Similarity Heatmap. Using pre-computed features from real single gestures, real combination gestures, and synthetic combination gestures, we computed the average pairwise similarity \(S_{\text{RBF}}(C_{1},C_{2})\) between pairs of classes according to Eq. 4. After computing values within each subject, we averaged across subjects to obtain a single heatmap.
### Experiment 2: Selecting Synthetic Combination Gestures
Using the best classifier and architecture from Section 3.2 (MLP with the Parallel architecture), we explored methods to subset synthetic double gestures, as well as the number of synthetic double gestures to use, in this experiment. See Section 2.11 for further details.
In Figure 11, we show the results of this experiment. As before, the top panel shows balanced accuracy on single gestures, while the middle panel shows balanced accuracy on double gestures, and the bottom panel shows balanced accuracy overall. In general, we found that sampling diverse items (using *_subset_spaced_quantiles_ methods) or sampling items near the mean (using *_subset_near_mean_ methods) had an overly strong effect and resulted in a trade-off; the resulting models experienced a large decrease in performance on double gestures and a large increase in performance on single gestures. We also found that the model appeared to learn useful information from the whole set of synthetic double gestures, not just the ones near the mean. We found that taking a uniform (and therefore representative) subset gave a reasonable approximation of the same performance, but helped increase training speed and helped reduce the excess of synthetic data in our training set.

Figure 10: Experiment 1: balanced accuracy using various classifier algorithms (_LogR_ - logistic regression, _MLP_ - multi-layer perceptron, and _RF_ - Random Forest) and model architectures (_Parallel_ and _Hierarchical_). Top - performance on single gesture classes; Middle - performance on double gesture classes; Bottom - performance on all gesture classes. _Lower Bound_ model is trained on Calibration data only; _Synthetic_ model is trained on Calibration data with synthetic double gestures (using 100% of the generated doubles); _Upper Bound_ model is trained on Calibration, HP1, HP2, SP1, and SP2 data. All models are tested on HP3 and SP3 data. Each dot represents one subject and one random seed.

Figure 11: Experiment 2: balanced accuracy using the _MLP_ classifier and _Parallel_ architecture with various strategies to select subsets of synthetic double gestures. Top - performance on single gesture classes; Middle - performance on double gesture classes; Bottom - performance on all gesture classes. _Lower Bound_ model trained on Calibration data only; _Upper Bound_ model trained on Calibration, HP1, HP2, SP1, and SP2 data; _Baseline_ model trained on Calibration and all synthetic data; other models trained on Calibration and subsets of synthetic data (\(f\in\{0.001,0.005,0.01,0.05,0.1,0.25,0.5\}\) represents the fraction of synthetic data used). All models are tested on HP3 and SP3 data. Each dot represents one subject and one random seed.
### Experiment 3: Single Gesture Data Augmentation
In Section 3.3, we explored the effect of using subsets of synthetic double gestures. Although we found that it was possible to roughly maintain model performance while reducing the number of synthetic items used, we still observed an undesirable trade-off between model performance on single gestures and double gestures. Roughly speaking, adding more (synthetic) double gestures to the training set causes the model to achieve higher accuracy on double gestures at the cost of achieving lower accuracy on single gestures. We hypothesize that this trade-off may be due to the imbalance between the number of single and double gestures in the training set, and we attempt to resolve it by adding additional single gestures to the training set using data augmentation. As described previously, we considered several methods for creating augmented single gestures. Figure 12 shows the performance when the training set includes a subset of synthetic double gestures plus a set of augmented single gestures. The subset of synthetic doubles for Figure 12 was taken using the uniform random strategy, keeping 10% of possible items for each class.
We observe that, even though we managed to increase the accuracy on single gestures with data augmentation, this came at the cost of losing model performance on double gesture classification. Figure 12 shows that the GMM methods (_fit-gmm-1_, _fit-gmm-5_ and _fit-gmm-10_) with 10% of synthetic doubles gave us the best trade-off between the gain of accuracy on single gestures and the loss of accuracy on double gestures. We also experimented with adding augmented single gestures alongside all possible synthetic double gestures; the trends in performance in this scenario were not substantially different from those shown in Figure 12.

## 4 Discussion

In this study, we used a problem transformation approach to increase the expressivity of sEMG-based gesture recognition, together with the averaging of single gesture feature vectors to create synthetic combinations. Given the novel nature of this approach, we report on the performance of different classifiers, and methods of selecting synthetic data, to guide future developments in the field of gesture expressivity.
### Model Selection for Gesture Expressivity
Prior approaches to multi-label classification have primarily focused on the generation of composite gestures from individual components, such as the construction of a fist gesture from the collective flexion of individual digits [9, 10], or the classification of multiple axes of action of a single joint [5, 6]. In contrast, we focus on combining gestures that could be used for joint actions, leveraging knowledge of limb biomechanics to construct subgroups of gestures that are mutually exclusive in their actions. To do so, we made use of a problem transformation approach to train individual classifiers for each subgroup of gestures. Each subgroup of gestures can be interpreted as a discrete action set that can be combined with others to form joint actions. We found that there was no advantage to the hierarchical model structure, in which the presence of a gesture from an action set was determined prior to selecting an action from that set, compared to when such a determination was made at the same time an action was selected. We also examined the performance of logistic regression, multi-layer perceptron, and random forest classifier algorithms within this model architecture. We observed better performance of the multi-layer perceptron compared to the other models when considering the trade-off between the ability to classify single gestures and double gestures. This result likely reflects the highly non-linear boundaries between classes. Overall, we found that this problem transformation approach yielded promising results, and allowed us to increase the expressivity of our gesture set while maintaining a relatively short model calibration time and avoiding physiologically-implausible predictions.
In our experiments, we included a lower-bound model trained on only real single gestures, as well as an upper-bound model trained using real single gestures and real double gestures. We found that, for a sufficiently flexible classifier algorithm such as MLP, the upper-bound model did _not_ experience a trade-off between single and double gesture performance. This may indicate that generating realistic and informative synthetic data is a key direction for future research.
### Use of Synthetic and Augmented Data
The need to collect training data on all possible combinations of single gestures represents a limiting factor in the ability to increase the expressivity of gestures through combination, as the calibration time needed quickly becomes laborious. We therefore imposed a restriction during our model development that combinations of gestures were to be omitted from the training set. As seen in our lower-bound models, classification of combination gestures using only single gestures yields poor (near-chance) results. We, therefore, tested the use of synthetic combination gestures, where each synthetic combination was created by averaging the feature vectors of a real direction-only gesture and a real modifier-only gesture. Our basic approach for creating synthetic combination gestures was to generate all possible valid pairs. While this approach yielded significant improvements in the classification performance of double gestures, it also resulted in a trade-off between model performance on single and double gesture classes. We hypothesized that this was due to the extreme over-abundance of synthetic data when including all valid pairs. We therefore also explored methods of subsetting these synthetic combination gestures, and found that using a uniform random subset of 10% of generated items preserved most of the performance gains while greatly reducing the amount of synthetic training data. However, even with the use of only 10% of generated combination items, a large class imbalance between synthetic combinations and real single gestures remained. We, therefore, tested whether the augmentation of single items using different noise models would improve the decrement in singles accuracy due to the addition of doubles. We did not find this to be the case: the augmentation of singles data to reduce the class imbalance did not have a profound effect on increasing singles accuracy.
### Limitations and Future Directions
The experiments conducted in the current study included only a limited number of gestures, and relied on a joystick for obtaining ground truth labels. As the number of direction gesture classes and the number of modifier gesture classes increases, the number of possible combination gestures would increase combinatorially. Thus, successfully training a model that can extrapolate from the sum of component gestures to the product set of all possible gesture combinations would yield a greater potential benefit, in terms of the amount of time saved during model calibration. However, it remains to be seen whether the current approach will scale to more single gesture classes. The current approach could also be scaled up by using "chords" of 3 or more gestures; this approach would be practically limited by the need for multiple sets of biomechanically independent gestures and the need for suitable hardware to obtain ground truth labels, and it may offer diminishing utility in a real-world application due to the high skill requirements of using
such multi-gesture chords.
Our experiments focused on the use of two-component model architectures. We also considered using a single model over all possible combination classes. One key drawback is that such a model does not have a well-defined decision rule for unseen classes; thus it cannot be used in the "lower-bound" control scenario. We experimented with such a single model using synthetic data and found that it did not outperform the proposed parallel and hierarchical model architectures.
In the current study, based on previous observations of the significant impact of label noise due to subject task non-adherence, we used a set of gestures that was well captured using a joystick. This gave us high confidence in the set of labels, but some of the gestures used may have relatively weaker signals in the distal-forearm sEMG sensor setup that was used. In particular, our direction gestures consisted of gross wrist movements, which we expect to be well measured by distal-forearm sEMG, but our modifier gestures used fine-grained finger movements, which rely partially on hand-intrinsic muscle activity that may not be captured as well. Future work may explore alternative, more flexible hardware for ground-truth labeling, so that we may ensure that all gestures used are well captured by the sEMG sensor setup.
Another important direction for future research is to evaluate other methods for creating synthetic training data, including other simple functions and even functions with learnable parameters that may be pre-trained on a separate population of subjects.
Finally, an area of potential improvement is the use of deep neural networks. These models offer two key possible benefits. First, by using classifier models with greater capacity, we may be able to make better use of a large set of synthetic training data, or even train population models to extrapolate to unseen subjects. Second, a gradient-based training scheme may lend itself well to several alternative training schemes. For example, we may consider a two-stage approach, in which the model is first pre-trained on a set of real single gestures, and then fine-tuned on a set of synthetic combination gestures. Alternatively, we may use a multi-term objective function that treats real and synthetic data differently.
Despite these limitations, our results indicate that the novel problem transformation approach tested using a parallel model architecture in combination with a non-linear classifier, and restricted synthetic data generation holds potential for increasing the expressivity of sEMG-based gestures with short calibration time.
### Applications of Proposed Method
Current approaches to control of multi-joint robotic or prosthetic limbs rely on a "direct control" approach, in which a subject must learn to simultaneously manipulate each degree of freedom. This style of control yields expressivity, but requires significant user training and expertise. The proposed methodology has the potential to provide the same expressivity while greatly reducing the burden of training for the user. This approach also offers a framework for greatly increasing the expressivity of human-computer interaction; the use of combination gestures allows for a combinatorial increase in decoded user actions, similar to the use of the "shift," "control," or "caps lock" keys on a keyboard. The proposed method may also facilitate more easily incorporating new gestures on-the-fly, since the model is explicitly designed to extrapolate to unseen combinations without exhaustive supervision.
## 5 Ethical Statement
All protocols were conducted in conformance with the Declaration of Helsinki and were approved by the Institutional Review Board of Northeastern University (IRB number 15-10-22). All subjects provided informed written consent before participating.
## 6 Acknowledgements
Funding for this research was provided by Meta Reality Labs Research.
|
2303.18007 | How to measure research performance of single scientists? A proposal for
an index based on scientific prizes: The Prize Winner Index (PWI) | In this study, we propose a new index for measuring excellence in science
which is based on collaborations (co-authorship distances) in science. The
index is based on the Erd\H{o}s number - a number that was introduced several
years ago. We propose to focus with the new index on laureates of prestigious
prizes in a certain field and to measure co-authorship distances between the
laureates and other scientists. To exemplify and explain our proposal, we
computed the proposed index in the field of quantitative science studies
(PWIPM). The Derek de Solla Price Memorial Award (Price Medal, PM) is awarded
to outstanding scientists in the field. We tested the convergent validity of
the PWIPM. We were interested whether the indicator is related to an
established bibliometric indicator: P(top 10%). The results show that the
coefficients for the correlation between PWIPM and P(top 10%) are high (in
cases when a sufficient number of papers have been considered for a reliable
assessment of performance). Therefore, measured by an established indicator for
research excellence, the new PWI indicator seems to be convergently valid and,
therefore, might be a possible alternative for established (bibliometric)
indicators - with a focus on prizes. | Lutz Bornmann, Robin Haunschild | 2023-03-31T12:27:33Z | http://arxiv.org/abs/2303.18007v1 | # How to measure research performance of single scientists? A proposal for an index based on scientific prizes: The Prize Winner Index (PWI)
### Abstract
In this study, we propose a new index for measuring excellence in science which is based on collaborations (co-authorship distances) in science. The index is based on the Erdos number - a number that was introduced several years ago. We propose to focus with the new index on laureates of prestigious prizes in a certain field and to measure co-authorship distances between the laureates and other scientists. To exemplify and explain our proposal, we computed the proposed index in the field of quantitative science studies (\(\text{PWI}_{\text{PM}}\)). The Derek de Solla Price Memorial Award (Price Medal, PM) is awarded to outstanding scientists in the field. We tested the convergent validity of the \(\text{PWI}_{\text{PM}}\). We were interested whether the indicator is related to an established bibliometric indicator: P(top 10%). The results show that the coefficients for the correlation between \(\text{PWI}_{\text{PM}}\) and P(top 10%) are high (in cases when a sufficient number of papers have been considered for a reliable assessment of performance). Therefore, measured by an established indicator for research excellence, the new PWI indicator seems to be convergently valid and, therefore, might be a possible alternative for established (bibliometric) indicators - with a focus on prizes.
## Introduction
Research without research evaluation is not imaginable; evaluations are an important and integral part of nearly all activities in science with the objective of research improvement and/or monitoring (Moed & Halevi, 2015). Research evaluation can be undertaken in a qualitative form with peer review processes; it is also very popular to evaluate research quantitatively using various indicators (Moed, 2017). An indicator is usually defined as a "measurable quantity that 'stands in' or substitutes for something less readily measurable and is presumed to associate with it without directly measuring it" (Wilsdon et al., 2015, p. 5). For example, citations may measure impact of research although impact may happen in a way that is not reflected in citations. The most frequently used indicators in research evaluation processes are bibliometric indicators (Szomszor et al., 2021). The results by Hammarfelt, Rushforth, and de Rijcke (2020) show, for example, that scientists prefer to assess candidates in peer review processes based on bibliometric indicators. One important (and understandable) reason for the popularity of the indicators is the assessing scientists' lack of competence in the candidates' research areas. Besides simple bibliometric indicators such as the Journal Impact Factor, advanced indicators such as the disruption index or percentile-based field-normalized indicators are in use in today's research evaluation processes (Bornmann, 2019; Wang & Barabasi, 2021).
With the h index, a very popular index for assessing the performance of single scientists has been introduced by Hirsch (2005), which combines publication output and citation impact in a single number. Besides the h index, many bibliometric indicators have been proposed for the use on the single researcher level in recent years, especially variants of the initial h index (Bornmann, Mutz, Hug, & Daniel, 2011). Some years ago, for example, Thomson Reuters (the former provider of the Web of Science, WoS, database; it is Clarivate now) introduced the Highly Cited Researchers database (see [https://recognition.webofscience.com/awards/highly-cited](https://recognition.webofscience.com/awards/highly-cited)) including researchers with the most papers belonging to the 1% most frequently-cited
papers (Clarivate Analytics, 2021). Recently, Clarivate introduced Beamplots as an alternative to the popular h index to measure citation impact field-normalized and over the complete range of a scientist's publication years (Bornmann & Marx, 2014; Szomszor & Pendlebury, 2021).
Although bibliometrics is frequently employed in the evaluation of single scientists, its use has often been criticized. According to Sunahara, Perc, and Ribeiro (2021), the use of bibliometrics "exerts enormous pressure on scholars (particularly on young scientists...) for publishing in large quantities, in prestigious journals, and developing highly cited research". In addition, many bibliometric indicators have been proposed in the past, and there is no consensus on which indicator should be used (as a standard). For example, performance can be measured size-dependently or size-independently. On the other hand, the comparison of various h index variants by Bornmann et al. (2011) reveals that the variants measure performance similarly. Thus, there seems to be great redundancy between bibliometric indicators (on the single scientists' level).
In this study, we take up the critique on bibliometrics and leave the usual way of measuring performance by using citations. We propose an alternative measure that is based on collaborations (co-authorships) in science. The alternative is oriented towards the Erdos number - a proposal for measuring performance in mathematics. The Erdos number reflects the "collaborative distance" between the mathematician Paul Erdos (1913-1996) and other scientists (authors). Paul Erdos was an outstanding mathematician who published in collaboration with many other mathematicians. Here, we use the basic idea of the Erdos number - co-authorship distances - to propose a new performance metric - called Prize Winner Index (PWI) - for single scientists. We propose to focus with the new index on laureates of prestigious prizes in a certain discipline and to measure co-authorship distances between the laureates and other scientists.
To exemplify and explain our proposal, we computed the proposed index for the field of quantitative science studies. We selected this field, since we have both been active in it for many years and are therefore in the position to interpret the empirical results. The Derek de Solla Price Memorial Award (Price Medal, PM) is awarded to outstanding scientists in quantitative science studies. We name the PWI for the field of quantitative science studies PWI\({}_{\text{PM}}\). We (RH) developed an R package (PWIR, Haunschild, 2022) that can be used to calculate the PWI for all authors in a dataset (e.g., from the Web of Science) with the names of the laureates as input by the user. The laureates of the PM are used as a default in the R package. Datasets from other databases could be converted into the supported format(s).
### Prize Winner Index (PWI)
Paul Erdos was one of the most important mathematicians and received many prizes. He was enormously productive (in terms of his number of publications) and collaborated frequently with many different scientists. According to Glanzel and Abdulhayoglu (2018), it is not clear who initially proposed the Erdos number, but an explanation of it can already be found in Goffman (1969). A lot of information around the number and its calculation has been published on the website of the "Erdos Number Project" by Jerry Grossman, Professor of Mathematics, Emeritus at Oakland University.1 The Erdos number is defined as "the shortest path connecting an author with Erdos in the complete co-author network created by Paul Erdos and his \(n\)-th order co-authors, which is iteratively generated by always adding the co-authors of co-authors, who are not already members of the network" (Glanzel & Abdulhayoglu, 2018, p. 534). Based on this definition, Paul Erdos has an Erdos number of 0; his co-authors have an Erdos number of 1. All co-authors of Paul Erdos's co-authors (i.e., co-authors who have not published directly with Paul Erdos) have an Erdos number of 2. Higher Erdos numbers follow for further co-authors of co-authors. Following Glanzel and Abdulhayoglu (2018), the mean Erdos number of mathematicians is about 5.
In this study, we propose - following the basic Erdos number definition - the PWI, which counts co-authorships with prize winners (and their co-authors), usually in a certain discipline. The PWI is defined as follows: The co-authorship distance from prize winners (\(d_{pw}\), 0 for prize winners themselves, 1 for co-authors of prize winners, 2 for co-authors of co-authors of prize winners) is determined for each paper \(p\) and each prize winner \(w\). The formula \(1/2^{d_{pw}}\) provides distance-based weights that halve with each additional co-authorship step; e.g., 1, \(\nicefrac{{1}}{{2}}\), \(\nicefrac{{1}}{{4}}\), and \(\nicefrac{{1}}{{8}}\) are obtained for the co-authorship distances 0, 1, 2, and 3. The weights are summed up for each author over each paper and prize winner in the dataset. The PWI can be written as:
\[\text{PWI}=\sum_{p}\sum_{w}\frac{1}{2^{d_{pw}}}\]
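To make the computation concrete, the following is a minimal Python sketch of this index. It assumes a simplified reading in which \(d_{pw}\) is the shortest distance between an author and a prize winner in the global co-authorship graph built from the dataset; the PWIR package may determine the per-paper distances differently, and all function and variable names in the sketch are illustrative rather than part of PWIR.

```python
from collections import defaultdict, deque

def pwi(papers, winners):
    """PWI under a simplified reading: each of an author's papers contributes
    1/2**d(a, w) for every prize winner w, where d(a, w) is the shortest
    co-authorship distance between author a and winner w (unreachable -> 0)."""
    graph = defaultdict(set)        # co-authorship graph
    n_papers = defaultdict(int)     # papers per author
    for authors in papers:          # papers: list of author-name lists
        for a in authors:
            n_papers[a] += 1
            graph[a].update(set(authors) - {a})
    weights = defaultdict(float)    # per author: sum over winners of 2**-d
    for w in winners:
        if w not in graph:
            continue                # winner has no papers in the dataset
        dist, queue = {w: 0}, deque([w])
        while queue:                # breadth-first search from the winner
            a = queue.popleft()
            for b in graph[a]:
                if b not in dist:
                    dist[b] = dist[a] + 1
                    queue.append(b)
        for a, d in dist.items():
            weights[a] += 0.5 ** d
    return {a: n_papers[a] * weights.get(a, 0.0) for a in n_papers}
```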
The PWI is a size-dependent indicator. Scientists who have authored more papers are more likely to have a higher PWI. In addition to the PWI, the R package PWIR also provides the number of papers and the number of co-authors of each author in the dataset. This enables the user to easily compute relative PWI variants: PWI per published paper and PWI per co-author. To match as many author name synonyms as possible, the R package PWIR uses only the last name and the initials of the first name (including other given names), although this might cause homonyms to be merged together. Sensible author name disambiguation has to be carried out beforehand by altering the author names (e.g., by adding further given names). Also, merging occurrences of synonyms requires preprocessing of the downloaded files.
To illustrate the calculation of the PWI, we used a publication set with two papers from the WoS. The exemplary prize winner is L. Waltman. To retrieve the two papers, we undertook a topic search for "gender differences in scientific careers" in the WoS. As of October 18, 2022, this search delivers two publications. The author lists of these two publications are shown in Table 1.
The author P. Mahlck is not connected to L. Waltman. Thus, this author has a PWI value of zero. The author L. Waltman occurs once with a co-author distance of zero (\(d_{pw}=0\)): \(\text{PWI}=1\cdot 1/2^{0}=1\). The authors H. Boekhout and I. van der Weijden occur once with a co-author distance of one (\(d_{pw}=1\)): \(\text{PWI}=1\cdot 1/2^{1}=.5\). The corresponding output of the function PWI from the R package PWIR when the WoS download as described above is provided as input is shown in Table 2. The output includes the number of papers and the number of co-authors of each author.
Table 1: **Publications with their author lists as retrieved from a topic search in Web of Science as of October 18, 2022**

| Publication № | List of authors |
| --- | --- |
| 1 | P. Mahlck |
| 2 | H. Boekhout, I. van der Weijden, L. Waltman |
The PWI is based on two premises: (1) In every discipline, "scientific elites" exist that can be identified by (prestigious) prizes. According to Zuckerman (1977), scientific elites "are worthy of our attention not merely because they have prestige and influence in science, but because their collective contributions have made a difference in the advance of scientific knowledge" (as cited in Li, Yin, Fortunato, & Wang, 2020). In her study, Zuckerman (1977) identified scientific elites by the Nobel prize, i.e., the elite received this prestigious prize. For Tijssen (2020), "Nobel prizes are often considered, especially by the general public, to be an ultimate accolade of international excellence" (p. 59). (2) Scientists who collaborate with the elite, or with scientists in the narrow collaboration network of the elite, conduct research at a high level of quality.
In the following, we would like to explain our rationale for basing the PWI on the two premises in more detail: (Ad 1) In the reward and incentive system of science, prizes have an important value. For Ma and Uzzi (2018), prizes "identify top scientific achievements... identify successful role models who inspire achievements once thought to be impossible... and act as signals of scientific credibility... Prizes may also forecast the direction of future scientific investments. Prizewinning papers are cited in patents faster than similarly cited, non-prizewinning papers... and often include prizewinners with direct or indirect capital (e.g., Howard Hughes Medical Research Award) that stimulates research" (p. 12608). The empirical results by the authors based on bibliometric data show that "prizes are more concentrated within a relatively small scientific elite" (p. 12608). Prizewinning topics do not only show significant increases in growth (productivity, citation impact, and new entrants) (Jin, Ma, & Uzzi, 2021); scientific prizes are also meaningful events in the career of scientists, leading to changes in publication practices (Liu, Yu, Chen, & Huang, 2018) and the receipt of additional prizes (Chan, Mixon, & Torgler, 2018; Zheng & Liu, 2015).
(Ad 2) The second premise for the new index refers to the rationale for basing the PWI on collaborations with elite scientists and authors in their collaboration network. Literature overviews on collaborations in science have been published by Bozeman, Fay, and Slade (2013) and Katz and Martin (1997). According to Zeng, Fan, Di, Wang, and Havlin (2021), "teamwork is becoming increasingly common in recent modern science". The mean number of co-authors per paper has nearly doubled since the 1950s (Zeng et al., 2021), and the number of solo-authored papers has dramatically decreased (Wang & Barabasi, 2021). The studies by Cimenler, Reeves, and Skvoretz (2014) and Milojevic, Radicchi, and Walsh (2018) show that career success and growth are related to scientists' number of collaborators and the prestige of their advisors. Elite scientists tend to cooperate with other strong scientists (or promising young scientists) and have a stimulating effect on their research environment (Wang & Barabasi, 2021). The significant influence of strong collaborations on scientific performance has been denoted as the apostle effect (Gallotti & De Domenico, 2019). This effect has been demonstrated especially for young scientists (Li, Aste, Caccioli, & Livan, 2019).
Table 2: **Output of the function PWI from the R package PWIR when the Web of Science download as described above is provided as input**

| Author | PWI | Number of papers | Number of co-authors |
| --- | --- | --- | --- |
| WALTMAN L | 1 | 1 | 2 |
| BOEKHOUT H | .5 | 1 | 2 |
| VAN DER WEIJDEN I | .5 | 1 | 2 |
| MAHLCK P | 0 | 1 | 0 |
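Feeding the two-publication example from Table 1 into the `pwi` sketch shown earlier reproduces the values in Table 2 (with one prize winner and one paper per author, the simplified global-distance reading coincides with the worked example above):

```python
papers = [
    ["MAHLCK P"],
    ["BOEKHOUT H", "VAN DER WEIJDEN I", "WALTMAN L"],
]
print(pwi(papers, winners=["WALTMAN L"]))
# {'MAHLCK P': 0.0, 'BOEKHOUT H': 0.5, 'VAN DER WEIJDEN I': 0.5, 'WALTMAN L': 1.0}
```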
## Methods
### Price Medal (PM)
In this study, we computed the PWI for the field of quantitative science studies. Scientists active in science studies are organized in the International Society for Scientometrics and Informetrics (ISSI). The community focuses on quantitative approaches to the study of science that include informetrics, scientometrics, and webometrics. ISSI provides four awards to acknowledge exemplary achievements in the community. The highest award is the PM. It is the only award in this field that refers to lifetime achievement. Since 1984, the PM has been awarded to outstanding scientists in quantitative science studies. The other awards honor single papers, students, or doctoral students. The PM has been named after the scientist Derek de Solla Price, who can be regarded as "one of the founders of scientometrics. He published extensively in the 1960s and 1970s... laying the foundations for the newly emerging field of quantitative science studies" (Wyatt, Milojevic, Park, & Leydesdorff, 2016, p. 88).
On the ISSI homepage, the PM is explained as follows: "The Price Medal was conceived and launched by Tibor Braun, founder and former Editor-in-Chief of the international journal _Scientometrics_, and is periodically awarded by the journal to scientists with outstanding contributions to the fields of quantitative studies of science. The journal _Scientometrics_ is an international journal for all quantitative aspects of the science of science, communication in science and science policy co-published by Akademiai Kiado, Budapest, and Springer, Dordrecht. The first medal was awarded to Eugene Garfield in 1984" ([https://www.issi-society.org/awards/derek-de-solla-price-memorial-medal](https://www.issi-society.org/awards/derek-de-solla-price-memorial-medal), accessed on November 8, 2022). Eugene Garfield can be seen as one of the 'fathers' of scientometrics, who developed the first citation index (Garfield, 1955).
Because of the great importance of Eugene Garfield for the field of scientometrics and his intensive publication and collaboration activities over many years, Glanzel and Abdulhayoglu (2018) calculated the Erdos number for scientometrics based on the papers published by Eugene Garfield.
### Datasets
In this study, we used three journal sets that reflect the field of science studies from a narrower or wider perspective. We do not focus on only one set, since the field can be defined in different ways, and we are interested in how robust our empirical results are. Robust results would be reflected in similar findings based on different datasets.
_Core journals_: Although the community of a given field publishes its results mostly in specific journals, there are usually core journals and more peripheral journals. In the field of science studies, _Scientometrics_, _Journal of Informetrics_ (JOI), and _Quantitative Science Studies_ (QSS) can be considered core journals. _Scientometrics_ is not only the oldest journal in the field with the most annual papers but also the journal that awards the PM. QSS is the official journal of the ISSI. JOI can be regarded as a core journal, since QSS resulted from JOI: Some years ago, the chief and deputy editors and the editorial board of JOI decided to leave JOI and to switch to the newly established QSS. We downloaded 8,393 records covering these three sources from the WoS on July 21, 2022.
_iMetrics_: Leydesdorff, Bornmann, Marx, and Milojevic (2014) and Maltseva and Batagelj (2020) used the term iMetrics to denote the field of scientometrics based on paper and journal selections. The iMetrics set is mostly identical to our core journals set but additionally includes the _Journal of the American Society for Information Science and Technology_ (the alternative title _Journal of the Association for Information Science and Technology_ was also considered). We downloaded 12,473 records covering these four sources from the WoS between July 21 and 25, 2022.
_Complete set_: The complete set includes papers from journals and proceedings that publish papers from the science studies field. The proceedings of the ISSI and the publications in the following eight journals seem reasonable to us to cover the wider field of science studies in which the PM is situated: _Journal of Data and Information Science_, _Journal of Information Science_, _Journal of Informetrics_, _Journal of the American Society for Information Science and Technology_ (the alternative title _Journal of the Association for Information Science and Technology_ was also considered), _Profesional de la Informacion_, _Quantitative Science Studies_, _Research Evaluation_, and _Scientometrics_. We downloaded 19,166 records covering these nine sources from the WoS between July 21 and 25, 2022.
We used the datasets and the PWI function to produce lists of authors with PWI\({}_{\text{PM}}\) values. The initial lists showed the need for a proper author name disambiguation. For example, the entry VAN RAAN AFJ belongs to the same person as the entries VANRAAN AFJ and VAN RAAN A, respectively. Thus, the PWI\({}_{\text{PM}}\) values had to be recalculated after author names were merged. The function PWI provides an option "method\(=\)0" to just print the author names, number of papers, and number of co-authors for checking author name variants. These lists can be used to identify variants of author names and to disambiguate the names in the WoS input files. Afterwards, the function PWI can be run without the option "method\(=\)0" to compute the "correct" PWI\({}_{\text{PM}}\) values.
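A preprocessing step of this kind can be as simple as mapping every known name variant to a canonical form before the index is recomputed. The sketch below is illustrative; the variant map only contains the VAN RAAN example mentioned above and would have to be curated from the "method\(=\)0" listing.

```python
# Curated variant map (illustrative; built from inspection of the author lists).
CANONICAL = {
    "VANRAAN AFJ": "VAN RAAN AFJ",
    "VAN RAAN A": "VAN RAAN AFJ",
}

def disambiguate(papers):
    """Replace every known author name variant by its canonical form."""
    return [[CANONICAL.get(a, a) for a in authors] for authors in papers]
```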
For this study, we identified and merged the names of the PM awardees in the three datasets and received the following numbers of authors: core journals \(=\) 9,802; iMetrics \(=\) 14,228; complete set \(=\) 20,903.
### Statistics
We analyzed the relationship between PWI\({}_{\text{PM}}\) values and the number of papers published, the number of co-authors, and the status of an author of being a PM laureate or not. Since we are interested in how much each of the three variables contributes uniquely to the PWI\({}_{\text{PM}}\) values of the authors, we performed a robust multiple regression analysis with PWI\({}_{\text{PM}}\) values as the dependent variable and number of papers published, number of co-authors, and status of an author of being a PM laureate or not as independent variables. We decided to compute a robust regression (Acock, 2018), since the PWI\({}_{\text{PM}}\) values are distributed in a skewed manner (see Figure 1). We used the program Stata for the statistical analyses of this study (StataCorp., 2021).
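For readers without Stata, a rough Python analogue of such a robust regression is sketched below using the M-estimation in statsmodels; this is not identical to Stata's robust regression procedure. The data are synthetic, with coefficients loosely echoing the magnitudes in Table 4.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
n_papers = rng.poisson(5, n)
n_coauthors = rng.poisson(8, n)
awardee = rng.binomial(1, 0.02, n)
pwi_pm = 0.7 * n_papers - 0.1 * n_coauthors + 32 * awardee + rng.standard_normal(n)

# Robust (M-estimation) regression with a Tukey biweight norm.
X = sm.add_constant(np.column_stack([n_papers, n_coauthors, awardee]))
fit = sm.RLM(pwi_pm, X, M=sm.robust.norms.TukeyBiweight()).fit()
print(fit.params)   # constant, papers, co-authors, awardee
```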
### Definition of P(top 10%)
For the validation of the PWI\({}_{\text{PM}}\), we correlated the indicator values with P(top 10%). We used P(top 10%) values from our WoS custom database. The P(top 10%) values were calculated according to the procedure proposed by Waltman and Schreiber (2013). This ensures that exactly 10% of the papers are top 10% papers, at the expense of some papers being only partially top 10% papers and thus obtaining a fractional P(top 10%) value.
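The core of the procedure can be sketched as follows: papers strictly above the citation value at the 10% boundary count fully, and papers tied at the boundary share the remaining credit so that the scores sum to exactly 10% of the papers. This is a simplified illustration in the spirit of Waltman and Schreiber (2013), not a re-implementation of their full method.

```python
import numpy as np

def ptop10(citations):
    """Fractional top-10% scores; scores sum to exactly 0.1 * len(citations)."""
    c = np.asarray(citations, dtype=float)
    target = 0.10 * len(c)
    order = np.sort(c)[::-1]
    thr = order[int(np.ceil(target)) - 1]   # citation value at the 10% boundary
    above = np.sum(c > thr)                 # papers counted fully
    tied = np.sum(c == thr)                 # papers sharing the remaining credit
    scores = np.where(c > thr, 1.0, 0.0)
    scores[c == thr] = (target - above) / tied
    return scores

s = ptop10([12, 12, 9, 7, 5, 3, 2, 1, 0, 0])
print(s, s.sum())   # the two papers tied at the boundary get 0.5 each; sum = 1.0
```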
## Results
The full list of PWI\({}_{\text{PM}}\) values for authors in the field of science studies based on the three datasets is available at: [https://ivs.fkf.mpg.de/PWI/PWI-data.xlsx](https://ivs.fkf.mpg.de/PWI/PWI-data.xlsx). Figure 1 shows cumulative probability plots of PWI\({}_{\text{PM}}\) values separated by the status of an author (of being an awardee of the PM or not) based on the three datasets. The results for the datasets are very similar. First, the plots reveal that the values are distributed in a very skewed manner: There are only a few authors with high PWI\({}_{\text{PM}}\) values. Second, the plots show that there are great differences between awardees and non-awardees: The few authors with high PWI\({}_{\text{PM}}\) values are mostly awardees.
Figure 1: Cumulative probability plots of PWI\({}_{\text{PM}}\) values separated for the status of an author (of being an awardee of the PM or not) based on three datasets
_Authors with the highest PWI\({}_{PM}\) values_
The top-20 authors of the scientometrics sources (using the three datasets), ordered by PWI\({}_{\text{PM}}\) values in descending order, are shown in Table 3. In accordance with the results in Figure 1, the results in the table reveal that awardees of the PM are in the best position to receive high PWI\({}_{\text{PM}}\) values: Nearly all authors in the table are awardees who have been active in the field of science studies for many years. The high number of awardees among the top authors was to be expected by design, since the PWI\({}_{\text{PM}}\) is strongly oriented towards the PM. We would like to highlight in Table 3 the exceptional achievement of Peter Vinkler (Price Medalist in 2009), who published all papers alone (without any contribution of co-authors). There are five scientists in Table 3 who did not receive the PM: Hans-Dieter Daniel, Nees Jan van Eck, Robin Haunschild, Bart Thijs, and Lin Zhang. These five scientists have collaborated extensively with one Price Medalist each: Wolfgang Glanzel (Bart Thijs, Lin Zhang), Lutz Bornmann (Hans-Dieter Daniel, Robin Haunschild), and Ludo Waltman (Nees Jan van Eck).
Table 3: **Output of the function PWI\({}_{\text{PM}}\) from the R package PWIR when the Web of Science downloads as described above are provided as input**

| Author | PWI\({}_{\text{PM}}\) | Number of papers | Number of co-authors | Price Medalist |
| --- | --- | --- | --- | --- |
| _Core journals_ | | | | |
| GLANZEL W | 240.50 | 187 | 101 | Yes |
| BORNMANN L | 213.53 | 187 | 81 | Yes |
| SCHUBERT A | 183.75 | 129 | 28 | Yes |
| LEYDESDORFF L | 162.56 | 138 | 66 | Yes |
| ROUSSEAU R | 150.63 | 133 | 94 | Yes |
| THELWALL M | 128.78 | 125 | 67 | Yes |
| BRAUN T | 120.75 | 74 | 16 | Yes |
| EGGHE L | 97.94 | 85 | 9 | Yes |
| MOED HF | 80.00 | 66 | 60 | Yes |
| VAN RAAN AFJ | 66.53 | 56 | 25 | Yes |
| WALTMAN L | 48.16 | 42 | 31 | Yes |
| VINKLER P | 39.00 | 39 | 0 | Yes |
| BAR-ILAN J | 38.16 | 33 | 19 | Yes |
| THIJS B | 34.69 | 41 | 28 | No |
| MORAVCSIK MJ | 34.00 | 34 | 5 | Yes |
| PERSSON O | 31.28 | 23 | 18 | Yes |
| HAUNSCHILD R | 25.53 | 44 | 27 | No |
| INGWERSEN P | 23.66 | 20 | 22 | Yes |
| DANIEL HD | 22.77 | 36 | 15 | No |
| ZHANG L | 21.06 | 40 | 63 | No |
| _iMetrics_ | | | | |
| BORNMANN L | 280.05 | 246 | 93 | Yes |
| LEYDESDORFF L | 254.20 | 221 | 104 | Yes |
| GLANZEL W | 248.03 | 193 | 104 | Yes |
| THELWALL M | 218.92 | 214 | 95 | Yes |
| ROUSSEAU R | 195.28 | 171 | 107 | Yes |
| SCHUBERT A | 184.52 | 129 | 28 | Yes |
| EGGHE L | 139.09 | 122 | 12 | Yes |
| BRAUN T | 122.52 | 75 | 17 | Yes |
The researchers in Table 3 are well-known in the field of scientometrics. They can be found as chief editors of important journals in the field. For example, Ludo Waltman is chief editor of QSS and the former editor of JOI. He took over the position at JOI from Leo Egghe, who was the first editor of JOI. Wolfgang Glanzel has been chief editor of _Scientometrics_ for many years; Tibor Braun is the founder and former chief editor of this journal. All researchers in Table 3 are current or former members of editorial boards of scientometric journals. Researchers in Table 3, such as Loet Leydesdorff, Mike Thelwall, and Ludo Waltman, can also be found in the Highly Cited Researchers database (in social sciences), which is published annually by Clarivate (Clarivate Analytics, 2021; earlier by Thomson Reuters). In 2019, Springer published the Handbook of Science and Technology Indicators, which includes "state-of-the-art descriptions of the wide variety of indicators and methods used for research and innovation assessment" (Glanzel, Moed, Schmoch, & Thelwall, 2019). Three of the four editors belong to the twenty researchers in the field with the highest PWI\({}_{\text{PM}}\) values.
The results in Table 3 have been produced by the R package PWIR. This package does not only compute individual PWI\({}_{\text{PM}}\) values but also the number of papers and the number of co-authors. The results in the table show that the PWI\({}_{\text{PM}}\) values probably correlate with the number of papers and the number of co-authors; the values seem to depend on these numbers. Since we are interested in the variables that explain variation in PWI\({}_{\text{PM}}\) values, we performed several regression models. The results are presented in the next section.
_Relationships between PWI\({}_{\text{PM}}\), number of papers, and number of co-authors_
The results in Table 3 of the authors with high PWI\({}_{\text{PM}}\) values indicate that researchers with many papers in the field and in collaboration with other researchers in the community are in the best position to publish (frequently) with awardees of the PM or are awardees themselves. In this section, therefore, we test the dependence of the PWI\({}_{\text{PM}}\) values on the number of papers published, the number of co-authors, and the status of being an awardee or not.
The results of the regression analyses based on the three datasets are shown in Table 4. The results include not only the coefficients from the analyses, but also the \(R^{2}\) and semipartial \(R^{2}\) values. Whereas the \(R^{2}\) reveals how much variation in the dependent variable is explained by the independent variables, the semipartial \(R^{2}\) reveals the increment in \(R^{2}\) contributed by an individual independent variable. The \(R^{2}\) values of the three models show that around 76% of the variation in PWI\({}_{\text{PM}}\) values can be explained by the three independent variables. The results of the three overall \(F\) tests indicate that there is a statistically significant relationship between PWI\({}_{\text{PM}}\) values and the three variables. Both results (\(R^{2}\) values and \(F\) tests) point out that the three independent variables are closely related to the PWI\({}_{\text{PM}}\) values.

Table 4: **Multiple robust regression analyses predicting PWI\({}_{\text{PM}}\) values from number of papers, number of co-authors, and being an awardee of the PM (or not)**

| Independent variables | Coefficient | 95% CI | Beta | Semipartial \(R^{2}\) | \(R^{2}\) | \(F\) | Number of authors |
| --- | --- | --- | --- | --- | --- | --- | --- |
| _Core journals_ | | | | | .76 | 46.54*** | 9,801 |
| Constant | -.32*** | [-.55, -.09] | | | | | |
| Number of papers | .71*** | [.48, .95] | .72 | .22*** | | | |
| Number of co-authors | -.10* | [-.19, -.01] | -.10 | .01*** | | | |
| Awardee (yes) | 31.94*** | [19.85, 44.03] | .31 | .07*** | | | |
| _iMetrics_ | | | | | .78 | 49.14*** | 14,227 |
| Constant | -.23* | [-.43, -.02] | | | | | |
| Number of papers | .72*** | [.52, .92] | .75 | .23*** | | | |
| Number of co-authors | -.13*** | [-.21, -.05] | -.13 | .01*** | | | |
| Awardee (yes) | 39.62*** | [25.33, 53.91] | .31 | .07*** | | | |
| _Complete set_ | | | | | .75 | 45.37*** | 20,902 |
| Constant | -.07 | [-.25, .10] | | | | | |
| Number of papers | .66*** | [.46, .85] | .74 | .21*** | | | |
| Number of co-authors | -.15*** | [-.23, -.06] | -.16 | .01*** | | | |
| Awardee (yes) | 50.57*** | [33.87, 67.26] | .34 | .08*** | | | |

Notes. * \(p\) < .05, ** \(p\) < .01, *** \(p\) < .001
The constants (or intercepts) in Table 4 indicate PWI\({}_{\text{PM}}\) values when an author is not an awardee and has published no paper (without any co-authors). Since this constellation does not exist, the constant is negative in all cases and not very informative. The coefficients can nevertheless be used to calculate estimated PWI\({}_{\text{PM}}\) values: For example, the PWI\({}_{\text{PM}}\) value that is estimated based on the number of papers and co-authors using the core journals dataset is -.32 + .71 (number of papers) + (-.10) (number of co-authors). Thus, the estimated PWI\({}_{\text{PM}}\) value for an author (who is not an awardee) of ten papers with one co-author is -.32 + (.71 * 10) + (-.10 * 1) = 7.32. The status of being a Price Medalist leads, as expected, to a significant increase in the estimated PWI\({}_{\text{PM}}\) value; for the same author, the estimate becomes -.32 + (.71 * 10) + (-.10 * 1) + (31.94 * 1) = 39.26. The estimated values may deviate considerably from observed values, since the distribution of PWI values is very skewed.
The results of the regression models in Table 4 based on the three datasets are very similar. The most important variable with respect to the PWI\({}_{\text{PM}}\) values seems to be the number of papers. The beta coefficient in the table is a measure of effect size that can be interpreted like a correlation, with .1 being weak, .3 being moderate, and .5 being strong (Cohen, 1988). The beta coefficients of the number of papers are strong (the beta coefficients of the other variables are on a moderate or weak level). The semipartial \(R^{2}\) values point in the same direction: The increments in \(R^{2}\) are significantly higher for the number of papers (about .22) compared to both other independent variables (less than .1).
### Validation of the \(\text{PWI}_{\text{PM}}\)
In this paper, we introduce with the PWI a new metric for research evaluation purposes that might be an alternative and an addition to established metrics. With the introduction of such a new metric, one wonders how well it functions and whether it is able to measure what it proposes to measure (Kreiman & Maunsell, 2011; Moed, 2016). In the introduction section, we argue that the PWI uses information on scientific prizes to focus on measuring excellence (elite) in a certain field. In the results section above, we demonstrated by way of example that authors with high PWI\({}_{\text{PM}}\) values are members of editorial boards and chief editors. Furthermore, authors of the most important handbook in scientometrics can be found among the authors with high PWI\({}_{\text{PM}}\) values. We also checked whether authors with high PWI\({}_{\text{PM}}\) values can be found in Clarivate's Highly Cited Researchers database.
In this section, we connect to these selective observations of validity by calculating the number of excellent papers, P(top 10%), for all authors in our three datasets. We calculated the sum of the P(top 10%) values of the articles and reviews in our three datasets that were published before 2020 to allow for a citation window of at least three years. By correlating this number with PWI\({}_{\text{PM}}\) values, we assess the convergent validity of the new indicator (National Research Council, 2012). The convergent validity of an indicator is given if its correlation with a theoretically similar measure is "high" (Chen, Shao, & Fan, 2021; Jirschitzka, Oeberst, Gollner, & Cress, 2017). In statistical terms, "high" would mean that the correlation is at least .5 (Cumming, 2012). Table 5 shows the results of the correlation analyses based on the three datasets. The table presents the correlation coefficients and the number of authors included in the analyses. We calculated the coefficients with different thresholds for the number of papers per author, since we observed that the coefficients depend on the number of papers published by an author.
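A sketch of this threshold-wise correlation analysis in Python is shown below; the column names and thresholds are illustrative, `df` stands for a table with one row per author, and the data are synthetic.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def correlations_by_threshold(df, thresholds=(1, 5, 10, 15)):
    """Spearman correlation between PWI and P(top 10%) per paper-count threshold."""
    rows = []
    for t in thresholds:
        sub = df[df["n_papers"] >= t]
        if len(sub) > 2:
            rho, p = spearmanr(sub["pwi"], sub["ptop10"])
            rows.append({"threshold": t, "rho": rho, "p": p, "n_authors": len(sub)})
    return pd.DataFrame(rows)

rng = np.random.default_rng(1)
df = pd.DataFrame({"n_papers": 1 + rng.poisson(7, 300)})
df["pwi"] = 0.7 * df["n_papers"] + rng.exponential(2, 300)
df["ptop10"] = 0.1 * df["n_papers"] + rng.exponential(1, 300)
print(correlations_by_threshold(df))
```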
As the results in Table 5 reveal, the convergent validity of the PWI\({}_{\text{PM}}\) values is only given when authors with at least ten papers are considered. This limitation seems reasonable, since a reliable measurement of quality (performance) is only possible with a sufficient database (i.e., a sufficient number of papers per author). For Lehmann, Jackson, and Lautrup (2008), it is only possible "to draw reliable conclusions regarding an author's citation record on the basis of approximately 50 papers" (p. 384). Similar thresholds for the number of papers can be found in Glanzel and Moed (2013).
## Discussion
According to Moed (2018), "government funding of scientific research is increasingly based upon performance criteria". For that, research evaluation needs indicators that are able to measure what is in the focus of a specific evaluation (Bornmann & Marewski, 2019). Many evaluation processes are dominated by the striving for excellence: "Excellence is omnipresent in the research ecosystem" (Jong, Franssen, & Pinfield, 2021). Only the elite among possible candidates should be at the top of the list in research evaluation processes. The most popular indicators for measuring excellence in science are bibliometric indicators. The Highly Cited Researchers database published by Clarivate Analytics (2021) reflects this popularity very well. In this study, we propose a new indicator which is not based on bibliometric data, but on scientific prizes. With the focus on prizes, the indicator follows a common feature behind many processes in science: "an amazingly steady tendency to the concentration of items on a relatively small stratum of sources" (de Bellis, 2009, p. xxiv). Since scientific prizes are - as a rule - rare events, only happening for exceptional scientists (Zuckerman, 1977), we used prizes as a starting point for the identification of scientific elites, following the basic Erdos number definition.
Table 5: **Spearman rank correlation coefficients between PWI\({}_{\text{PM}}\) values and P(top 10%) values**

| Threshold for the number of papers per author | Coefficient | Number of authors |
| --- | --- | --- |
| _Core journals_ | | |

[MISSING_PAGE_POST]

Note. ** \(p\) < .01, *** \(p\) < .001
The PWI counts co-authorships with prize winners (and their co-authors) in a certain field; it is based on distance weights that halve with each co-authorship step. Since we are interested in the narrow environments of prize winners, the PWI does not only focus on the prize winners themselves but also on scientists who have collaborated with prize winners and their co-authors. The PWI can be applied in various research evaluation processes such as funding of disruptive research, recruitment of exceptional researchers for professorships, and identification of reviewers for specific programs that support breakthrough research.
To exemplify the calculation and use of the PWI, we calculated the index for our own field (quantitative science studies) using three datasets that are linked to the PM. PWI\({}_{\text{PM}}\) values reflect the performance of authors who have published in the field of science studies. We observed that the authors with the highest PWI\({}_{\text{PM}}\) values are well-known scientists in the field, being members of editorial boards and chief editors (among other things). We tested the dependence of PWI\({}_{\text{PM}}\) values on the number of papers, the number of co-authors, and the status of an author of being an awardee of the PM. The results of the regression models show that the number of papers is the most important factor for receiving high PWI\({}_{\text{PM}}\) values, i.e., the index is size-dependent. For the calculation of a size-independent variant of the indicator, the PWI\({}_{\text{PM}}\) values can be divided by the number of papers or by the number of years since publishing the first paper (which would be an age-normalized variant of the index). The regression models also reveal that another important factor for high PWI\({}_{\text{PM}}\) values is the status of being an awardee of the PM. This result was expected, because it reflects the design of the indicator.
The most important step of the empirical analyses in this study was the validation of the PWI\({}_{\text{PM}}\): Only validated indicators should be considered in research evaluation practice. We tested the convergent validity of the PWI\({}_{\text{PM}}\), i.e., we were interested in whether the indicator is related to an established bibliometric indicator: P(top 10%). The results show that the coefficients for the correlation between PWI\({}_{\text{PM}}\) and P(top 10%) are high (in cases where a sufficient number of papers have been considered for a reliable assessment of performance). Therefore, measured by an established indicator for research excellence (Bornmann, de Moya Anegon, & Leydesdorff, 2012), the new PWI indicator seems to be convergently valid. Since this conclusion is based on only one indicator to measure the convergent validity of the PWI, future studies should use other indicators to confirm our results (or not). Future studies could also try to (in-)validate the PWI in other fields using other prizes. With the developed R package (PWIR, Haunschild, 2022), PWI values can be computed for all downloads from the WoS database.
In principle, we see several areas for a possible optimization of the PWI. Future studies might focus on these areas with empirical investigations: (1) We already mentioned that the PWI could be age-normalized by dividing the value by the number of years since publishing the first paper. (2) Another area concerns the point in time of the prize award. In the current definition of the PWI, it is not considered when the prize was awarded. Authors are classified as prize winners independently of the point in time of winning the prize. One could argue that authors are prize winners only from the point in time when they received the prize. The consideration of the point in time leads to a more complicated calculation of the PWI and may change PWI values. (3) The index assumes voluntary collaborations, i.e., collaborations that have been struck up based on research quality considerations. However, many collaborations have other or additional reasons. For example, we identified some authors from science studies with high PWI\({}_{\text{PM}}\) values who are linked by employment relationships. On the one hand, one may argue that these relationships result from quality considerations. On the other hand, however, such relationships make author collaborations more likely than for two authors without any employment relationship. Thus, it might be necessary to consider employment relationships at least in the interpretation of the results. (4) Since prize winners dominate the PWI results, an alternative PWI may leave out prize winners from the results. This alternative would measure only collaborations with prize winners and their co-authors.
As with all other indicators, the PWI is imperfect or biased. It focuses on a certain part of scientific activity, and therefore, it measures scientific performance in a biased way. According to Moed (2017), the awareness that "all performance indicators are 'partial' or 'imperfect' - influenced as they may be not only by the various aspects of performance but also by other factors that have little to do with performance - is as old as the use of performance indicators itself. Indicators may be imperfect or biased, but in the application of such indicators this is not seldom forgotten" (p. 6). From the awareness that indicators are imperfect or biased, it follows that "metrics should support, not supplant, expert judgement. Peer review is not perfect, but it is the best form of academic governance we have, and it should remain the main basis by which to assess research papers, proposals and individuals" (Wilsdon, 2015). The royal road of research evaluation lies in "the intelligent combination of metrics and peer review" (Moed & Halevi, 2015).
## Acknowledgments
The bibliographic data used in this paper are from the online version of the WoS provided by Clarivate. The P(top 10%) data used in this paper are from a WoS custom database of the Max Planck Society (MPG) developed and maintained in cooperation with the Max Planck Digital Library (MPDL, Munich) and derived from the Science Citation Index Expanded (SCI-E), Social Sciences Citation Index (SSCI), Arts and Humanities Citation Index (AHCI), Conference Proceedings Citation Index-Science (CPCI-S), and Conference Proceedings Citation Index-Social Science & Humanities (CPCI-SSH) provided by Clarivate via the "Kompetenznetzwerk Bibliometrie" (see [https://bibliometrie.info/en/about-kb/](https://bibliometrie.info/en/about-kb/)) funded by BMBF (grant 16WIK2101A).
|
2309.16071 | Influence Pathway Discovery on Social Media | This paper addresses influence pathway discovery, a key emerging problem in
today's online media. We propose a discovery algorithm that leverages recently
published work on unsupervised interpretable ideological embedding, a mapping
of ideological beliefs (done in a self-supervised fashion) into interpretable
low-dimensional spaces. Computing the ideological embedding at scale allows one
to analyze correlations between the ideological positions of leaders,
influencers, news portals, or population segments, deriving potential influence
pathways. The work is motivated by the importance of social media as the
preeminent means for global interactions and collaborations on today's
Internet, as well as their frequent (mis-)use to wield influence that targets
social beliefs and attitudes of selected populations. Tools that enable the
understanding and mapping of influence propagation through population segments
on social media are therefore increasingly important. In this paper, influence
is measured by the perceived ideological shift over time that is correlated
with influencers' activity. Correlated shifts in ideological embeddings
indicate changes, such as swings/switching (among competing ideologies),
polarization (depletion of neutral ideological positions),
escalation/radicalization (shifts to more extreme versions of the ideology), or
unification/cooldown (shifts towards more neutral stances). Case-studies are
presented to explore selected influence pathways (i) in a recent French
election, (ii) during political discussions in the Philippines, and (iii) for
some Russian messaging during the Russia/Ukraine conflict. | Xinyi Liu, Ruijie Wang, Dachun Sun, Jinning Li, Christina Youn, You Lyu, Jianyuan Zhan, Dayou Wu, Xinhe Xu, Mingjun Liu, Xinshuo Lei, Zhihao Xu, Yutong Zhang, Zehao Li, Qikai Yang, Tarek Abdelzaher | 2023-09-27T23:45:36Z | http://arxiv.org/abs/2309.16071v1 | # Influence Pathway Discovery on Social Media
###### Abstract
This paper addresses _influence pathway discovery_, a key emerging problem in today's online media. We propose a discovery algorithm that leverages recently published work on _unsupervised interpretable ideological embedding_, a mapping of ideological beliefs (done in a self-supervised fashion) into _interpretable_ low-dimensional spaces. Computing the ideological embedding at scale allows one to analyze correlations between the ideological positions of leaders, influencers, news portals, or population segments, deriving potential influence pathways. The work is motivated by the importance of social media as the preeminent means for global interactions and collaborations on today's Internet, as well as their frequent (mis-)use to wield influence that targets social beliefs and attitudes of selected populations. Tools that enable the understanding and mapping of influence propagation through population segments on social media are therefore increasingly important. In this paper, influence is measured by the perceived ideological shift over time that is correlated with influencers' activity. Correlated shifts in ideological embeddings indicate changes, such as swings/switching (among competing ideologies), polarization (depletion of neutral ideological positions), escalation/radicalization (shifts to more extreme versions of the ideology), or unification/cooldown (shifts towards more neutral stances). Case-studies are presented to explore selected influence pathways (i) in a recent French election, (ii) during political discussions in the Philippines, and (iii) for some Russian messaging during the Russia/Ukraine conflict.
Social Network Analysis, Influence Network, Ideological Embedding, Social Analysis Pipeline
## I Introduction
The paper advances the science of _influence network tomography_ - the empirical mapping of influence pathways on social media, derived from observable node behaviors. The work is motivated by the exploding popularity of social networks and their growing impact on population beliefs and attitudes. The paper posits that _interpretable unsupervised ideological embedding_[1], a recently proposed belief embedding approach, is a crucial enabler towards inferring potential influence among a diverse range of node types. Being unsupervised, it can be computed at scale for many parties with no need for human labeling. Being interpretable, it yields actionable insights. Since influence produces correlated changes in belief, computing the interpretable unsupervised ideological embedding at scale can serve as a new foundation for influence network tomography. As a proof of concept, an end-to-end self-supervised solution is developed that relies on uncovering correlated changes in nodes' ideological embeddings to discover potential influence pathways. Example outputs are presented based on multiple social media datasets that illustrate the versatility of the approach.
Modern solutions for influence estimation on social networks generally rely on a combination of (i) node embeddings (that distill node attributes correlated with predisposition to exert/follow influence), and (ii) neural networks that learn to estimate influence among nodes given their computed embeddings. A pioneering paper in that space is DeepInf [2], but many variations exist [3, 4, 5, 6, 7]. The ideological embedding we leverage [1], contrary to the aforementioned literature, is an _interpretable_ node representation (computed using a self-supervised representation learning approach) that summarizes ideological beliefs. Changes in ideological embedding therefore directly denote interpretable phenomena such as polarization, radicalization, reconciliation, or desertion of ideological positions. Models that describe how changes in the embedding of one node affect another can therefore serve as interpretable indicators of ideological influence.
The rest of this paper is organized as follows. Section II briefly covers key elements of related work. Section III presents the problem definition of _influence network tomography_. Section IV describes the proposed approach for mapping influence pathways among diverse node types. Example case studies are covered in Section V based on several social media datasets. Section VI describes insights gained from the evaluation and discusses opportunities for future extensions. The paper concludes with Section VII.
## II Related Work
The work described in this paper falls in the broad category of influence estimation on social media. Early work on influence analysis in social networks generally computed influence either directly for individual network edges (resulting in such measures as tie strength [8] and edge betweenness [9]) or independently for individual nodes (resulting in such measures as node centrality [10] and closeness [11]). Early work also concerned itself with the analysis of epidemic diffusion cascades,
producing stylized models that predict cascade propagation, such as the linear threshold model [12] and the independent cascade model [13]. A large number of subsequent models were developed that differ in how diffusion probabilities were computed [14]. More recently, such models were adapted to utilize node embeddings and neural networks for diffusion prediction [15, 16].
Influence estimation can be thought of as a generalization of diffusion prediction with the idea that information diffusion from one node to another is _one_ manifestation of influence. Modern influence estimation literature has focused on different flavors of node embeddings, where network nodes are mapped into latent spaces in a manner that allows predicting influence relations from such mappings. An example is Inf2vec [17], which used relations among node embeddings to infer influence, and Multi-Influor [18] that extended the approach to multiple influence factors from which influence could be predicted along network edges. Node embeddings are powerful because they can, in general, capture pairwise node interactions, local network structures, and global similarity across node categories. Furthermore, they allow the development of neural networks that take such embeddings as input and predict influence along edges as output. A well-cited early example of such an approach to influence estimation is DeepInf [2]. It inspired many subsequent variants to use node embeddings and deep learning for influence prediction, differing in the details of the embedding used to capture node attributes and the subsequent neural network used to infer influence (e.g., see [3, 4, 5, 6, 7]).
In the fast-changing world of social media, it was also recognized that the underlying networks and attention patterns are not static. Influence relations are thus very time-dependent (in addition to being topic-dependent). Recent research therefore introduced time-sensitive and topic-specific solutions for influence measurement [19] and diffusion prediction [20]. Our approach is an instance of this latter category of techniques.
As alluded to in the introduction, our approach is novel in its reliance on a new type of node embedding that we call _interpretable unsupervised ideological embedding_[1]. As its name suggests, the novelty of that embedding comes from being both interpretable and unsupervised - a combination that had not been previously accomplished jointly in ideological mapping solutions. The embedding projects nodes into an ideological space, where different axes denote different ideologies (automatically disentangled in a self-supervised fashion from observing social media posts). On each axis, a positioning further up the axis denotes adherence to a more extreme version of the underlying ideology. A change in a node's embedding represents a change in the node's beliefs. The approach projects both _users_ and _content_ into the _same_ latent space. Thus, the embedding (and subsequently influence propagation) can be computed for a more diverse set of node types, including individuals, communities, articles, and news portals. Furthermore, the projection into the ideological space is independent of the underlying social network platform; the embedding can be computed for users and posts on different social media platforms regardless of their format, such as text [1, 21, 22] and images [23]. Below, we present the problem definition in more detail and then describe the solution architecture.
## III Problem Definition
Given posts on some set of social media, the purpose of this work is to uncover the latent _influence network_, comprised of _entities_ and _directional edges_, that summarizes how influence propagates on the media considered. In this network, the entities refer to objects in either the physical world or online that can influence one another. Each directional edge represents influence exerted by one entity on another. To set the scope, we consider four types of entities in analyzing influence pathways: (i) _physical entities_, such as events that transpire in the physical world, external to the social medium (such as protests, hospitalizations, donations, or deaths), (ii) _individual influencers_, whose positions allow them to impact larger population segments, (iii) _user community clusters_ representing communities on social platforms defined by a clustering algorithm or according to the operator, and (iv) _information domains_, each associated with a specific agency portal (_e.g._, YouTube, individual news media, etc.) that publishes information of a given type, such as cnn.com, foxnews.com, or infowars.com. Of the above types of entities, the first type is represented by a directly measured time-series. For example, we can use the GDELT database1 to count physical events of a particular type over time. Broadly speaking, physical entities can denote any real-valued time-series measurement to correlate other nodes' embeddings with. For example, economic indicators such as the price of crude oil or the consumer confidence index could constitute valid entities to consider in the influence network (e.g., to help understand the impact of social media activity on such indicators or vice versa). The remaining three types of entities are represented by a time-series of ideological embedding versus time, computed as we describe later.
Footnote 1: [https://www.GDELTproject.org/](https://www.GDELTproject.org/)
Figure 1 shows an example latent influence network (computed by the proposed algorithm), including the four types of entities we defined above.
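For the physical entities, turning event records into a per-type count time-series is a simple aggregation. The sketch below assumes a hypothetical tabular export with one row per GDELT event; the column names are illustrative, not the official GDELT schema.

```python
import pandas as pd

events = pd.DataFrame({
    "date": pd.to_datetime(["2022-03-01", "2022-03-01", "2022-03-04"]),
    "event_type": ["provide economic aid", "obstruct passage",
                   "provide economic aid"],
})
# Weekly event-count time-series per selected event type.
counts = (events.set_index("date")
                .groupby("event_type")
                .resample("W")
                .size())
print(counts)
```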
## IV The Solution Pipeline
To compute the latent influence network, we organize the set of media posts (provided as input) into a graph, compute ideological embeddings for graph nodes, decide on entities to consider, and finally test these entities for significant pairwise directional influence relations. The resulting set of uncovered directional influence links (together with the entities they link) then constitutes the produced latent influence network. The pairwise influence test itself is a plug-in module in the framework that determines whether or not directional influence is likely between a given pair of entities. For simplicity, below we shall use _lagged correlations in belief embedding_ as an indicator of potential influence, implying that a potential influence link is suspected when the time-series of one entity can predict the time-series of another. This concept aligns with Granger causality [24] but is not in itself a proof of causal relations. Future incarnations of the framework can replace this test with others that align better with actual causality [25] (based on observations of belief embedding). More specifically, the execution pipeline consists of four stages:
1. _Interaction graph construction:_ This stage involves organizing the input posts into a bipartite interaction graph whose nodes are (i) the individual users (who posted on the media), and (ii) their posts.
2. _Interaction graph cleaning:_ Since we might not have visibility into all interactions, a missing-link prediction approach [26] is used to predict (likely) missing links in the computed graph, as well as to eliminate spurious (i.e., unlikely) ones.
3. _Dynamic ideological representation learning:_ Next, we compute the unsupervised interpretable ideological embedding [1] for all nodes in the (cleaned) interaction graph, mapping them (both users and posts) into an interpretable, low-dimensional, latent space. These lower-dimensional latent representations are computed for successive time intervals to capture the evolution of espoused beliefs over time.
4. _Influence pathway discovery:_ This stage focuses on identifying meaningful entities and valuable patterns of influence propagation among those entities. A community detection toolkit is first used to group individual users in the interaction graph into _user community_ entities. For each found community, the ideological embeddings of community members are averaged to yield a single community-wide time-series. Popular users are separately cast as potential _individual influencer_ entities. Popularly cited URLs are similarly cast as _information domain_ entities. Finally, _physical entities_ are added based on supplied input. Correlations are then computed between all entity time-series involved. Edges are plotted to depict large correlations. These edges (and the entities they connect) form the output latent influence network.
Figure 2 depicts the overall workflow. The four stages mentioned above are discussed in more detail in Section IV-A, Section IV-B, Section IV-C, and Section IV-D, respectively.
### _Interaction Graph Construction_
The first step of the pipeline is to design a data structure that can model the relationship between the users and the content they post. The datasets collected for this paper are from Twitter (currently X), where posted tweets can be collected using API calls (subject to a rate limit) together with the posting users and timestamps. Based on the raw data, we extend the user-post graph by extracting embedded URLs in the posted tweets and treating them as separate nodes linked back to the posting users. The result is a bipartite graph of user-content relationships, where the two types of nodes are media users and posts (e.g., tweets and posted URLs). We call them _user nodes_ and _assertion nodes_, respectively.
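A minimal sketch of this construction with networkx follows; the tweet records and their field names are hypothetical stand-ins for the collected data.

```python
import networkx as nx

tweets = [
    {"id": "t1", "user": "u1", "urls": ["example.com/a"]},
    {"id": "t2", "user": "u2", "urls": []},
    {"id": "t3", "user": "u1", "urls": ["example.com/a"]},
]

G = nx.Graph()
for t in tweets:
    G.add_node(t["user"], kind="user")
    G.add_node(t["id"], kind="assertion")   # tweet node
    G.add_edge(t["user"], t["id"])
    for url in t["urls"]:                   # URL nodes are shared across users
        G.add_node(url, kind="assertion")
        G.add_edge(t["user"], url)
print(G.number_of_nodes(), G.number_of_edges())   # 6 nodes, 4 edges
```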
### _Graph Cleaning: nPUGraph_
The initial interaction graph we create may suffer from false positives (i.e., incorrectly attributed interactions) and false negatives (unobserved interactions). These instances of incorrect or missing interactions can significantly distort the subsequent process of learning ideology representations, introducing noticeable noise into the ideological embedding time-series data. To address this issue, we incorporate the nPUGraph method [26], previously introduced in our research, as a data-cleaning step. This method effectively addresses both the prediction of missing interactions and the removal of erroneous ones, thereby enhancing the overall quality of the interaction graph.
Fig. 1: An illustrative example of a derived influence network.
Fig. 2: An overview of the proposed pipeline.
### _Dynamic Ideological Representation Learning_
We map the users and assertions into an explainable latent space using InfoVGAE [1], shown in Figure 3. InfoVGAE is a variational graph auto-encoder that projects inputs into the positive quadrant only (of the latent space). Its overall loss function is further modified to ensure orthogonality of projection coordinates, in addition to optimizing the evidence lower bound (ELBO) of the typical VAE. Prior work has shown that these changes lead to interpretable embedding properties, as the orthogonality constraints (combined with restrictions to the positive quadrant) cause different ideological echo-chambers to be mapped onto orthogonal axes, while users and posts at the intersection of the echo-chambers (i.e., those who are more neutral) are mapped closer to the origin [1].
The above characteristics have important consequences. They imply that the coordinate value of an entity's embedding on a given axis has an interpretable meaning. When that value decreases, the entity must be expressing more neutral beliefs with respect to the ideology represented by that axis. Similarly, when the coordinate value increases, the entity must be getting more entrenched in the ideology represented by the axis. Said differently, changes towards higher coordinate values on an axis represent ideological rhetoric escalation (for the corresponding ideology). Similarly, changes towards lower coordinate values on the axis represent rhetoric de-escalation (for the corresponding ideology).
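A schematic of the two ingredients described above (non-negative latent coordinates plus an orthogonality penalty added to the usual ELBO) is sketched below in PyTorch; this is an illustration of the idea only, not the actual InfoVGAE loss, and all names are ours.

```python
import torch

def orthogonality_penalty(z, weight=1.0):
    """z: (n_nodes x k) latent matrix, kept non-negative upstream (e.g., via a
    ReLU head) so nodes live in the positive quadrant; this term penalizes
    off-diagonal entries of the Gram matrix, pushing the latent dimensions
    toward orthogonality so each axis can absorb one ideology."""
    gram = z.t() @ z                                     # k x k
    off_diag = gram - torch.diag(torch.diagonal(gram))
    return weight * off_diag.pow(2).sum()

z = torch.relu(torch.randn(100, 2, requires_grad=True))  # toy positive latents
loss = orthogonality_penalty(z)   # would be added to the VGAE ELBO terms
loss.backward()
```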
Unfortunately, the space complexity of InfoVGAE is of the order of the product of users and assertions. To make it scale, we use InfoVGAE to compute the embedding for popular users and assertions only, then propagate the computed node embeddings to their neighbors in the bipartite user-assertion graph, such that previously excluded less popular nodes inherit the average of their neighbors with known ideology embeddings. The process is repeated in successive time windows.
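The neighbor-averaging step can be sketched as an iterative fill over the bipartite graph; in the sketch below, `emb` maps the popular nodes (embedded by InfoVGAE) to their coordinate vectors, and the remaining nodes inherit neighbor means.

```python
import numpy as np
import networkx as nx

def propagate_embeddings(G, emb):
    """Fill nodes without embeddings with the mean embedding of their
    already-embedded neighbors, round by round, until nothing changes."""
    emb = dict(emb)
    while True:
        new = {}
        for node in G.nodes:
            if node not in emb:
                known = [emb[n] for n in G.neighbors(node) if n in emb]
                if known:
                    new[node] = np.mean(known, axis=0)
        if not new:
            return emb
        emb.update(new)
```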
### _Influence Pathway Discovery_
After the ideological embedding is computed for all users and assertions, we identify the important entities to be included in our influence network. As mentioned earlier, we include as entities all nodes corresponding to popular referenced URLs (information domain entities) and all popular users (individual influencer entities), then lump the remaining users into community entities, computing an average embedding time-series for each. We then introduce physical entities. For a proof of concept, we consider events from GDELT, an external database of the world's real-life events. Among 311 event types that GDELT provides, we selected 15 that are relevant to political activity, such as "provide economic aid", "investigate military action", and "obstruct passage". For each such event type, we produce a time series of event counts versus time. With candidate entities selected as discussed above, we calculate a lagged Pearson correlation for each entity pair. If a spike occurs at a given offset, we say that an influence edge was found (in the direction from the leading time-series to the lagging one). The identification of all such (high-correlation) edges completes graph construction.
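The edge test can be sketched as a search over lags for the strongest Pearson correlation; a peak at a positive lag means the first series leads the second, so the influence edge points from the first to the second. The helper below is illustrative.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Return (r, lag) maximizing |Pearson correlation| of x[t] vs. y[t + lag]."""
    best = (0.0, 0)
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        if len(a) > 2:
            r = np.corrcoef(a, b)[0, 1]
            if abs(r) > abs(best[0]):
                best = (r, lag)
    return best

x = np.sin(np.linspace(0, 6, 60))
y = np.roll(x, 3)                           # y lags x by three steps
print(lagged_correlation(x, y, max_lag=5))  # peak correlation at lag = 3 (x leads y)
```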
## V Case Studies
Next, we offer a brief example of applying this approach to social media datasets collected around recent events.
### _Datasets_
We test our pipeline on three datasets collected from X (formerly Twitter). They are (i) the 2022 French election, (ii) recent geopolitical events in the Philippines, and (iii) a Russian influence campaign aiming to garner sympathy from claims of "Western Russophobia". Some basic statistics for each are shown in Table I. The corresponding user interaction graphs are displayed in Figure 4.
1. **2022 French Election:** The dataset focuses on the multiple presidential candidates, the relevant media outlets, and the candidates' supporters in the period leading up to the 2022 French presidential election. Figure 4(a) shows the corresponding user interaction graph, where one can distinguish several clusters. They happen to correspond to the different candidates and their followers.
2. **Philippines:** The dataset covers a multitude of topics from the Philippines' geopolitical landscape in the first half of 2023. User interactions are shown in Figure 4(b).
3. **Russophobia:** The dataset covers posts on a key talking point of pro-Russian messaging during the Russia-Ukraine war, namely, the claims of _"Western Russophobia"_ (referring to the allegation of unfair/hostile world attitudes towards citizens of Russia), collected for nearly a year starting 05/01/2022. Posts reacting to this line of messaging are also included. Figure 4(c) shows the user interaction graph, where one can clearly distinguish two groups of users. They happen to be those siding with the pro-Russia messaging and those against it.
Table II reports empirically tuned parameter settings for optimal performance on each dataset, including _Window Length_ (the interval of time, in days, over which each ideological embedding value is computed), _Time Shift_ (how much to slide this window by in each step for time-series generation), _Time Lag Threshold_ (the maximum time lag we consider between two time-series in searching for correlations), and _Correlation Threshold_ (the minimum Pearson-correlation used to identify an influence edge).
Fig. 3: The InfoVGAE architecture.
### _Selected Observations_
Figure 5 shows the interface used to inspect discovered influence pathways. The screen allows the operator to inspect parts of the influence network, view a heatmap of identified (lagged) correlations between key entities, drill down into the involved entity-pair's ideological embedding time-series for any cell in the heatmap, list the specific posts that explain the underlying entity-espoused beliefs (for influencers, domains, and communities), and control entity selection to view specific parts of the overall influence graph. Example snippets of the computed influence graphs are shown in Figure 6.
It is not the purpose of this paper to report political findings. Thus, we obscure the names of the entities depicted and focus on the general interpretations of the discovered pathway examples. Following the influence graphs shown in Figure 6 (in the direction of the arrows, left-to-right), Figure 6(a) shows a key influencer in the French election, and identifies two Twitter communities influenced by them, as well as a news portal that correlates with their view. Other links are also shown, such as a downstream community influenced by the news portal. Figure 6(b) shows another example from the French election, where the rhetoric of a news portal and an online community seems to predict protest counts. The graph also shows other entities apparently influenced by the protests. Figure 6(c) shows the impact of a key influencer in the Philippines dataset, as well as multiple news media that follow their view, including the reuters.com domain. Finally, Figure 6(d) shows a pathway from the Russophobia dataset, depicting how a series of GDELT events, including military aid (to Ukraine), incites increased Russophobia rhetoric within a downstream Twitter community.
These examples offer a flavor of what the pathway discovery analysis can support. The actual results can be fine-tuned by controlling the selection of entities to consider. Such control may be exerted either directly (e.g., by selecting the appropriate physical events, portals, or influencers to include) or indirectly (e.g., by selecting the population segmentation algorithm responsible for generating the community entities of interest).
## VI Discussion and Future Work
The influence pathway discovery solution presented in this paper opens up opportunities for future expansion. It is meant to offer a flexible framework where individual modules can be replaced and enhanced over time. Some such opportunities are discussed below. The first desirable improvement lies in replacing the lagged correlation approach (inspired by Granger causality) with alternative methods that more accurately capture genuine causal effects. Identified (ideally, causal) links can further be viewed as dynamical stimulus/response systems modeled by transfer functions that capture the (stimulus/response) relation between their input and output time-series. An advantage of such models is that they can capture (possibly nonlinear) cumulative effects (where a response is a function of the _integral_ of the stimulus), as well as novelty effects (where a response is proportional to the _change_ in stimulus), among other functions that better model the dynamics of social belief adoption.
| | Window Length | Time Shift | Time Lag Threshold | Correlation Threshold |
| --- | --- | --- | --- | --- |
| 2022 French Election | 20 days | 1 day | 5 days | 0.7 |
| Philippines | 20 days | 2 days | 5 days | 0.5 |
| Russophobia | 20 days | 2 days | 5 days | 0.4 |

TABLE II: Algorithm Settings.
Fig. 4: User graph visualizations. Nodes are colored dark blue. Repost (retweet) edges are colored blue, reply edges are colored red, and quotation edges are colored green.
| | #Users | #Messages | #Assertions | Date Range |
| --- | --- | --- | --- | --- |
| 2022 French Election | 389,187 | 3,195,579 | 829,484 | 02/15/2022-04/11/2022 |
| Philippines | 204,320 | 354,370 | 220,169 | 01/01/2023-06/28/2023 |
| Russophobia | 103,917 | 243,088 | 101,057 | 05/01/2022-04/15/2023 |

TABLE I: Dataset Statistics.
The framework also offers the flexibility to integrate new types of entities for analysis. For example, new types of user communities may be identified by alternative community detection algorithms, and new physical entity types can be integrated besides GDELT events. The community and event names can be automatically generated using large language models (LLMs), applied to summarize or label sets of community posts. More substantial extensions are also possible, such as what-if analysis. We can simulate how influence might spread among entities when newly generated messages or information campaigns are introduced into the social medium.
Finally, scalability remains a challenge. Our ideological embedding and community detection methods do tackle scalability concerns, allowing us to directly analyze extensive social graphs comprising millions of nodes. Next, streaming versions of these solutions are desired to allow continuous updates as new social media posts arrive. We hope the above framework will ultimately contribute to the mitigation of misuse in the information space.
## VII Conclusions
We presented a novel influence graph discovery pipeline that differs from existing methods in its reliance on ideological embedding to uncover evidence of influence. It defines influence as the act of impacting human beliefs and infers changes in beliefs by mapping observed posts to an appropriately-constructed interpretable latent space. An interactive user interface and the corresponding backend were also implemented that enable us to analyze real-world data sets. Case studies were conducted to demonstrate the functionality of the proposed approach. The work serves as a proof of concept for influence network tomography based on belief embedding.
## Acknowledgements
Research reported in this paper was sponsored in part by DARPA award HR001121C0165, DARPA award HR00112290105, and DoD Basic Research Office award HQ00342110002. It was also supported in part by ACE, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA.
|
2301.05149 | Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for
Instruction Generation Models | Recent work studies the cognitive capabilities of language models through
psychological tests designed for humans. While these studies are helpful for
understanding the general capabilities of these models, there is no guarantee
that a model possessing sufficient capabilities to pass those tests would
actually use those capabilities in performing real-life tasks. In this work, we
formulate task-oriented cognitive capabilities, which are human-like cognitive
capabilities that language models leverage to perform tasks. These capabilities
are (i) the ability to quickly generate good candidate utterances (the search
capability) (ii) the ability to predict how a listener interprets those
utterances and choose the most appropriate one (the pragmatic capability). We
design an evaluation scheme for comparing these capabilities of a language
model with those of a human. Applying this scheme to examine various models in
a navigation instruction generation problem, we find that their pragmatic
capability is severely lacking. This insight leads us to augment them with
better models of the listener and obtain a significant boost of 11% in success
rate in guiding real humans. Our work advocates for having a principled
procedure for aligning language models with humans that involves (i)
formulating task-oriented capabilities, (ii) devising a method to quantify
their deficiency, and (iii) iteratively improving them. | Lingjun Zhao, Khanh Nguyen, Hal Daumé III | 2022-12-21T04:43:19Z | http://arxiv.org/abs/2301.05149v2 | # A Cognitive Evaluation of Instruction Generation Agents
###### Abstract
We mathematically characterize the cognitive capabilities that enable humans to effectively guide others through natural language. We show that neural-network-based instruction generation agents possess similar cognitive capabilities, and design an evaluation scheme for probing those capabilities. Our results indicate that these agents, while capable of effectively narrowing the search space, poorly predict the listener's interpretations of their instructions and thus often fail to select the best instructions even from a small candidate set. We augment the agents with better theory-of-mind models of the listener and obtain significant performance boost in guiding real humans. Yet, there remains a considerable gap between our best agent and human guides. We discuss the challenges in closing this gap, emphasizing the need to construct better models of human behavior when interacting with AI-based agents.
## 1 Introduction
Instruction generation refers to the problem of guiding humans to accomplish goals through natural language. While AI-based agents can hold fluent chit-chat conversations with humans (Thoppilan et al., 2022; OpenAI, 2022), their performance on this problem is still far from perfect (Zhao et al., 2021; Kojima et al., 2021; Wang et al., 2022). To build agents that communicate pragmatically like humans, we must equip them with cognitive capabilities similar to those of humans. Accomplishing this goal requires (i) mathematically characterizing the capabilities that are essential for human pragmatic communication and (ii) designing an evaluation scheme for assessing these capabilities of AI-based agents.
In this paper, we present a framework for conducting fine-grained evaluation of the communication capabilities of instruction generation agents. Our evaluation focuses on cognitive capabilities that are known to be requisite for human-like pragmatic communication. The outcome of the evaluation indicates which cognitive capabilities require further development and thus can help developers direct their effort more deliberately and effectively. Figure 1 provides an overview of our approach.
To identify the cognitive capabilities essential for pragmatic communication, we build on two lines of work from socio-cognitive science: Bayesian models of cooperative communication (Wang et al., 2020; Goodman and Frank, 2016; Shafto et al., 2014) and studies on how humans implement Bayesian reasoning (Sanborn and Chater, 2016; Sanborn et al., 2010; Vul et al., 2014; Mamassian et al., 2002). These models have been shown to be capable of predicting and explaining human behaviors in various communication games. We propose a framework named _bounded pragmatic agent_ that practically characterizes the human cognitive process for instruction generation. We show that our framework can also describe the operation of a broad class of AI-based agents, including neural-network-based agents. Interpreting AI-based agents and humans under the same mathematical framework enables us to quantify their differences. We derive the optimality conditions that a bounded pragmatic agent must satisfy in order to generate optimally pragmatic instructions. These conditions correspond to well-known cognitive capabilities of humans: (i) the ability to efficiently generate relevant utterances (the _search_ capability) (Bloom and Fischler, 1980; Gold et al., 2000; Trosborg, 2010) and (ii) the ability to accurately simulate the listener's interpretations of their utterances in the environment (the _theory-of-mind_ capability) (Premack and Woodruff, 1978; Gopnik and Astington, 1988; Tomasello, 2019; Call and Tomasello, 2011). We then design an evaluation scheme for assessing these capabilities of an agent, measuring how close it is to satisfying our optimality conditions.
We evaluate various neural-network-based agents1 on an instruction generation problem in photo-realistic 3D environments Anderson et al. (2018). To evaluate each capability of an agent, we compare it with the same agent but equipped with an optimal version of the evaluated capability, which is simulated by asking a human to perform that capability for the agent. Our evaluation reveals a crucial finding: most evaluated agents possess relatively efficient search capability but inadequate theory-of-mind capability. Specifically, on a majority of test cases, the agents can find an instruction that successfully guides humans by drawing a few samples. But they assign incorrect probabilities to the instructions and thus fail to select the best one as the final output.
Footnote 1: We release our human-evaluation dataset and interface at [https://lingjunzhao.github.io/coop_instruction.html](https://lingjunzhao.github.io/coop_instruction.html).
We improve the theory-of-mind capability of the evaluated agents by equipping them with an explicit pragmatic reasoning mechanism Andreas Klein (2016); Fried et al. (2017), using state-of-the-art instruction-following agents Magalhaes et al. (2019); Shen et al. (2022); Hong et al. (2021) as theory-of-mind models. We obtain significant improvement over the original agents, shrinking the gap with human performance on test data by 36%. Towards eliminating the remaining gap, we illustrate with empirical evidence a major challenge in developing better theory-of-mind models. Specifically, when employed, these models would be asked to evaluate _AI-generated_ instructions, which may differ dramatically from human-generated instructions. Hence, a standard supervised-learning training scheme that only exposes the model to human-generated instructions would be inadequate for learning reliable theory-of-mind models. We thus call for the construction of novel datasets, approaches, and evaluation methods for developing these models.
## 2 Related Work
**Navigation Instruction Generation.** Instruction generation has been commonly studied in navigation settings Anderson et al. (1991); Byron et al. (2010); Koller et al. (2010); Striegnitz et al. (2011); Goeddel and Olson (2012); Fried et al. (2017, 2018). The Matterport3D simulator and the accompanying datasets (R2R Anderson et al. (2018), R4R Jain et al. (2019), and RxR Ku et al. (2020)) offer more challenging settings by combining photo-realistic scenes with long, verbally rich instructions. Recent work on evaluating instruction generation agents Zhao et al. (2021) reveals the ineffectiveness of standard learning and modeling approaches to this problem. Wang et al. (2021) improve the accuracy and interpretability of instructions in the RxR setting. Kamath et al. (2022) leverage this model to synthesize additional data for training instruction-following agents. Our work offers useful principles for improving these models.
Figure 1: An overview of our approach. We aim to build speaker agents that can guide humans to accomplish goals through natural language. Standard evaluation that computes task performance metrics is not helpful for directing the development of the evaluated agents (a). We propose a mathematical framework called “bounded pragmatic agent” that can characterize the operations of both AI-based and human speakers (b). Viewing AI-based agents and humans through this unifying lens enables us to compare them on more fine-grained capabilities (c), and better instruct future development of these agents towards leveling with human performance (d).
**Mathematical Models of Human Communication.** Human communication is a cooperative act (Grice, 1975; Scott-Phillips, 2014; Tomasello, 2019). Pragmatic communication in humans may involve different cognitive capabilities like basic understanding of language and social rules (Trosborg, 2010) and reasoning about the physical world (Bender and Koller, 2020) and human behavior (Ganae and Mudasir, 2015; Enrici et al., 2019; Rubio-Fernandez, 2021). Our work describes similar capabilities but provides a mathematical interpretation that allows for computational evaluation of those capabilities. Development of mathematical models of human communication has been greatly useful for understanding human behaviors (Ho et al., 2016; Sumers et al., 2022) and building communication agents (Andreas and Klein, 2016; Fried et al., 2017, 2018; FAIR; Lin et al., 2022). Numerous variants of these models have been proposed. Wang et al. (2020) present a comprehensive comparison of these variants and unify them under a framework inspired by optimal transport. Since we are interested more in characterizing general capabilities than specific implementation, the model we propose in this work is a generalized version capturing the essence of these models.
**Evaluating Cognitive Capabilities of Neural Networks.** A plethora of benchmarks for evaluating the cognitive capabilities of AI-based agents have been created, focusing on theory-of-mind capabilities (Le et al., 2019; Nematzadeh et al., 2018), grounding (Lachmy et al., 2021; Udagawa and Aizawa, 2019; Haber et al., 2019), commonsense reasoning (Talmor et al., 2018; Levesque et al., 2012; Zellers et al., 2019; Sap et al., 2019), etc. Recent work (Sap et al., 2022; Hu et al., 2022) examines the performance of large language models on various cognitive tasks. They evaluate a capability by designing language tasks that are assumed to require the evaluated capability to solve. This approach is limited to large language models that can perform few-shot learning. A limitation of the approach is that it may not be possible to determine whether an agent solves the tasks in the intended way. Our evaluation scheme follows a different principle: we mathematically characterize exactly the capabilities we want to evaluate, and compare agents that possess different levels of these capabilities.
## 3 Problem Setting
### Environment and Human Listener
We consider a human listener \(h\) acting in a POMDP environment with state space \(\mathcal{S}\), action space \(\mathcal{A}^{h}\), transition function \(E^{h}(s_{t+1}\mid s_{t},a_{t})\), start-state distribution \(E^{h}_{1}(s_{1})\), observation space \(\Omega\), and observation function \(O^{h}(o_{t+1}\mid s_{t+1})\). An _instruction_\(\mathbf{u}\in\mathcal{U}\) is an utterance consisting of words belonging to a vocabulary \(\mathcal{V}\). The human can follow instructions to generate trajectories. For example, in an indoor navigation setting, upon hearing _"go the kitchen and stop next to the oven"_, a human can walk to the specified location. A \(T\)-step _trajectory_\(\mathbf{e}^{\mathtt{h}}=(s_{1},o^{h}_{1},a^{h}_{1},\cdots,s_{T},o^{h}_{T},a^{h}_{T})\) is an execution of an instruction. The observable part of the trajectory \(\mathbf{\bar{e}}^{\mathtt{h}}=(o^{h}_{1},a^{h}_{1},\cdots,o^{h}_{T},a^{h}_{T})\) is obtained by excluding the states from \(\mathbf{e}^{\mathtt{h}}\).
To follow instructions, we imagine the human implements a policy \(\pi^{\mathtt{h}}(a\mid\mathbf{\bar{e}},\mathbf{u})\) that takes as input a partially observed trajectory \(\mathbf{\bar{e}}\) and an instruction \(\mathbf{u}\), and outputs a distribution over actions in \(\mathcal{A}^{h}\). Given an instruction \(\mathbf{u}\), a \(T\)-step trajectory is generated as follows. The human starts in \(s_{1}\sim E^{h}_{1}\) and observes \(o^{h}_{1}\sim O^{h}(s_{1})\). At time step \(t\), let \(\mathbf{\bar{e}}_{1:t}=(o^{h}_{1},a^{h}_{1},\cdots,o^{h}_{t})\). The human chooses \(a^{h}_{t}\sim\pi^{\mathtt{h}}(\cdot\mid\mathbf{\bar{e}}_{1:t},\mathbf{u})\), executes the action, and transitions to \(s_{t+1}\sim E^{h}(s_{t},a^{h}_{t})\). There, they perceive \(o^{h}_{t+1}\sim O^{h}(s_{t+1})\). In the end, they issue a special stop action \(a_{T}\) to terminate the trajectory. We define \(L_{h}(\mathbf{e}\mid\mathbf{u})\) as the probability of generating a trajectory \(\mathbf{e}\) according to this process. We will refer to \(L_{h}\) as the real listener to distinguish it from the theory-of-mind listener, which is a mental model of the real listener that an agent constructs.
### Pragmatic Instruction Generation
In pragmatic instruction generation (PIGen), the goal is to learn a speaker agent \(r\) that generates language instructions to guide a human listener \(h\) to reach states in the environment. The term "pragmatic" emphasizes that the agent generates language in a social context to achieve a communication goal. In each PIGen task, the speaker agent first imagines an intended trajectory \(\mathbf{e}^{\star}=(s_{1},o^{r}_{1},a^{r}_{1},\cdots,s_{T},o^{r}_{T},a^{r}_{T})\), which leads to the intended goal state \(s_{T}\) from the state \(s_{1}\) that the human is currently in. Because the human's action space and observation function may differ from those of the agent, they may not be able to comprehend \(\mathbf{e}^{\star}\) even if it is presented to them. Thus, the agent needs to translate the trajectory into an
_instruction_\(\hat{\mathbf{u}}\) that the human can understand and follow. To do so, it implements a _speaker model_\(S_{r}(\mathbf{u}\mid\mathbf{e})\) that takes as input a trajectory and computes a distribution over instructions. The objective of the problem can be written formally as
\[\operatorname*{arg\,max}_{S_{r}}\mathbb{E}_{\mathbf{e}^{\star}}\left[L_{h}(\mathbf{e}^{ \star}\mid\text{Gen}(S_{r},\mathbf{e}^{\star}))\right] \tag{1}\]
where \(\text{Gen}(S_{r},\mathbf{e}^{\star})\) is the process implemented by the agent for generating an instruction.
The agent is evaluated using a dataset \(\mathcal{D}_{\text{eval}}\) of held-out trajectories. For each trajectory \(\mathbf{e}^{\star}_{k}\in\mathcal{D}_{\text{eval}}\), we generate an instruction \(\hat{\mathbf{u}}_{k}=\text{Gen}(S_{r},\mathbf{e}^{\star}_{k})\). The instruction is then presented to a human listener to follow, producing a trajectory \(\mathbf{e}^{\text{h}}_{k}\sim L_{h}(\cdot\mid\hat{\mathbf{u}}_{k})\). The performance of the agent, denoted by \(\rho(r)\), is the average similarity between the human-generated trajectory and the intended trajectory
\[\rho(r)\triangleq\frac{1}{|\mathcal{D}_{\text{eval}}|}\sum_{\mathbf{e}^{\star}_{k }\in\mathcal{D}_{\text{eval}}}\Psi(\mathbf{e}^{\text{h}}_{k},\mathbf{e}^{\star}_{k}) \tag{2}\]
where \(\Psi\) is a similarity metric.
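As a concrete reading of Eqs. 1 and 2, the evaluation loop can be sketched as below. All four callables are placeholders introduced for illustration: in the actual protocol, `human_follow` corresponds to a crowd-sourced human execution of the instruction, not a function call.

```python
def speaker_performance(speaker_gen, human_follow, eval_paths, psi):
    """Estimate rho(r) (Eq. 2): average similarity Psi between each
    intended trajectory e* and the trajectory a human produces when
    following the generated instruction Gen(S_r, e*)."""
    scores = []
    for e_star in eval_paths:
        u_hat = speaker_gen(e_star)      # Gen(S_r, e*)
        e_h = human_follow(u_hat)        # a sample from L_h(. | u_hat)
        scores.append(psi(e_h, e_star))  # e.g. a success indicator or nDTW
    return sum(scores) / len(scores)
```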
## 4 Building Agents that Communicate Pragmatically like Humans
Faced with instances of the PIGen problem daily, humans have evolved a highly efficient cognitive process for solving this problem. To build agents with a similar level of efficacy, we propose a mathematical model characterizing the human cognitive process for instruction generation (§4.1). We then derive the capabilities for an agent implementing that model to optimally solve PIGen (§4.2). Finally, we present an evaluation scheme for assessing these capabilities on a general class of speaker agents (§4.3).
### A Mathematical Cognitive Model of Instruction Generation
To formulate how humans generate instructions, we build on mathematical models of cooperative communication Wang et al. (2020); Goodman and Frank (2016); Shafto et al. (2014). We consider a general version where a speaker agent constructs a _pragmatic speaker_ model \(S_{\text{prag}}(\mathbf{u}\mid\mathbf{e})\) based on two constituents: a _base speaker_ model \(S_{\text{base}}(\mathbf{u}\mid\mathbf{e})\) and a _theory-of-mind (ToM) listener_ model \(L_{\text{ToM}}(\mathbf{e}\mid\mathbf{u})\). The base speaker represents general knowledge of the agent about the world and the language it speaks. The ToM listener reflects situated knowledge about the listener, simulating how they would behave in the environment given an instruction. The construction of \(S_{\text{prag}}\) is defined as a Bayesian belief update that alters the initial belief \(S_{\text{base}}\) by re-weighting with \(L_{\text{ToM}}\):
\[S_{\text{prag}}(\mathbf{u}\mid\mathbf{e})\propto L_{\text{ToM}}(\mathbf{e}\mid\mathbf{u})S_{ \text{base}}(\mathbf{u}\mid\mathbf{e}) \tag{3}\]
The pragmatic speaker utters an instruction of maximum probability under its model:
\[\hat{\mathbf{u}}_{\text{prag}} \triangleq\operatorname*{arg\,max}_{\mathbf{u}\in\mathcal{U}}S_{ \text{prag}}(\mathbf{u}\mid\mathbf{e}^{\star})\] \[=\operatorname*{arg\,max}_{\mathbf{u}\in\mathcal{U}}L_{\text{ToM}}(\bm {e}^{\star}\mid\mathbf{u})S_{\text{base}}(\mathbf{u}\mid\mathbf{e}^{\star}) \tag{4}\]
This choice reflects that the speaker wants to maximize the chance of the listener interpreting its instruction correctly, but it is still influenced by prior knowledge.
While this model accounts for human behaviors highly accurately on problems where \(\mathcal{U}\) is a small discrete space Frank and Goodman (2012), in problems where \(\mathcal{U}\) is unbounded like PIGen, it is unlikely that humans, who are known to be agents with bounded rationality Simon (1957), are able to implement the full Bayesian update in the model's formulation. A hypothesis, which is supported by empirical evidence, is that humans approximate the update via Monte-Carlo sampling Sanborn and Chater (2016); Sanborn et al. (2010); Vul et al. (2014); Mamassian et al. (2002). Applying this hypothesis to our setting, we derive a more practical model of how humans generate instructions, in which they perform the Bayesian update on a subspace \(\mathcal{U}_{\text{sub}}\) of \(\mathcal{U}\) chosen by drawing samples from \(S_{\text{base}}\)
\[\hat{\mathbf{u}}_{\text{bounded-prag}}\triangleq\operatorname*{arg\,max}_{\mathbf{u} \in\mathcal{U}_{\text{sub}}\subset\mathcal{U}}L_{\text{ToM}}(\mathbf{e}^{\star} \mid\mathbf{u}) \tag{5}\]
where \(\mathcal{U}_{\text{sub}}\) is a small set of candidate instructions generated by \(S_{\text{base}}\). We call an agent that generates instructions according to Eq 5 a _bounded pragmatic speaker_ (Figure 2). For such a speaker, instruction generation involves two cognitive tasks: the candidate generation task (performed by \(S_{\text{base}}\)) and the candidate evaluation task (performed by \(L_{\text{ToM}}\)). The former task ensures that the generation of an instruction is efficient, while the latter guarantees the generated instruction conveys the intended meaning.
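A bounded pragmatic speaker (Eq 5) reduces to a sample-then-rerank loop. The sketch below is schematic: `base_speaker_sample` and `tom_score` are placeholder callables standing in for drawing from \(S_{\text{base}}\) and evaluating \(L_{\text{ToM}}(\mathbf{e}^{\star}\mid\mathbf{u})\), and the candidate-set size `n` is an assumption.

```python
def bounded_pragmatic_generate(base_speaker_sample, tom_score, e_star, n=10):
    # Candidate generation: a small subspace U_sub drawn from the base speaker.
    candidates = [base_speaker_sample(e_star) for _ in range(n)]
    # Candidate evaluation: the ToM listener picks the instruction it
    # predicts the listener would follow most faithfully.
    return max(candidates, key=lambda u: tom_score(e_star, u))
```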
### Essential Cognitive Capabilities of Pragmatic Instruction Generation Agents
What cognitive capabilities enable humans to effectively solve the PIGen problem (§3.2)?
Viewing humans as bounded pragmatic agents, we can characterize those capabilities by identifying the requirements for a bounded pragmatic agent to optimize the PIGen objective (Eq 1). A general condition is that the instruction \(\hat{\mathbf{u}}_{\text{bounded-prag}}\) selected by the agent must satisfy
\[\hat{\mathbf{u}}_{\text{bounded-prag}}=\mathbf{u}^{\star}\triangleq\operatorname*{ arg\,max}_{\mathbf{u}}L_{h}(\mathbf{e}^{\star}\mid\mathbf{u}) \tag{6}\]
where \(L_{h}\) is the real listener.
We translate this condition into conditions for the constituent models, \(S_{\text{base}}\) and \(L_{\text{ToM}}\), of the agent. The condition for \(S_{\text{base}}\) is that the candidate set \(\mathcal{U}_{\text{sub}}\) generated by it must contain the optimal instruction \(\mathbf{u}^{\star}\) (condition 1). Fulfilling this condition requires \(S_{\text{base}}\) to be capable of quickly generating candidates and placing sufficiently high probability on \(\mathbf{u}^{\star}\) so that the instruction can be found by sampling a few candidates. We refer to this capability as the _search capability_ of an agent.
The condition for \(L_{\text{ToM}}\) is that it must rank \(\mathbf{u}^{\star}\) first among the candidates (condition 2). Meeting this condition demands having the capability of mentally (counterfactually) simulating the behavior of the listener in an environment, and evaluating whether the communicated intention is actualized in the simulation. We refer to this capability as the _ToM capability_.
The search and ToM capabilities are orthogonal and complementary. An agent with flawless ToM capability can evaluate the goodness of instructions given to it, but may not be able to efficiently generate good instructions by itself. In contrast, an agent with effective search capability can quickly bring to attention highly relevant utterances but may not always select the best one for its communication purposes if it has a misleading ToM model.
### Assessing the Cognitive Capabilities of an Instruction Generation Agent
We consider a speaker agent \(r\) that learns a model \(S_{r}(\mathbf{u}\mid\mathbf{e})\) and communicates a trajectory \(\mathbf{e}^{\star}\) by running an inference algorithm to compute an instruction \(\hat{\mathbf{u}}_{\text{infer}}\approx\operatorname*{arg\,max}_{\mathbf{u}\in \mathcal{U}}S_{r}(\mathbf{u}\mid\mathbf{e}^{\star})\). Generative LSTM- or Transformer-based models that implement greedy or beam-search decoding are examples of such an agent.
We notice that, like humans, \(r\) also possesses search and ToM capabilities. On one hand, it can generate candidate instructions like a base speaker by sampling from \(S_{r}\) or executing an inference algorithm. On the other hand, for a fixed \(\mathbf{e}^{\star}\), it can use \(S_{r}(\mathbf{u}\mid\mathbf{e}^{\star})\) as a ToM model to rank instructions. Improving these capabilities is crucial for \(r\) to better solve PIGen. In fact, suppose \(S_{r}\) satisfies condition 2 and the following candidate set generated by \(S_{r}\)
\[\mathcal{U}_{\text{sub}}^{r}\triangleq\{\hat{\mathbf{u}}_{\text{infer}}\}\cup\{ \mathbf{u}_{i}\sim S_{r}\mid 1\leq i\leq N\} \tag{7}\]
fulfills condition 1.
Figure 2: The cognitive process of a bounded pragmatic speaker. The speaker implements two models: a base speaker model and a theory-of-mind listener model. In every task, the speaker first imagines a trajectory it wants to convey to the human listener. To reduce the search space, it then uses the base speaker to generate a small set of relevant candidate instructions. After that, it employs the theory-of-mind model to simulate how the human listener would follow each instruction in the candidate set. The speaker then selects the candidate instruction that causes the theory-of-mind listener to generate the trajectory most similar to the intended trajectory. The output instruction is finally sent to the human listener for a real execution in the environment.
Then, instead of running the inference algorithm, it can generate instructions as a bounded pragmatic agent as follows
\[\hat{\mathbf{u}}\triangleq\operatorname*{arg\,max}_{\mathbf{u}\in\mathcal{U}^{r}_{\text{sub}}}S_{r}(\mathbf{u}\mid\mathbf{e}^{\star}) \tag{8}\]
and optimize the PIGen objective.
To evaluate each capability of \(r\), we measure the performance gap between the agent and a skyline agent which is at human level in the evaluated capability but is equally good as \(r\) at the other capability. Specifically, we define \(r_{\text{oracle-search}}\) to be an agent that employs \(S_{r}\) as the ToM model but is given a "gold" candidate set \(\mathcal{U}^{\star}_{\text{cand}}\) that always contains the ground-truth instruction \(\mathbf{u}^{\star}\). It outputs an instruction as follows
\[\hat{\mathbf{u}}_{\text{oracle-search}}\triangleq\operatorname*{arg\,max}_{\mathbf{u}\in\mathcal{U}^{\star}_{\text{cand}}}S_{r}(\mathbf{u}\mid\mathbf{e}^{\star}) \tag{9}\]
This agent has similar ToM capability as \(r\) but human-level search capability (in fact, its search capability satisfies condition 1). Next, we construct \(r_{\text{oracle-ToM}}\) which generates candidates using \(S_{r}\) but employs a real human to select the output instruction
\[\hat{\mathbf{u}}_{\text{oracle-ToM}}\triangleq\operatorname*{arg\,max}_{\mathbf{u}\in\mathcal{U}^{r}_{\text{sub}}}L_{h}(\mathbf{e}^{\star}\mid\mathbf{u}) \tag{10}\]
where \(\hat{\mathbf{u}}_{\text{infer}}\) is the instruction generated by the inference algorithm that \(r\) implements and \(\mathcal{U}^{r}_{\text{sub}}\) is defined as in Eq 7. The search capability of \(r_{\text{oracle-ToM}}\) is as good as \(r\) but its ToM capability is that of a human.
We define the prospective performance gain (PPG) with respect to each capability as follows
\[\text{PPG}_{\text{search}}(r) \triangleq\rho(r_{\text{oracle-search}})-\rho(r) \tag{11}\] \[\text{PPG}_{\text{ToM}}(r) \triangleq\rho(r_{\text{oracle-ToM}})-\rho(r) \tag{12}\]
where \(\rho\) computes the performance metric of an agent on evaluation data (Eq 2 of §3.2). The metric computes the potential improvement if one of the capabilities is enhanced. It implies which of the two capabilities of \(r\) is currently more deficient and thus informs the future development direction for the agent. For example, if \(\text{PPG}_{\text{search}}(r)\) is large and \(\text{PPG}_{\text{ToM}}(r)\) is small, it means that the evaluated agent is scoring the candidate instructions highly accurately but it is bad at finding high-score instructions. In this case, developers may want to focus on devising a more effective inference algorithm for the agent. On the other hand, if the agent's estimated scores are poorly calibrated, signified by \(\text{PPG}_{\text{ToM}}(r)\) being large, building a better planning module that simulates the listener's behavior more accurately would yield a significant performance boost.
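The two skyline agents and the PPG metrics translate directly into a hedged sketch. The callables (`S_r_score`, `human_score`, `rho`) are placeholders for the agent's own model, a human judgment approximating \(L_{h}(\mathbf{e}^{\star}\mid\mathbf{u})\), and the evaluation of Eq 2, respectively.

```python
def oracle_search_generate(S_r_score, gold_candidates, e_star):
    # r_oracle-search (Eq. 9): human-level search via a gold candidate set
    # containing u*, but the agent's own model S_r still ranks candidates.
    return max(gold_candidates, key=lambda u: S_r_score(u, e_star))

def oracle_tom_generate(candidates, human_score, e_star):
    # r_oracle-ToM (Eq. 10): the agent's own candidates, ranked by a real
    # human acting as an optimal theory-of-mind model.
    return max(candidates, key=lambda u: human_score(e_star, u))

def prospective_gains(rho, r, r_oracle_search, r_oracle_tom):
    # Eqs. 11-12: a large PPG_ToM relative to PPG_search indicates that the
    # bottleneck is candidate evaluation, not candidate generation.
    return rho(r_oracle_search) - rho(r), rho(r_oracle_tom) - rho(r)
```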
## 5 Improving ToM Capability with Ensemble Instruction-Following Agents
We improve the ToM capability of an agent \(r\) by turning it into a bounded pragmatic agent that uses the original model \(S_{r}\) as the base speaker but is equipped with a better ToM model than \(S_{r}\). A common approach for building a ToM model is to learn an instruction-following policy \(\hat{\pi}(a\mid\mathbf{u},\mathbf{\bar{e}})\) using the same dataset used for learning \(S_{r}\)(Andreas and Klein, 2016; Fried et al., 2017, 2018).
We argue that this approach has a potential drawback. A ToM model learned in this way is only exposed to human-generated input instructions. At deployment time, it would likely experience a _covariate shift_ because as a ToM model, the model is then asked to score instructions generated by a speaker model, not by humans. These instructions may be incorrect, ungrammatical, or may simply have a different style than human-generated instructions. This covariate shift would hamper the model's judgement. Our preliminary experiments (Appendix §A.5) confirm that using a listener trained on only human-generated inputs as the ToM model hurts rather than improves the performance of various speakers.
We show that this problem can be alleviated by employing ToM models that have calibrated uncertainty on unseen instructions. We obtain calibrated models through ensembling (Lakshminarayanan et al., 2017). Specifically, we randomly draw \(K\) 90%-samples of the training dataset. We use each sample to train an instruction-following policy \(\hat{\pi}^{(k)}(a\mid\mathbf{u},\mathbf{\bar{e}})\); the policies are also initialized with different random seeds.
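The subsampling step can be sketched as follows; the function name and the use of Python's `random` module are our own assumptions, and in practice each subsample is also paired with a distinct model-initialization seed.

```python
import random

def make_ensemble_training_sets(dataset, k=10, frac=0.9, seed=0):
    # Draw K random 90%-subsamples of the training data; each subsample
    # trains one instruction-following policy of the ensemble.
    rng = random.Random(seed)
    size = int(frac * len(dataset))
    return [rng.sample(dataset, size) for _ in range(k)]
```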
When the agent has access to a simulation of the environment, it can leverage the simulation to construct better ToM models. Note that the probability that a ToM model \(L_{\text{ToM}}\) assigns to an instruction can be seen as an expectation of a 0-1 metric: \(L_{\text{ToM}}(\mathbf{e}^{\star}\mid\mathbf{u})=\mathbb{E}_{\mathbf{e}\sim L_{\text{ToM}}(\cdot\mid\mathbf{u})}\left[\mathds{1}\{\mathbf{e}=\mathbf{e}^{\star}\}\right]\), which does not award partial credit if \(\mathbf{e}\) partially overlaps with \(\mathbf{e}^{\star}\). We make two changes: (i) replace the 0-1 metric with a soft metric \(\Psi(\mathbf{e},\mathbf{e}^{\star})\) that can measure partial similarity between trajectories and (ii) approximate the expectation by executing instruction-following policies \(\hat{\pi}^{(k)}\) in the environment to sample trajectories. Our final ToM-augmented agent selects its instruction as follows
\[\hat{\mathbf{u}}_{\text{augment-ToM}}\triangleq\operatorname*{arg\, max}_{\mathbf{u}\in\mathcal{U}_{\text{sub}}^{\prime}}L_{\text{ToM}}(\mathbf{u},\mathbf{e}^{ \star}) \tag{13}\] \[L_{\text{ToM}}(\mathbf{u},\mathbf{e}^{\star})\triangleq\frac{1}{KM}\sum_ {k=1}^{K}\sum_{j=1}^{M}\Psi(\mathbf{e}_{j}(\hat{\pi}^{(k)},\mathbf{u}),\mathbf{e}^{\star})\] \[\mathcal{U}_{\text{sub}}^{\prime}\triangleq\{\hat{\mathbf{u}}_{\text {infer}}\}\cup\{\mathbf{u}_{i}\sim S_{r}\mid 1\leq i\leq N\}\]
where \(\mathbf{e}(\pi,\mathbf{u})\) denotes a trajectory obtained by continuously sampling actions from a policy \(\pi\) conditioned on an instruction \(\mathbf{u}\).
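Eq 13 amounts to a Monte-Carlo average over ensemble rollouts. In this sketch, `env.rollout` is an assumed interface for sampling a trajectory \(\mathbf{e}(\hat{\pi}^{(k)},\mathbf{u})\) from the simulator, and `psi` is the soft similarity metric (e.g., nDTW).

```python
def ensemble_tom_score(policies, env, u, e_star, psi, m=3):
    """Average soft similarity of K x M sampled executions of `u`
    against the intended trajectory e* (Eq. 13)."""
    total = 0.0
    for policy in policies:            # K ensemble members
        for _ in range(m):             # M sampled rollouts each
            e = env.rollout(policy, u)
            total += psi(e, e_star)
    return total / (len(policies) * m)

def augmented_tom_generate(candidates, policies, env, e_star, psi):
    # Select the candidate whose simulated executions best match e*.
    return max(candidates,
               key=lambda u: ensemble_tom_score(policies, env, u, e_star, psi))
```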
## 6 Experimental Setup
### Environment and Dataset
We setup an instruction generation problem in 3D environments using the Matterport3D simulator Anderson et al. (2018). The simulator photo-realistically emulates the visual perception of a person walking in an indoor environment. Traveling in an environment is simulated as traversing in a graph where each node corresponds to a location. At any location, an agent is provided with RGB images capturing the 360-degree panoramic view when looking from that location.
We train our speaker and listener models using the Room-to-Room (R2R) dataset which accompanies the simulator. The R2R dataset was originally created for training instruction-following agents. Each data point was collected by asking a crowd-worker to write a verbal description of a path in an environment. In the end, each path was annotated with three instructions. Each instruction contains 29 words on average. The dataset is split into a training set (61 environments, 4,675 paths), a seen validation set (340 paths) whose paths are sampled in the training environments, and an unseen validation set (11 environments unseen during training, 783 paths).
We train the models using the training set and validate them on the unseen validation set for model selection. The final performance metrics are computed on the seen validation set.
### Speaker Models
We evaluate three speaker model architectures. The first is a GPT-2 model pre-trained on text Radford et al. (2019) and fine-tuned on the R2R training set. The other two models are encoder-decoders: one implements an LSTM architecture similar to Shen et al. (2022), and the other is based on a Transformer architecture Vaswani et al. (2017). The parameters of these two models are randomly initialized.
Training. We train the speakers with a standard maximum-likelihood objective using the AdamW optimizer Loshchilov and Hutter (2019) with a learning rate of \(10^{-4}\). More detailed model implementation and hyperparameters are provided in §A.1. During training, we select the best model based on the unseen-validation BLEU score Papineni et al. (2002) of the model-generated instructions with respect to the ground-truth instructions.
### Human Evaluation
We evaluate each speaker model on 75 paths in the unseen validation data split. In the end, we have annotated 1,200 instructions generated by 16 different systems (humans, 3 speaker models, and their ablated and augmented versions).
To evaluate a speaker model, we present its generated instructions to a human annotator and ask them to follow the instructions to navigate in Matterport3D environments. We adapt the PanGEA tool2 to set up a web navigation interface and create a task on Amazon Mechanical Turk (MTurk) to recruit human evaluators. We pay the evaluator $5.20 per task, which takes about 25 minutes. For each evaluation task, we ask the human evaluator to complete six instruction-following sessions.
Footnote 2: [https://github.com/google-research/pagea](https://github.com/google-research/pagea)
Quality Assurance. One of the six sessions, which appears in all tasks, is a quality-control test featuring an easy-to-follow human-written instruction. We only approve an evaluator if they navigate successfully to the goal destination in this test. Following Zhao et al. (2021), we instruct the judges to not explore the environments unnecessarily and not wander back and forth unless they are lost. We record the trajectories created by the human and use them to compute the performance metrics. More details about the crowd-sourcing interface are given in Appendix §A.4.
Performance Metrics. The quality of a speaker is determined by the similarity between the intended trajectory and the actual trajectories that the
speaker's instructions induce the human evaluators to generate. We compute these similarity metrics:
* Success rate (SR) averages binary indicators of whether the final location of a human-generated trajectory is within 3 meters of the final location of the intended trajectory;
* SPL (Anderson et al., 2018) weights the success indicator with the ratio between the intended traveling distance and the actual one;
* NDTW and SDTW are metrics based on dynamic time-warping alignment (Magalhaes et al., 2019), capturing the similarity between two point sequences. NDTW computes only a sequence similarity score while SDTW weights the score with the success indicator. A minimal sketch of both metrics follows this list.
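The sketch below spells out the two DTW-based metrics under our reading of Magalhaes et al. (2019): nDTW decays exponentially with the length-normalized DTW cost, and SDTW gates nDTW by the 3-meter success criterion. The `dist` callable and the exact normalization are assumptions to be checked against the original definition.

```python
import math
import numpy as np

def dtw(path, ref, dist):
    # Classic O(|path| * |ref|) dynamic time warping cost between two
    # sequences of locations under a pairwise distance `dist`.
    n, m = len(path), len(ref)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(path[i - 1], ref[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def ndtw(path, ref, dist, d_th=3.0):
    # Normalized DTW: 1.0 for a perfect match, decaying with divergence.
    return math.exp(-dtw(path, ref, dist) / (len(ref) * d_th))

def sdtw(path, ref, dist, d_th=3.0):
    # Success-weighted DTW: nDTW gated by the 3-meter success criterion.
    success = dist(path[-1], ref[-1]) <= d_th
    return float(success) * ndtw(path, ref, dist, d_th)
```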
## 7 Experiments
We investigate the following questions:
1. _How well do the speakers perform on our problem?_ We find that, while implementing advanced model architectures, these speakers perform poorly compared to human speakers.
2. _What causes their performance deficiency?_ Using our evaluation scheme, we identify that the speakers possess decent search capability but inadequate ToM capability.
3. _Can we improve the speakers by equipping them with better ToM models?_ We train ensembles of state-of-the-art instruction-following agents to serve as the ToM models for the speakers, and obtain significant improvements.
4. _What are the challenges in bridging the performance gap with human speakers?_ We show that state-of-the-art instruction-following agents are not optimally trained to serve as ToM models because they are mostly trained to predict how humans follow human-generated instructions, but as ToM models, they are required to accurately predict how humans follow model-generated instructions.
How well do the speakers perform on our problem? Figure 3 shows the performance of the three speaker models on a variety of metrics. We also evaluate the human-written instructions provided by the R2R dataset. Overall, there is a wide margin between the models and the humans. The best model speaker (EncDec-Transformer) lags behind the humans by 21.6 NDTW points. We find that the encoder-decoder architecture with cross-attention of EncDec-Transformer outperforms the decoder-only self-attention architecture of GPT-2 (+11.7 NDTW), indicating that fusing the vision and language features too early in an architecture may be detrimental. On the other hand, EncDec-Transformer leads over EncDec-LSTM by 4.1 points, suggesting that the Transformer architecture is more effective than LSTM in this problem.
What causes the speakers' performance deficiency? Next, we investigate whether the lack of search or ToM capability is responsible for the performance deficiency of the speakers. Following our evaluation scheme, we compute the prospective performance gains when one of the capabilities is made optimal. The results presented in Figure 4 show that it is an underperforming ToM capability that primarily causes the models to perform poorly. While equipping the models with optimal search capability only improves their performance by 30% on average, granting them optimal ToM capability nearly doubles their performance metrics. In fact, the search capability of the models is already as good as that of the humans we employ, because the models with optimal ToM capability achieve even
Figure 4: Performance of the speakers and their human-augmented versions. Possessing human-level ToM capability improves performance of the speakers, showing that their original ToM capability is highly deficient compared to that of humans.
Figure 3: Performance of different speakers on held-out evaluation data. There is a considerable gap between model and human speakers.
slightly higher SDTW score than the human speakers (e.g., 75.2 of EncDec-Transformer compared to 71.0 of humans), though the differences are not statistically significant.
Can we improve the speakers by equipping them with better ToM models? Following the procedure described in Section §5, we train various state-of-the-art instruction-following agents to serve as ToM listener models for the speakers. These listeners are trained using maximum log-likelihood on the same data as the speakers. Performances of different combinations of speakers and listeners are given in Table 1. We attain the largest improvement of 7.9 NDTW points over the best base speaker (EncDec-Transformer) by augmenting this speaker with an ensemble of 10 EnvDrop-CLIP listeners as the ToM model. We observe that ensemble models consistently outperform single models. More results about the detrimental effects of using single listeners on the speakers are given in Appendix §A.5. Despite the promising improvements, there remains a large gap of 17.9 NDTW points between our best speaker and the human speakers.
What are the challenges in bridging the performance gap with human speakers? In the previous set of experiments, a notable pattern emerges: the performance superiority of a listener on the R2R instruction-following problem, where it is asked to follow _human-generated_ instructions, does not translate into a superiority in serving as a ToM model, where it is asked to rank _model-generated_ instructions. To further illustrate this phenomenon, we measure the agreement between human listeners and model listeners on instructions generated by different speakers. We define the agreement score between a human \(L_{h}\) and a model \(\hat{L}\) as
\[\text{Agreement}(L_{h},\hat{L})=\text{Average}_{\mathbf{u}\in\mathcal{D}_{\text{eval}}}\left(\text{NDTW}(\mathbf{e}_{h}(\mathbf{u}),\hat{\mathbf{e}}(\mathbf{u}))\right) \tag{14}\]
where \(\mathbf{e}_{h}(\mathbf{u})\) and \(\hat{\mathbf{e}}(\mathbf{u})\) are the trajectories generated by \(L_{h}\) and \(\hat{L}\) given \(\mathbf{u}\), respectively, and \(\mathcal{D}_{\text{eval}}\) denotes the R2R seen validation set.
As seen from Table 2, the listener agents agree more with the humans on human-generated instructions than on model-generated ones. These results can be explained through the lens of training-deployment covariate shift: during training, the model listeners are only trained to agree with human listeners on human-generated instructions, and thus do not know how to behave properly on other types of instructions.
| ToM listener \(L_{\text{ToM}}\) | Fine-tuned GPT-2 | EncDec-LSTM | EncDec-Transformer |
| --- | --- | --- | --- |
| None | 37.7 (▲ 0.0) | 45.3 (▲ 0.0) | 49.4 (▲ 0.0) |
| Single VLN-BERT (Majumdar et al., 2020) | 38.9 (▲ 1.2) | 39.8 (▼ 5.5) | 46.2 (▼ 3.2) |
| Ensemble of 10 EnvDrop-CLIP (Shen et al., 2022) | 37.8 (▲ 0.1) | 53.1\({}^{\dagger}\) (▲ 7.9) | 57.3\({}^{\dagger}\) (▲ 7.9) |
| Ensemble of 10 VLN↻BERT (Hong et al., 2021) | 43.4 (▲ 5.7) | 56.4\({}^{\ddagger}\) (▲ 11.1) | 54.2 (▲ 4.8) |
| Humans (skyline) | 72.9\({}^{\ddagger}\) (▲ 35.2) | 76.2\({}^{\ddagger}\) (▲ 30.9) | 75.2\({}^{\ddagger}\) (▲ 25.8) |

Table 1: Performance of the speakers when equipped with different ToM models; the three right-hand columns are the base speakers \(S_{\text{base}}\). Employing ensemble instruction-following agents significantly improves their performance. \({}^{\ddagger}\) and \({}^{\dagger}\) indicate results that are significantly higher than those of the corresponding "None" baseline (row 1) with \(p<0.05\) and \(p<0.1\), respectively (according to a two-related-sample t-test).
| Instructions generated by | VLN-BERT | EnvDrop-CLIP | VLN↻BERT |
| --- | --- | --- | --- |
| Humans (R2R dataset) | 65.4 (▼ 0.0) | 47.2 (▼ 0.0) | 65.0 (▼ 0.0) |
| Fine-tuned GPT-2 | 43.1\({}^{\ddagger}\) (▼ 22.3) | 31.6\({}^{\ddagger}\) (▼ 15.6) | 39.9\({}^{\ddagger}\) (▼ 25.1) |
| EncDec-LSTM | 50.0\({}^{\dagger}\) (▼ 15.4) | 43.7 (▼ 3.5) | 49.3\({}^{\dagger}\) (▼ 15.7) |
| EncDec-Transformer | 52.1\({}^{\ddagger}\) (▼ 13.3) | 41.5 (▼ 5.5) | 51.9\({}^{\ddagger}\) (▼ 13.1) |

Table 2: Agreement of human and model listeners (columns) on instructions generated by different speakers (rows). The level of agreement decreases substantially when shifting from human-generated to model-generated instructions. \({}^{\ddagger}\) and \({}^{\dagger}\) indicate results that are significantly lower than the human baseline (row 1) with \(p<0.05\) and \(p<0.1\), respectively (according to a two-related-sample t-test).
## 8 Conclusion
This work introduces a framework for analyzing the cognitive capabilities of instruction generation agents. Our analysis highlights the necessity of constructing better ToM models for these agents. We argue that learning ToM models is faced with challenges that are distinct from those of learning instruction-following agents. We hope that our findings will motivate the community to create novel datasets, training methods, and evaluation procedures for tackling this problem.
|
2309.10400 | PoSE: Efficient Context Window Extension of LLMs via Positional
Skip-wise Training | Large Language Models (LLMs) are trained with a pre-defined context length,
restricting their use in scenarios requiring long inputs. Previous efforts for
adapting LLMs to a longer length usually requires fine-tuning with this target
length (Full-length fine-tuning), suffering intensive training cost. To
decouple train length from target length for efficient context window
extension, we propose Positional Skip-wisE (PoSE) training that smartly
simulates long inputs using a fixed context window. This is achieved by first
dividing the original context window into several chunks, then designing
distinct skipping bias terms to manipulate the position indices of each chunk.
These bias terms and the lengths of each chunk are altered for every training
example, allowing the model to adapt to all positions within target length.
Experimental results show that PoSE greatly reduces memory and time overhead
compared with Full-length fine-tuning, with minimal impact on performance.
Leveraging this advantage, we have successfully extended the LLaMA model to
128k tokens using a 2k training context window. Furthermore, we empirically
confirm that PoSE is compatible with all RoPE-based LLMs and position
interpolation strategies. Notably, our method can potentially support infinite
length, limited only by memory usage in inference. With ongoing progress for
efficient inference, we believe PoSE can further scale the context window
beyond 128k. | Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li | 2023-09-19T08:03:38Z | http://arxiv.org/abs/2309.10400v3 | # PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training
###### Abstract
Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts for adapting LLMs to a longer length usually requires fine-tuning with this target length (_Full-length_ fine-tuning), suffering intensive training cost. To decouple train length from target length for efficient context window extension, we propose **P**ositional **S**kip-wis**E** (PoSE) training that smartly simulates long inputs using a fixed context window. This is achieved by first dividing the original context window into several chunks, then designing distinct _skipping bias terms_ to manipulate the position indices of each chunk. These bias terms and the lengths of each chunk are altered for every training example, allowing the model to adapt to all positions within target length. Experimental results show that PoSE greatly reduces memory and time overhead compared with Full-length fine-tuning, with minimal impact on performance. Leveraging this advantage, we have successfully extended the LLaMA model to 128k tokens using a 2k training context window. Furthermore, we empirically confirm that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies. Notably, our method can potentially support infinite length, limited only by memory usage in inference. With ongoing progress for efficient inference, we believe PoSE can further scale the context window beyond 128k.
## 1 Introduction
Large Language Models (LLMs) have revolutionized language modeling and demonstrated impressive abilities to perform various tasks (Brown et al., 2020). However, even with their remarkable capacity, these LLMs remain restricted by pre-defined _context window_ sizes, suffering from notable performance decline when the number of input tokens exceeds these limits. Nevertheless, numerous application scenarios demand extremely long input sequences, including long document summarization (Huang et al., 2021), in-context learning with numerous examples (Li et al., 2023), and long document retrieval (Zhou et al., 2022), etc. This naturally poses a significant challenge of **context window extension**: Extending the context window of a pre-trained LLM to accommodate longer sequences.
Naively fine-tuning LLMs on inputs of target length for window extension has seen limited success due to the large disruption introduced by new position indices (Chen et al., 2023a). Addressing this, Position Interpolation methods (Chen et al., 2023a; kaikoendev, 2023; Peng et al., 2023) propose to down-scale the position indices to match the original window size, yielding improved results for context extension. However, these methods still rely on _Full-length_ fine-tuning, i.e., fine-tuning with context of target length, which is memory and time-intensive due to the computational complexity that increases quadratically with input length. For example, Chen et al. (2023a) use 32 A100 GPUs to extend LLaMA models from 2k to 8k context, and 128 A100 GPUs for even larger context. This computational cost has made it impractical to extend the context window to extreme lengths.
In this paper, we introduce **P**ositional **S**kip-wis**E** (PoSE) fine-tuning to decouple the fine-tuning length from the target context window length, unleashing the possibility of efficiently extending
context window to an extreme size. The key idea of PoSE is to simulate long inputs by manipulating position indices within a fixed context window. As depicted in Figure 1, we partition the original context window into several chunks, and adjust the position indices of each chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training example, so that the model can adapt to all positions (including both absolute and relative) within the target context window through fine-tuning. Meanwhile, by maintaining continuous position indices within each chunk, PoSE bears a close resemblance to pre-training. As a result, the model's pre-trained capacity for language modeling and comprehension is retained to the greatest degree.
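The index manipulation for a single training example can be sketched as below, under the assumption of uniform sampling for both the chunk lengths and the skipping biases; the exact sampling distributions may differ from the paper's.

```python
import random

def pose_position_ids(train_len=2048, target_len=8192, n_chunks=2):
    """Skip-wise position indices for one training example: continuous
    inside each chunk, with random skips between chunks so that, across
    many examples, indices cover the whole range [0, target_len)."""
    # Random chunk lengths that sum to the fixed training window.
    cuts = sorted(random.sample(range(1, train_len), n_chunks - 1))
    lengths = [b - a for a, b in zip([0] + cuts, cuts + [train_len])]

    # Random non-negative skips whose total stays within the budget.
    budget = target_len - train_len
    draws = sorted(random.randint(0, budget) for _ in range(n_chunks))
    skips = [draws[0]] + [draws[i] - draws[i - 1] for i in range(1, n_chunks)]

    pos, nxt = [], 0
    for length, skip in zip(lengths, skips):
        nxt += skip                           # skipping bias for this chunk
        pos.extend(range(nxt, nxt + length))  # continuous within the chunk
        nxt += length
    return pos  # len(pos) == train_len and max(pos) < target_len
```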
The advantages of our PoSE are threefold: **1) Memory and Time Efficiency:** By only requiring the original context size for fine-tuning, PoSE circumvents the quadratic increase in computational complexity with respect to target length during the fine-tuning stage, thereby significantly reducing memory and time overhead. **2) Potential for Extremely-Long Context:** We manage to extend the context window of LLaMA (Touvron et al., 2023) by up to 64 times (2k\(\rightarrow\)128k, k=1,024) while preserving a decent ability of language modeling and understanding. **3) Compatible with all RoPE-based LLMs and PI strategies:** The effectiveness of PoSE has been empirically validated across several representative RoPE-based LLMs, including LLaMA, LLaMA2 (Touvron et al., 2023), GPT-J (Wang and Komatsuzaki, 2021), and Baichuan (Baichuan, 2023). Additionally, PoSE has been demonstrated to be compatible with a variety of position interpolation methods, including Linear (Chen et al., 2023a), NTK (Peng and Quesnelle, 2023), and YaRN (Peng et al., 2023) interpolation.
Notably, by decoupling the fine-tuning and target length, PoSE can theoretically extend the context window to an infinite length. The only constraint is the memory usage during the inference phase. Hopefully, with the continuous advancements in efficient inference techniques, including Flash Attention (Dao et al., 2022; Dao, 2023), xFormers (Lefaudeux et al., 2022), vLLM (Kwon et al., 2023), etc., we believe PoSE can promisingly push the context window size to an even larger scale.
## 2 Related Work
Training Length-Extrapolatable Models.Length extrapolation aims to ensure that the model continues to perform well, even when the number of input tokens during inference exceeds the size of the context window on which the model is trained (Press et al., 2021). To this end, a series of positional embedding schemes have been proposed, including ALiBi (Press et al., 2021), xPos (Sun et al., 2023), NoPos (Haviv et al., 2022), etc.
Similar to our work, Ruoss et al. (2023) also attempted to simulate longer sequences at training time to mitigate out-of-distribution lengths. They proposed randomized positional encoding (RandPos), which randomly selects an ordered subset of position indices from longer sequences.
Figure 1: Position indices of Full-length fine-tuning v.s. PoSE fine-tuning for extending the context window size from 2,048 to 8,192. At each iteration, the former directly takes 8,192 tokens for fine-tuning, while PoSE manipulates the position indices of 2,048 tokens to simulate longer inputs. For example, we partition the original context window of 2,048 tokens into two chunks, and adjust the position indices of the second chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training example, so that the model can adapt to all relative positions of the target context window through fine-tuning.
Our proposed method, PoSE, diverges from their approach in several key aspects: First, RandPos is a positional embedding scheme designed for pre-training encoder-only models from scratch to enhance length generalization ability. In contrast, PoSE is a fine-tuning method that aims to efficiently extend the context window of pre-trained LLMs, the majority of which follow a decoder-only architecture. Second, in RandPos, the position indices between adjacent tokens are not continuous. However, in PoSE, the position indices within each chunk are intentionally made continuous to closely resemble the pre-training phase, therefore reducing the risk of disrupting the language modeling and understanding abilities learned during the pre-training stage.
Fine-tuning LLMs for Longer Context.Differing from length extrapolation, which primarily involves training a model from scratch to support lengths exceeding those it was initially trained for, context window extension focuses on extending the context window of a pre-trained LLM. Directly fine-tuning an existing LLM with a longer context window has been shown to progress slowly (Chen et al., 2023a). To expedite and stabilize training, Chen et al. (2023a) first down-scaled position indices to match the original context size through Linear Position Interpolation. Subsequently, a range of Positional Interpolation (PI) strategies have been introduced, including NTK (Peng and Quesnelle, 2023) and YaRN (Peng et al., 2023). More recently, LongLoRA (Chen et al., 2023b) proposes shift short attention to approximate full attention. However, all these methods require Full-length fine-tuning, suffering computational costs that grow with target context size. By contrast, our method decouples the training length from the target length, requiring only the original context size for fine-tuning.
Memory Transformers.An alternative strategy for managing extremely long input sequences involves the adoption of memory mechanisms. Typically, there are two lines of research for utilizing memory: the recurrence-based approach (Dai et al., 2019; Bulatov et al., 2022) and the retrieval-based approach (Wu et al., 2022; Wang et al., 2023; Tworkowski et al., 2023). The recurrence-based approach involves segmenting long inputs and reusing the hidden states obtained from preceding segments to serve as memory for the current segment. Nonetheless, this architecture is hindered by information loss and limited capacity for random access. On the other hand, the retrieval-based paradigm entails encoding prior sequences as (key, value) pairs and utilizing a memory retriever and reader to extract previously encoded information. The primary limitation of this approach is the absence of interaction between discrete memory segments. More recently, Mohtashami and Jaggi (2023) introduced landmark attention, which facilitates random access to any chunk of the input by introducing landmark tokens. In contrast, our method achieves full access to the entire input without any modifications to the attention mechanism.
## 3 Methodology
### Preliminaries
Rotary Position Embedding (RoPE).The use of RoPE (Su et al., 2021) has become pervasive in contemporary LLMs, including LLaMA (Touvron et al., 2023), GPT-J (Wang and Komatsuzaki, 2021), etc. It encodes the position information of tokens with a rotation matrix that naturally incorporates explicit relative position dependency. To elucidate, given a hidden vector \(\mathbf{h}=[h_{0},h_{1},...,h_{d-1}]\), where \(d\) is the hidden dimension, and a position index \(m\), RoPE operates as follows:
\[f(\mathbf{h},m)=\begin{pmatrix}h_{0}\\ h_{1}\\ h_{2}\\ h_{3}\\ \vdots\\ h_{d-2}\\ h_{d-1}\end{pmatrix}\otimes\begin{pmatrix}\cos m\theta_{0}\\ \cos m\theta_{0}\\ \cos m\theta_{1}\\ \cos m\theta_{1}\\ \vdots\\ \cos m\theta_{d/2-1}\\ \cos m\theta_{d/2-1}\end{pmatrix}+\begin{pmatrix}-h_{1}\\ h_{0}\\ -h_{3}\\ h_{2}\\ \vdots\\ -h_{d-1}\\ h_{d-2}\end{pmatrix}\otimes\begin{pmatrix}\sin m\theta_{0}\\ \sin m\theta_{0}\\ \sin m\theta_{1}\\ \sin m\theta_{1}\\ \vdots\\ \sin m\theta_{d/2-1}\\ \sin m\theta_{d/2-1}\end{pmatrix} \tag{1}\]
where \(\theta_{j}=10000^{-2j/d},j\in\{0,1,...,d/2-1\}\). Unlike previous absolute position encodings that are directly applied to the input vector \(\mathbf{x}\), RoPE is employed on the query and key vectors at each layer. Considering a query vector \(\mathbf{q}\) at position \(m\) and a key vector \(\mathbf{k}\) at position \(n\), the attention score
\(a(\mathbf{q},\mathbf{k})\) is defined as follows:
\[a(\mathbf{q},\mathbf{k})=<f(\mathbf{q},m),f(\mathbf{k},n)>\] \[\quad=\sum_{j=0}^{d/2-1}\left[(q_{2j}k_{2j}+q_{2j+1}k_{2j+1})\cos{( m-n)}\theta_{j}+(q_{2j}k_{2j+1}-q_{2j+1}k_{2j})\sin{(m-n)}\theta_{j}\right]\] \[\quad:=g(\mathbf{q},\mathbf{k},\mathbf{\theta},m-n) \tag{2}\]
Hence, RoPE encodes position information in a relative manner, as the attention score depends on the relative distances between positions rather than their absolute position values.
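To make this relative-position property concrete, the following minimal NumPy sketch (our own notation, not the authors' code) applies Eq. (1) to a vector and checks that the attention score depends only on \(m-n\):

```python
# Minimal NumPy sketch of RoPE (Eqs. 1-2); variable names are ours.
import numpy as np

def rope(h, m):
    """Rotate hidden vector h (even length d) to encode position m."""
    d = h.shape[0]
    theta = 10000.0 ** (-2 * np.arange(d // 2) / d)   # per-pair frequencies
    cos = np.repeat(np.cos(m * theta), 2)             # [cos m0, cos m0, cos m1, ...] pattern
    sin = np.repeat(np.sin(m * theta), 2)
    h_rot = np.empty_like(h)                          # [-h1, h0, -h3, h2, ...]
    h_rot[0::2] = -h[1::2]
    h_rot[1::2] = h[0::2]
    return h * cos + h_rot * sin

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# Both pairs below share the same relative offset m - n = 3, so scores match:
assert np.isclose(rope(q, 5) @ rope(k, 2), rope(q, 105) @ rope(k, 102))
```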
Problem Formulation.Given a Large Language Model pre-trained with a context window size of \(L_{c}\), our objective is to extend this context size to a target length \(L_{t}\), so that the model maintains good performance when processing input sequences containing a maximum of \(L_{t}\) tokens.
Position Interpolation (PI).In contrast to directly extending the position indices to \(L_{t}-1\) when dealing with an input text \(\mathbf{x}=\{x_{0},x_{1},...,x_{L_{t}}\}\), position interpolation down-scales the position indices to align with the original context window size \(L_{c}\). This approach effectively mitigates the risk of encountering extreme values and has been empirically demonstrated to enhance stability during fine-tuning. Various interpolation strategies have been proposed, with \(\alpha=L_{t}/L_{c}\) denoting the scaling factor:
* _Linear Interpolation._ As described by Chen et al. (2023a) and kaiokendev (2023), linear interpolation involves a proportional down-scaling of the position index \(m\) to \(m/\alpha\). Consequently, the attention score between a query \(\mathbf{q}\) at position \(m\) and a key \(\mathbf{k}\) at position \(n\) becomes \(g(\mathbf{q},\mathbf{k},\mathbf{\theta},(m-n)/\alpha)\), as defined in Equation 2. Theoretical analysis has substantiated that the interpolated attention score exhibits significantly greater stability compared to the extrapolated counterpart.
* _Neural Tangent Kernel (NTK) Interpolation._ In contrast to linear interpolation, NTK Interpolation alters the base of RoPE, effectively modifying the rotational "speed" of each dimension of RoPE (Peng and Quesnelle, 2023). Specifically, the original \(\theta_{j}=10000^{-2j/d},j\in\{0,1,...,d/2-1\}\) in RoPE is transformed into \(\theta^{\prime}_{j}=(10000\lambda)^{-2j/d}\), where \(\lambda=\alpha^{d/(d-2)}\). It is noteworthy that the value of \(\lambda\) is chosen to ensure that \(m\theta^{\prime}_{d/2-1}=(m/\alpha)\theta_{d/2-1}\).
* _YaRN Interpolation._ Different from Linear and NTK interpolation, which treat each dimension of RoPE equally, YaRN (Peng et al., 2023) employs a ramp function to combine Linear and NTK interpolation at varying proportions across different dimensions. Simultaneously, it introduces a temperature factor to mitigate the distribution shift of the attention matrix caused by long inputs. (A minimal sketch of the Linear and NTK variants follows this list.)
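As referenced above, here is a minimal sketch of how the Linear and NTK variants rescale the rotation angles (variable names are ours; YaRN's ramp function is omitted for brevity):

```python
# Sketch of Linear vs. NTK position interpolation for RoPE angles.
import numpy as np

def rope_freqs(d, base=10000.0):
    return base ** (-2 * np.arange(d // 2) / d)

def linear_angles(m, d, alpha):
    return (m / alpha) * rope_freqs(d)            # down-scale the index: m -> m/alpha

def ntk_angles(m, d, alpha):
    lam = alpha ** (d / (d - 2))                  # lambda = alpha^(d/(d-2))
    return m * rope_freqs(d, base=10000.0 * lam)  # rescale the RoPE base instead

d, alpha, m = 128, 8.0, 4096
# By construction, the slowest dimension matches between the two schemes,
# i.e. m * theta'_{d/2-1} == (m/alpha) * theta_{d/2-1}:
assert np.isclose(linear_angles(m, d, alpha)[-1], ntk_angles(m, d, alpha)[-1])
```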
### Proposed Approach: Positional Skip-wise Training (PoSE)
Although position interpolation effectively addresses out-of-distribution position indices, extending to an extreme length by fine-tuning on a context window of this size remains impractical, owing to the quadratic growth in the computational complexity of attention as sequence length increases. Instead, we explore training within the original context window \(L_{c}\) and achieving context window extension via manipulating position indices to simulate longer inputs.
There are two design desiderata for this endeavor: First, to avoid out-of-distribution positions during inference, the relative distances of manipulated position indices should comprehensively cover the range of \(\{1,\dots,L_{t}-1\}\). Second, fine-tuning with the manipulated position indices should not harm the original abilities of LLMs, so the structure of the manipulated position indices should adhere to the original structure as closely as possible.
Initially, we randomly divide the original context window \(L_{c}\) into \(N\) chunks \(c_{0},c_{1},\dots,c_{N-1}\), each with lengths \(l_{0},l_{1},\dots,l_{N-1}\), where \(\sum_{i=0}^{N-1}l_{i}=L_{c}\). We introduce the starting index \(st_{i}\) for each chunk \(c_{i}\), which facilitates the formulation of its position indices as follows:
\[\text{Pos}(c_{i})=\{st_{i},st_{i}+1,\dots,st_{i}+l_{i}-1\},\quad st_{i}=\sum_{ j=0}^{i-1}l_{j} \tag{3}\]
Subsequently, we employ the discrete uniform distribution \(\mathcal{U}(S)\) to sample a _skipping bias_ term \(u_{i}\sim\mathcal{U}(\{u_{i-1},\dots,L_{t}-L_{c}\})\) for each chunk \(c_{i}\). This bias term is applied to the corresponding
chunk to transform the original position indices into:
\[\text{PoSE}(c_{i})=\{u_{i}+st_{i},u_{i}+st_{i}+1,\dots,u_{i}+st_{i}+l_{i}-1\} \tag{4}\]
Note that the constraint of \(u_{i}\geq u_{i-1}\) is applied to prevent position index overlaps between chunks.
Intuitively, the introduction of skipping bias terms exposes the model to a more diverse range of relative positions. To achieve comprehensive coverage of the target context window, we re-sample both the length and skipping bias term of every chunk for each training example. Moreover, the continuity of position indices within each chunk closely resembles the structure employed during pre-training. Consequently, fine-tuning the model on these new position indices for language modeling does not compromise its original capabilities.
Concerning the text contained within each chunk, a similar procedure is followed to select continuous spans of tokens from the input text \(\mathbf{x}=\{x_{0},x_{1},...,x_{L_{x}}\}\). To elaborate, we begin by sampling a bias term \(v_{i}\sim\mathcal{U}(\{v_{i-1},\dots,L_{x}-L_{c}\})\), followed by assigning the content of chunk \(c_{i}\) as below:
\[c_{i}=\mathbf{x}[v_{i}+st_{i}:v_{i}+st_{i}+l_{i}] \tag{5}\]
Notably, we have also explored other assignment strategies for \(v_{i}\), including scenarios where \(v_{i}=0\), which results in genuinely continuous content for the chunks, or \(v_{i}=u_{i}\), aligning the manipulated position indices with the actual positions in the original text. However, we observe that these variations have relatively little impact on the outcomes of fine-tuning.
After the position indices and content for each chunk are settled, we perform position interpolation for stabilized fine-tuning. For simplicity, we set the initial bias terms \(u_{0}\) and \(v_{0}\) to 0. In terms of the chunk number \(N\), we view it as a trade-off between efficiency and effectiveness, because an increase in the number of chunks deviates further from the position structure of pre-training, which may harm the abilities acquired during pre-training. Hence, in this paper we set \(N\) to 2, exposing the models to a wider range of relative positions while adhering as closely as possible to the original position structure. (See Appendix A and B for further discussion of \(v_{i}\) and \(N\).)
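To summarize the sampling procedure of Eqs. (3)-(5), the following sketch builds the position indices and content for one PoSE training example (a plain-Python simplification, not the authors' released code):

```python
import random

def pose_example(tokens, L_c, L_t, N=2):
    """Sample one PoSE training example: (content tokens, position indices)."""
    L_x = len(tokens)
    # Randomly split the training window L_c into N chunk lengths l_0..l_{N-1}.
    cuts = sorted(random.sample(range(1, L_c), N - 1))
    lengths = [b - a for a, b in zip([0] + cuts, cuts + [L_c])]
    pos_ids, content = [], []
    u = v = 0          # initial bias terms u_0 = v_0 = 0, as in the text
    st = 0             # chunk starting index st_i (Eq. 3)
    for i, l in enumerate(lengths):
        if i > 0:      # u_i >= u_{i-1} prevents overlap between chunks
            u = random.randint(u, L_t - L_c)   # skipping bias term (Eq. 4)
            v = random.randint(v, L_x - L_c)   # content offset (Eq. 5)
        pos_ids += range(u + st, u + st + l)
        content += tokens[v + st : v + st + l]
        st += l
    return content, pos_ids  # fine-tune on `content` with interpolated `pos_ids`

tokens = list(range(10_000))                 # stand-in for a tokenized document
content, pos = pose_example(tokens, L_c=2048, L_t=8192)
assert len(content) == len(pos) == 2048 and max(pos) <= 8191
```

Re-sampling the chunk lengths and bias terms per example is what lets fine-tuning cover all relative distances up to \(L_{t}-1\).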
## 4 Experiments
In this section, we conduct experiments to verify the effectiveness of PoSE for context window extension. Our method demonstrates impressive results on context lengths of both 16k and 32k for language modeling as well as passkey retrieval. Other advantages of PoSE are discussed in Section 5.
### Setups
Training Procedure.For each setting in the main experiments, we train LLaMA-7B with the next token prediction objective. This training process comprises 1,000 steps, employing a global batch size of 64 on 8 V100 GPUs using DeepSpeed ZeRO stage 3 (Rajbhandari et al., 2020). The fine-tuning dataset is sourced from The Pile (Gao et al., 2020), with a minimum length requirement of 2,048 tokens. Our default choice for interpolation strategies is linear interpolation. For evaluation, we use a single A100 GPU. Flash Attention V2 (Dao, 2023) is applied, making it possible to evaluate long documents of up to 128k tokens (k=1,024).
Evaluation Tasks and Datasets.We examine the ability of long text modeling on two tasks: language modeling and passkey retrieval. The language modeling task is a fundamental task that reflects the overall capability of a model in handling long text. Passkey retrieval, on the other hand, can effectively measure the maximum distance that a token can attend to during the inference stage. We evaluate language modeling on GovReport (Huang et al., 2021) and Proof-pile (Zhangir et al., 2022) datasets. For passkey retrieval, we follow Mohtashami & Jaggi (2023) to construct synthetic prompts for evaluation.
Baseline Methods.We compare our PoSE training method against following baselines:
* _Full-length_ fine-tuning takes input tokens of target length for fine-tuning. For this method, computational complexity scales quadratically with target context window size.
* _RandPos_ (Ruoss et al., 2023) is initially designed to train an encoder-only model from scratch for length extrapolation. However, since it shares a similar idea of simulating longer sequences via changing position indices, we include it for a comprehensive comparison. Given the original / target context window lengths \(L_{c}\) / \(L_{t}\), it randomly samples \(L_{c}\) unique positions from the set \(\{0,...,L_{t}-1\}\), arranges them in ascending order, and employs them as new position indices for training.
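For contrast with the PoSE sampler above, RandPos's index construction amounts to the following one-liner (our paraphrase of the description above):

```python
# Sketch of RandPos position-index sampling for one training example.
import random

def randpos_indices(L_c, L_t):
    # L_c unique positions from {0, ..., L_t - 1}, sorted ascending;
    # unlike PoSE chunks, adjacent indices are generally not continuous.
    return sorted(random.sample(range(L_t), L_c))

print(randpos_indices(8, 32))  # e.g. [0, 3, 7, 11, 14, 22, 25, 30]
```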
### Language Modeling
First, we investigate the impacts of different fine-tuning methods on long sequence language modeling using the GovReport and Proof-pile datasets. GovReport is a summarization dataset comprising 19,402 reports published by the Congress and the U.S. Government, with an average document length of 7,866 tokens. We randomly select 50 reports containing more than 32,768 tokens for evaluation. Similarly, Proof-pile is a 13GB dataset of long mathematical documents. In line with the approach taken for GovReport, we choose 50 samples from Proof-pile that contain more than 32,768 tokens for evaluation.
Table 1 presents the results of scaling to 16k and 32k using Full-length training, RandPos, and PoSE. For each scaled model, as well as the non-fine-tuned LLaMA model (None), we report perplexity scores at various evaluation context window sizes, ranging from 2k to 32k, employing the sliding window approach proposed by Press et al. (2021). For evaluation efficiency, we set the stride of the sliding window to 1,024.
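For reference, a simplified sketch of this sliding-window perplexity evaluation is given below; it assumes a causal LM callable `model(input_ids)` that returns per-token logits (our simplification and naming, not the paper's evaluation code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_ppl(model, ids, window=16384, stride=1024):
    """Perplexity over a 1-D LongTensor `ids`, scoring each token exactly once."""
    nll, counted, prev_end = 0.0, 0, 0
    for start in range(0, ids.size(0), stride):
        end = min(start + window, ids.size(0))
        chunk = ids[start:end]
        logits = model(chunk[:-1].unsqueeze(0)).squeeze(0)
        # Score only targets not yet covered by a previous window.
        tail = min(end - prev_end, chunk.size(0) - 1)
        nll += F.cross_entropy(logits[-tail:], chunk[1:][-tail:],
                               reduction="sum").item()
        counted += tail
        prev_end = end
        if end == ids.size(0):
            break
    return float(torch.exp(torch.tensor(nll / counted)))
```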
First, we observe an overall decreasing trend of perplexity for both models scaled to 16k and 32k via PoSE as the evaluation context window size increases, proving their ability to leverage longer context. Second, with a significantly shorter context length during fine-tuning, our PoSE achieves results comparable to Full-length fine-tuning, consolidating its effectiveness. Third, our method achieves much stronger results than RandPos. We suppose this is because our manipulated position indices closely resemble those of pre-training, thereby preserving the pre-trained language modeling ability to the greatest extent.
We also notice that all the scaling methods suffer a certain performance degradation as the supported context length increases. We perceive this as a trade-off between the quantity of tokens the model can process and the level of granularity in the attention the model can pay to each individual token.
### Passkey Retrieval for Effective Context Window
To effectively measure the maximum distance that a token can attend to during the inference stage, we adopt the passkey retrieval test proposed by Mohtashami and Jaggi (2023). In this test, models are tasked with recovering a random passkey hidden within a lengthy document. The prompt template used for this task is presented in Figure 2(a).
Specifically, we compare the non-fine-tuned LLaMA model (denoted as _None_) with the PoSE-extended versions for 16k and 32k context window sizes. For each model, we vary the prompt length from 2k to 32k. In each case, we conduct the passkey retrieval test 50 times, with a random passkey
| **Method** | **Train / Target** | **GovReport 2k** | **4k** | **8k** | **16k** | **32k** | **Proof-pile 2k** | **4k** | **8k** | **16k** | **32k** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| None | - / - | 4.74 | \(>10^{3}\) | \(>10^{3}\) | \(>10^{3}\) | \(>10^{3}\) | 2.83 | \(>10^{3}\) | \(>10^{3}\) | \(>10^{3}\) | \(>10^{3}\) |
| Full-length | 16k / 16k | 4.87 | 4.70 | 4.61 | 4.59 | - | 2.93 | 2.71 | 2.58 | 2.53 | - |
| RandPos | 2k / 16k | 11.63 | 11.17 | 11.54 | 15.16 | - | 7.26 | 6.83 | 6.76 | 7.73 | - |
| RandPos | 2k / 32k | 93.43 | 95.85 | 91.79 | 93.22 | 97.57 | 60.74 | 63.54 | 60.56 | 63.15 | 66.47 |
| PoSE (Ours) | 2k / 16k | 4.84 | 4.68 | 4.60 | 4.60 | - | 2.95 | 2.74 | 2.61 | 2.60 | - |
| PoSE (Ours) | 2k / 32k | 4.91 | 4.76 | 4.68 | 4.64 | 4.66 | 3.01 | 2.78 | 2.66 | 2.60 | 2.59 |
Table 1: Perplexity of models trained with different methods. We conduct evaluation on the GovReport and Proof-pile datasets, varying the evaluation context window size from 2k to 32k. Our PoSE, with a fixed training window size of 2k, effectively extends to a target context size of 16k / 32k for inference while incurring only minimal performance degradation compared to Full-length fine-tuning.
of 5 digits generated and placed at a random position inside the prompt for each trial. Figure 2(b) illustrates the results. For the non-fine-tuned LLaMA model (_None_), the retrieval accuracy rapidly drops to 0 when the prompt length exceeds 2k. In contrast, both PoSE-extended models manage to maintain a high retrieval accuracy (\(\geq 90\%\)) within their respective target context windows. This indicates that models trained via PoSE genuinely possess the capability to attend to all tokens within the extended context windows.
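As an aside, the synthetic prompts can be constructed along the following lines; the precise template is the one of Figure 2(a) (after Mohtashami and Jaggi, 2023), so the filler and question wording below are our own placeholder assumptions:

```python
import random

def make_passkey_prompt(n_filler=4000):
    """Hide a random 5-digit passkey inside repeated distractor text."""
    passkey = random.randint(10000, 99999)
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    info = f"The pass key is {passkey}. Remember it. {passkey} is the pass key. "
    pos = random.randint(0, n_filler)              # random insertion depth
    body = filler * pos + info + filler * (n_filler - pos)
    return body + "What is the pass key?", passkey
```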
## 5 Analysis
In this section, we analyze the advantages of PoSE, including 1) memory and time efficiency; 2) compatibility with all RoPE-based LLMs and diverse interpolation strategies; 3) potential for extremely-long context. In Section 5.4, we also verify that model performance within the original context window receives only minimal degradation.
### Memory and Time Efficiency
We study the memory and time efficiency of PoSE compared with Full-length fine-tuning. For each method, we scale LLaMA-7B to 4k / 8k / 16k through 1,000 training steps with a global batch size of 16 on 8 V100 GPUs. Experiment results are demonstrated in Figure 3. Figures 3(a) and (b) respectively illustrate the memory and time consumption for 1,000 steps of Full-length versus PoSE. While the training cost of Full-length increases rapidly with target window length, PoSE only requires a fixed quota of memory and time for context extension, which is significantly lower. Figure 3(c) further compares the model perplexity of the two training methods at different steps on GovReport. Notably, both models achieve relatively low perplexity levels within the initial 100 training steps. Moreover, at each step, our proposed PoSE, while requiring only a training context size of 2k tokens, exhibits language modeling ability very close to that of Full-length fine-tuning, which requires an extended training context of 16k. We did not experiment with context windows of 32k or above, because V100 machines cannot afford Full-length fine-tuning at these lengths. But it can be expected that the overhead ratio between Full-length and PoSE will become more exaggerated as target length increases. Consequently, we can confidently assert that our proposed approach is both memory- and time-efficient.
### Compatibility with RoPE-Based LLMs and Diverse Interpolation Strategies
We also delve into the effectiveness of PoSE when applied to different RoPE-based LLMs, as well as various interpolation strategies. Specifically, we employ PoSE on four distinct models: LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B, all of which incorporate RoPE in their architectures. The original context size of LLaMA-7B and GPT-J-6B is 2k, while that of LLaMA2-7B and Baichuan2-7B is 4k. For each model, we examine the integration with Linear, NTK, and YaRN interpolation, as well as the non-fine-tuned original version for comparative purposes. The same GovReport dataset as described in Section 4.2 is utilized. The test set is truncated to the first 1k to
Figure 2: (a) Prompt template used for passkey retrieval; (b) retrieval accuracy for the non-fine-tuned LLaMA model (_None_), and the PoSE-extended counterparts for 16k / 32k window size. Both PoSE-extended models maintain a high retrieval accuracy (\(\geq 90\%\)) within their respective context window.
16k tokens for plotting the perplexity curve, as depicted in Figure 4. First, it is evident that PoSE is effective across all four models and three interpolation strategies, as evidenced by the low perplexities achieved by all 12 combinations in comparison to the non-fine-tuned original model. Second, we observe that NTK and YaRN interpolation generally yield superior results compared to Linear interpolation. However, it is noteworthy that NTK exhibits a significant increase in perplexity after a certain turning point, which occurs prior to reaching the target context length. This behavior is consistent with previous findings, indicating that for a given scaling factor \(\alpha\), NTK cannot genuinely expand the context window by \(\alpha\) times (Peng and Quesnelle, 2023; Quesnelle, 2023; Peng et al., 2023).
### Potential for Extremely-Long Context
Because PoSE only requires a fixed context window at the training stage to extend to the target context window size, it can promisingly extend LLMs to support inputs of, in principle, unbounded length. In this section, we extend the context window size to 96k and 128k to explore PoSE's potential for extreme context window extension. Given the need to evaluate on extremely long documents, we have opted to employ two book datasets, namely Books3 (Presser, 2020) and Gutenberg (PG-19) (Rae et al., 2019). Both of these datasets consist of extensive collections of literary works, rendering them well-suited subjects for the assessment of long-range modeling. For our evaluation, we randomly selected 20 books from each dataset, each containing more than 128k tokens.
Fine-tuning LLaMA models using PoSE, we experimented with Linear / NTK / YaRN interpolation for both the 96k and 128k models. To calculate perplexity, we adhere to the sliding window strategy adopted in Section 4.2, with an increased sliding window step of 16k to enhance evaluation efficiency.
Figure 4: Perplexity of LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B extended to 16k via PoSE with Linear / NTK / YaRN interpolation, along with the non-fine-tuned _Original_ model. The consistently low perplexity observed across all twelve combinations serves as an indication of the effectiveness of our method across RoPE-based LLMs and diverse interpolation strategies.
Figure 3: Full-length fine-tuning v.s. PoSE in terms of (a) memory and (b) time consumption for extending LLaMA-7B from 2k to 4k / 8k / 16k context, each finishing 1,000 training steps. (c) Perplexity of both 16k-context models at each training step. We show that PoSE requires consistently less time and memory for context extension, while attaining a comparable level of PPL performance to Full-length fine-tuning at each step.
The outcomes of these experiments are detailed in Table 2. It is observed that PoSE successfully extends the model's context window to 96k when coupled with Linear interpolation, and further extends the context window to 128k when paired with YaRN. These promising results consolidate the effectiveness of PoSE for extreme context window extension.
### Evaluation of Capability on Original Context Window
In this section, we examine the capabilities of the PoSE-extended models on the original context window using standard benchmarks. We combine the Hugging Face Open LLM Leaderboard (Face, 2023) with a subset of LLaMA benchmarks to assess zero-shot and few-shot performance. For zero-shot evaluation, we employ BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), WinoGrande (Keisuke et al., 2019), and TruthfulQA (Lin et al., 2022). For few-shot evaluation, we utilize 25-shot ARC-Challenge (Clark et al., 2018) and 10-shot HellaSwag (Zellers et al., 2019). Our evaluation metrics are benchmark-specific: for BoolQ, PIQA, and WinoGrande, we report accuracy; for TruthfulQA, we report mc2; and for ARC-C and HellaSwag, we report normalized accuracy.
Table 3 summarizes the results. It is observed that PoSE-extended models exhibit only marginal performance degradation compared with Full-length fine-tuning and the original LLaMA, with the only exception being the 128k model employing linear interpolation. This indicates that, while extending the context window size, PoSE effectively preserves the original language comprehension ability.
| **Model** | **Gutenberg (PG-19) 32k** | **64k** | **96k** | **128k** | **Books3 32k** | **64k** | **96k** | **128k** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PoSE-Linear-96k | 10.18 | 11.11 | 13.57 | - | 9.98 | 10.90 | 13.42 | - |
| PoSE-NTK-96k | 7.98 | 20.39 | 38.73 | - | 8.29 | 20.82 | 40.39 | - |
| PoSE-YaRN-96k | 8.31 | 8.65 | 9.36 | - | 8.90 | 9.40 | 10.38 | - |
| PoSE-Linear-128k | 16.90 | 22.47 | 26.77 | 31.18 | 26.20 | 43.62 | 57.08 | 70.87 |
| PoSE-NTK-128k | 8.04 | 14.84 | 29.48 | 34.80 | 8.34 | 16.04 | 31.42 | 37.00 |
| PoSE-YaRN-128k | 9.32 | 10.36 | 10.77 | 11.33 | 10.56 | 12.30 | 13.07 | 13.81 |
Table 2: Perplexity of models extended to extreme context size via PoSE on PG-19 and Books3. We show that our training method can effectively extend context window size to 128k when combined with YaRN interpolation.
| **Model** | **BoolQ** | **PIQA** | **WinoGrande** | **TruthfulQA** | **ARC-C** | **HellaSwag** |
| --- | --- | --- | --- | --- | --- | --- |
| LLaMA | 75.11 | 78.67 | 69.85 | 34.08 | 51.19 | 77.75 |
| Full-Linear-16k | 70.95 | 77.64 | 69.06 | 31.89 | 48.55 | 74.19 |
| Full-NTK-16k | 75.80 | 78.08 | 68.98 | 33.83 | 48.81 | 76.57 |
| Full-YaRN-16k | 73.88 | 77.64 | 68.15 | 34.12 | 50.60 | 77.18 |
| PoSE-Linear-16k | 74.50 | 78.13 | 68.59 | 32.05 | 48.29 | 75.56 |
| PoSE-NTK-16k | 74.28 | 78.24 | 68.90 | 33.89 | 49.83 | 76.82 |
| PoSE-YaRN-16k | 74.28 | 78.02 | 69.06 | 34.00 | 49.23 | 77.04 |
| PoSE-Linear-128k | 67.71 | 76.22 | 67.56 | 36.16 | 39.93 | 66.04 |
| PoSE-NTK-128k | 75.35 | 78.18 | 68.98 | 32.71 | 49.66 | 76.19 |
| PoSE-YaRN-128k | 73.61 | 77.80 | 70.01 | 34.47 | 48.46 | 75.54 |
Table 3: Performance of PoSE-extended LLaMA model on standard benchmarks in comparison with Full-length fine-tuning and the original LLaMA. We show that PoSE-extended models exhibit only marginal performance degradation compared with Full-length fine-tuning and the original version.
## 6 Conclusion
In this paper, we introduce **P**ositional **S**kip-**w**is**E** (PoSE) training to efficiently extend the context window of Large Language Models. PoSE simulates long inputs by manipulating position indices, thereby requiring only the original context window for fine-tuning and successfully decoupling the training length from the target length. Experiments have shown that, compared with fine-tuning on the full length, PoSE greatly reduces memory and time overhead. Taking advantage of this, we have managed to extend the LLaMA model to 128k on 8 V100 GPUs, observing only minimal performance degradation on standard benchmarks. We have also empirically verified that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies.
|
2309.10366 | The Raman gap and collisional absorption | One of the long-standing puzzles observed in many laser-plasma experiments is
the gap in the Raman backscattering spectrum. This gap is characterized by the
absence of backscattered light between some critical wavelength and twice the
incident laser wavelength. The latter is associated with the absolute Raman
instability from the quarter-critical density surface. Supported by
particle-in-cell (PIC) simulations, it is suggested that the gap can result
from the collisional damping of the backscattered light. A linear analysis of
the competition between the Raman growth rate and the damping rate in a
non-homogeneous plasma predicts the gap's existence and width as a function of
the system's parameters. The theory is compared with the PIC simulations and
past experiments. | Ido Barth, Pierre Michel | 2023-09-19T07:04:14Z | http://arxiv.org/abs/2309.10366v1 | # The Raman gap and collisional absorption
###### Abstract
One of the long-standing puzzles observed in many laser-plasma experiments is the gap in the Raman backscattering spectrum. This gap is characterized by the absence of backscattered light between some critical wavelength and twice the incident laser wavelength. The latter is associated with the absolute Raman instability from the quarter-critical density surface. Supported by particle-in-cell (PIC) simulations, it is suggested that the gap can result from the collisional damping of the backscattered light. A linear analysis of the competition between the Raman growth rate and the damping rate in a non-homogeneous plasma predicts the gap's existence and width as a function of the system's parameters. The theory is compared with the PIC simulations and past experiments.
## I Introduction
The stimulated Raman backscattering (SRS) spectra in laser-plasma fusion experiments, e.g., NIF and Omega, pose a long-standing puzzle. The wavelength of the backscattered light outside the plasma, \(\lambda_{1}\), is theoretically limited to the range \([\lambda_{0},2\lambda_{0}]\), where \(\lambda_{0}\) is the incident laser wavelength. This range results from the Raman resonance conditions
\[\omega_{0} = \omega_{1}+\omega_{2} \tag{1}\] \[k_{2} = k_{0}+k_{1} \tag{2}\]
where \(\omega_{0}\), \(\omega_{1}\), and \(\omega_{2}\) are the frequencies of the incident laser, the backscattered light, and the electron plasma wave (EPW), respectively, and \(k_{0,1,2}\) are the respective wave numbers at the location of the Raman interaction inside the plasma. \(\omega_{2}\) can be approximated as the plasma frequency, \(\omega_{p}\), by neglecting the thermal correction, which changes the SRS spectrum by only a few percent for electron temperatures of a few keV.
However, a typical experimental SRS spectrum is characterized by a gap between some critical wavelength, \(\lambda_{\rm gap}\), and \(2\lambda_{0}\).[1] This gap, which was not theoretically anticipated, was first found in experiments, appearing with a small amplitude and becoming even more prominent at higher laser intensities. Since 1985, many theoretical explanations have been suggested, including electron density steepening near the quarter-critical density,[2; 3] a large EPW seed driven by a non-Maxwellian distribution with fast electrons,[1] a competition between SRS and stimulated Brillouin scattering (SBS),[4] high sensitivity to (damping-driven) detuning near the quarter-critical density,[5] nonlinear saturation of the plasma wave driven by the Langmuir decay instability (LDI),[6] a density-dependent diffraction threshold,[7] and backscatter being overtaken by absolute side-scatter at the relevant densities.[8; 9]
However, the variety and complexity of these explanations hint that the Raman gap effect is not well understood. Additionally, as far as we know, the Raman gap was never demonstrated within ab initio particle-in-cell (PIC) simulations.
In this paper, we study the effect of collisional damping of the backscattered light on the SRS spectrum as observed outside the plasma. The idea is that the backscattered light is absorbed in the plasma on its way out. The amount of absorption depends on the optical depth of the plasma. In particular, the longer the backscattered wavelength, the higher the density from which it was scattered, and the longer the path the backscattered light traverses in the plasma, passing through higher densities. The latter is because, typically, the incident laser penetrates along the density gradient towards higher plasma densities. Therefore, it is anticipated that for some reflecting point, \(z_{r}\), the absorption will be strong enough to cancel the amplification such that the total gain is zero. From this point up to the quarter-critical density (where the Raman backscattering becomes absolute and thus much stronger), no backscattered light is anticipated to exit the plasma, resulting in a gap in the SRS spectrum, as many experiments exhibit.
The paper is organized as follows. In Sec. II, we present a linear analysis of the problem and show that the gap can result from collisional absorption. A theoretical prediction of the gap location is derived using a single, dimensionless parameter. In Sec. III, we study the Raman gap effect via one-dimension (1D) particle-in-cell (PIC) simulations and discuss the possible explanations in the relevant parameter regimes. In Sec. IV, we compare our theoretical prediction with past experiments and PIC simulation results. In Sec. V, we briefly discuss other theoretical explanations for the Raman gap effect developed in the literature and test their validity against past experimental results and our PIC simulations. Sec. VI summarizes the conclusions.
## II Theory
Our analysis is based on the competition between the Raman amplification and the collisional absorption of the backscattered light. If the former is larger, then backscattered light will be observed, while in the opposite case, all of the backscattered light will be absorbed in the plasma, and no reflected light will be measured. Since both amplification and absorption depend on the reflection point (defined as the origin of the backscattered light for a given wavelength), one can compare the growth and damping rates for a wave backscattered from a given location in the plasma. If the amplification gain is larger than the total damping, SRS backscattered light is expected to be seen at the wavelength associated with the given reflecting point. On the contrary, if the total damping is larger than the total amplification, no light is expected to be backscattered from the plasma at that wavelength. The gap effect can be explained by this competition, where the beginning of the gap is determined by the balance between the SRS amplification and the collisional damping, as we will show next for a simple, idealized plasma profile.
### Density profile
For simplicity, we consider a slab geometry and a density gradient along the \(z-\)axis. The typical ion density, \(n_{i}\), in laser-plasma experiments is generated by isothermal expansion and, therefore, has an exponential profile,[10]\(n_{i}\sim e^{z/c_{s}t}\), where \(c_{s}\) is the sound speed and the density gradient is toward the positive \(z\) direction. Since the Raman instability occurs on timescales much shorter than the isothermal plasma expansion, we define a snapshot of the profile
\[n_{i}=n_{\rm min}+\Delta n\,\frac{e^{\alpha z/l}-1}{e^{\alpha}-1} \tag{3}\]
where \(l\) is the plasma length, defined as the distance between the two chosen densities, \(n_{\rm min}\) and \(n_{\rm max}\). The density difference is \(\Delta n=n_{\rm max}-n_{\rm min}\). Therefore, \(l=\alpha c_{s}t\) for
\[\alpha=\ln\left(\frac{n_{\rm max}}{n_{\rm min}}\right). \tag{4}\]
It is noted that in order to obtain the full SRS spectrum, the value of \(n_{\rm max}\) must be a little above \(n_{\rm cr}/4\), and the value of \(n_{\rm min}\) should be small enough. Since simulating long plasmas is expensive, \(n_{\rm min}\) in PIC simulations cannot be too small (see Sec. III), but in the analytical solution below, we will take the limit \(n_{\rm min}\to 0\). A realization of the density profile with the parameters \(n_{\rm min}=0.01\,n_{\rm cr}\), \(n_{\rm max}=0.25\,n_{\rm cr}\), and \(l=1\) mm is illustrated in the inset of Fig. 1b. Surprisingly, as shown below, although the total amplification depends on \(l\), the spectral width of the gap does not.
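For illustration, the profile of Eq. (3) and its gradient scale length [Eq. (6) in the next subsection] can be tabulated numerically as follows (a sketch with the figure's parameters; densities in units of \(n_{\rm cr}\)):

```python
import numpy as np

n_min, n_max, l = 0.01, 0.25, 1.0e-3   # n in units of n_cr, l = 1 mm in meters
alpha = np.log(n_max / n_min)          # Eq. (4)
z = np.linspace(0.0, l, 2000)
# Eq. (3): exponential density snapshot between n_min and n_max
n_i = n_min + (n_max - n_min) * np.expm1(alpha * z / l) / np.expm1(alpha)
L_n = n_i / np.gradient(n_i, z)        # gradient scale length, Eq. (6)
```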
### Growth rate
To evaluate the Raman amplification from a plasma with the density profile (3), we employ the well-known Rosenbluth formula for inhomogeneous plasmas, which provides the convective amplification factor \(\exp[\Gamma_{R}]\) for the backscattered light after integration through a resonance region in an inhomogeneous plasma,[11]
\[\Gamma_{R}=\frac{\pi\,a_{0}^{2}k_{2}^{2}}{4k_{1}}L_{n} \tag{5}\]
where the density profile length scale is
\[L_{n}=n_{e}\left(\frac{dn_{e}}{dz}\right)^{-1}. \tag{6}\]
For linear polarization, the dimensionless wave amplitude reads \(a_{0}=\sqrt{2e^{2}\lambda_{0}^{2}I/(\pi m_{e}^{2}c^{5})}\approx 0.02\,\lambda_{\rm\mu}\sqrt{I_{15}}\), where \(\lambda_{\rm\mu}=\lambda_{0}/(1\,\mu m)\) and \(I_{15}\) is the incident laser intensity in units of \(10^{15}\,W/cm^{2}\). Note that it is justified to use this approximation, which depends only on the local gradient, because further away from the SRS reflection point, \(z_{r}\), where the resonance condition is exactly fulfilled, the detuning becomes large. Therefore, the gain is determined in a small neighborhood of \(z_{r}\). For Raman backscattering, the wave number amplitude of the plasma wave, \(k_{2}\), is given by the resonance condition [Eq. (2)], where
\[k_{0,1}=\frac{1}{c}\sqrt{\omega_{0,1}^{2}-\omega_{p}^{2}} \tag{7}\]
are evaluated at the backscattering point, \(z_{r}\), via the (spatially dependent) plasma frequency,
\[\omega_{p}=\sqrt{4\pi e^{2}n_{e}(z_{r})/m_{e}}. \tag{8}\]
Similarly, the backscattered light frequency is determined at the reflecting point, \(z_{r}\), via the approximated Eq. (1),
\[\omega_{1}=\omega_{0}-\omega_{p}(z_{r}) \tag{9}\]
while the incident laser frequency, \(\omega_{0}\), does not depend on the plasma parameters.
### Collisional absorption
The amount of absorption that light experiences on its way out of the plasma, \(\exp(-\Gamma_{d})\), can be estimated by integrating the local damping rate over the distance between the reflecting point, \(z_{r}\), and the plasma edge, \(z=0\) (i.e., where \(n_{i}=n_{\rm min}\)),
\[\Gamma_{d}=\int_{0}^{z_{r}}\kappa\,dz. \tag{10}\]
The simplest model for the local damping rate of light in plasma is given by[12]
\[\kappa=\frac{\omega_{1}}{c}\Im[\epsilon]\approx\frac{\omega_{p}^{2}\nu_{ei}}{\omega_{1}^{2}v_{g}}. \tag{11}\]
Figure 1: (a): SRS gain from the Rosenbluth formula in Eq. (5) (blue), collisional damping loss from Eq. (10) (red), and the difference between the two, \(\Gamma_{\rm tot}\), (green) that crosses zero (dashed black) at the gap location. (b): The total gain as a function of the backscattered light wavelength outside the plasma. The inset is the electron density profile of Eq. (3).
Here, \(\Im[\epsilon]\) is the imaginary part of the electric permittivity, \(v_{g}=c^{2}k_{1}/\omega_{1}\) is the group velocity of the backscattered light, and
\[\nu_{ei}=\frac{Z\ln\Lambda}{3(2\pi)^{1.5}}\,\,\frac{\omega_{p}}{n_{e}\lambda_{D} ^{3}} \tag{12}\]
is the electron-ion collision rate [12], which depends on the temperature through the Debye length, \(\lambda_{D}=\sqrt{k_{B}T/4\pi e^{2}n_{e}}\). The Coulomb logarithm can be estimated by [10]
\[\ln\Lambda=\ln\sqrt{\frac{b_{\perp}^{2}+b_{max}^{2}}{b_{\perp}^{2}+b_{min}^{2} }}, \tag{13}\]
where \(b_{\perp}=\frac{Z^{*}e^{2}}{4\pi\epsilon_{0}m_{e}v_{e}^{2}}\), \(b_{max}=\frac{v_{e}}{\omega}\), and \(b_{min}=\frac{\hbar}{2m_{e}v_{e}}\). Here, \(v_{e}\) is the thermal velocity of the electrons (estimated in the low-field limit [10]), \(Z^{*}=\left<Z^{2}\right>/\left<Z\right>\) is the effective ionization number, \(e\) is the electron charge, and \(m_{e}\) is the electron mass. These quantities can be estimated from the diagnostics of a given experiment. Therefore, if the experimental parameters are measured or estimated with sufficient accuracy, one can predict the location of the beginning of the gap in the SRS spectrum by equating
\[\Gamma_{\rm R}=\Gamma_{d}, \tag{14}\]
and solving (numerically) for \(\omega_{1}\). Because the diagnostics are placed outside the plasma, the location of the beginning of the Raman gap on the wavelength axis is \(\lambda_{1}^{\rm out}=2\pi c/\omega_{1}\). For the example presented in Fig. 1, the gap location is found to be at 608 nm, which, in the left panel, corresponds to the crossing point of \(\Gamma_{\rm tot}=\Gamma_{\rm R}-\Gamma_{d}\) with the \(x\)-axis. It is also indicated in the right panel of the figure by a vertical dashed line.
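For completeness, the Coulomb-logarithm estimate of Eq. (13) can be evaluated in SI units as in the sketch below, taking \(v_{e}\) as the electron thermal speed \(\sqrt{k_{B}T/m_{e}}\) (our reading of the low-field limit):

```python
import numpy as np
from scipy.constants import e, epsilon_0, m_e, hbar

def coulomb_log(T_e_eV, omega, Z_eff=1.0):
    """ln(Lambda) per Eq. (13), SI units; omega is the light frequency [rad/s]."""
    v_e = np.sqrt(T_e_eV * e / m_e)                       # thermal speed
    b_perp = Z_eff * e**2 / (4 * np.pi * epsilon_0 * m_e * v_e**2)
    b_max, b_min = v_e / omega, hbar / (2 * m_e * v_e)
    # ln(sqrt(x)) = 0.5 * ln(x)
    return 0.5 * np.log((b_perp**2 + b_max**2) / (b_perp**2 + b_min**2))
```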
### Single dimensionless parameter
It is notable that Eq. (14), which defines the location of the gap, can be simplified by isolating all the physical parameters except the laser frequency, \(\omega_{0}\), into one side of the equation, leaving on the other side only an integral expression that, for a given \(\omega_{0}\), depends only on the reflecting point \(z_{r}\),
\[\frac{I\left(k_{B}T\right)^{1.5}}{Z^{*}\ln\Lambda}=\frac{2k_{1}m_{e}^{2.5}c^{4 }}{3\left(2\pi\right)^{1.5}k_{2}^{2}\lambda_{0}^{2}\,e^{2}}\int_{0}^{z_{r}} \frac{dz}{\omega_{1}^{2}v_{g}n_{e}}. \tag{15}\]
It is noted that for a given \(\omega_{0}\), the electron density at \(z=z_{r}\) determines all other frequencies and wave numbers via the SRS resonance conditions [Eqs. (1-2)] and Eqs. (7,9),
\[\omega_{p} = \omega_{0}\sqrt{n} \tag{16}\] \[\omega_{1} = \omega_{0}\left(1-\sqrt{n}\right)\] (17) \[k_{0} = \frac{\omega_{0}}{c}\sqrt{1-n}\] (18) \[k_{1} = \frac{\omega_{0}}{c}\sqrt{\left(1-\sqrt{n}\right)^{2}-n}\] (19) \[k_{2} = \frac{\omega_{0}}{c}\left(\sqrt{1-n}+\sqrt{\left(1-\sqrt{n} \right)^{2}-n}\right) \tag{20}\]
where, \(n=n_{e}(z_{r})/n_{\rm cr}\) is the dimensionless electron density at \(z=z_{r}\). Therefore, by solving Eq. (15) for \(z_{r}\), one can find the spectral gap location, \(\lambda_{\rm gap}=2\pi c/\omega_{1}\), which is the wavelength outside the plasma of the light that was backscattered from \(z=z_{r}\).
By employing the definitions of all the parameters in the right-hand-side (RHS) of Eq. (15), the laser frequency can also be isolated such that the RHS will depend only on \(z_{r}\). At the same time, the left-hand-side (LHS) includes an additional term of \(\lambda_{0}^{3}\). Therefore, it is constructive to define one dimensionless parameter that includes all of the physical parameters in the problem,
\[\xi=\frac{I_{15}T_{\rm eV}^{1.5}\lambda_{\mu}^{3}}{Z\ln\Lambda}, \tag{21}\]
where \(I_{15}=I/10^{15}Wcm^{-2}\), \(T_{\rm eV}=k_{B}T/eV\), and \(\lambda_{\mu}=\lambda_{0}/\mu m\). Now, Eq. (15) can be rewritten in a dimensionless form,
\[\xi=2996\frac{\left(1-\sqrt{n}\right)^{2}\sqrt{(1-\sqrt{n})^{2}-n}}{\left( \sqrt{1-n}+\sqrt{(1-\sqrt{n})^{2}-n}\right)^{2}}\,J, \tag{22}\]
where,
\[J=\int_{y_{0}}^{y_{1}}\frac{y}{\sqrt{1-y}}\,dy \tag{23}\]
with \(y=\frac{\omega_{0}^{2}}{\omega_{1}^{2}}n\). The integration limits are \(y_{0}=\frac{\omega_{0}^{2}}{4\omega_{1}^{2}}e^{-\alpha/2}\) and \(y_{1}=(1-\frac{\omega_{0}}{\omega_{1}})^{2}\). Note that \(y_{0}\to 0\) when \(n_{\rm min}\to 0\) because then \(\alpha\rightarrow\infty\). Assuming this limit, the analytical solution of the dimensionless integral reads
\[J=\frac{1}{6}\cos(3\theta_{1})-\frac{3}{2}\cos(\theta_{1})+\frac{4}{3} \tag{24}\]
where
\[\theta_{1}=\sin^{-1}\left(1-\frac{1}{1-\sqrt{n}}\right) \tag{25}\]
Notably, we obtained in Eq. (22) a relation between the density at the gap location, \(n=n_{\rm gap}\), and the dimensionless parameter, \(\xi\), which is determined by the system's parameters in Eq. (21),
\[\xi=f(n_{\rm gap}). \tag{26}\]
This relation is depicted in a solid blue line in Fig. 5 (left \(y-\)axis). By numerical inverting Eq. (26), one can find the theoretical gap location in terms of the plasma density as a function of the systems' dimensionless parameter \(n_{\rm gap}=n_{\rm gap}(\xi)\). However, this is a multi-valued function, so we omit the higher branch (near the quarter critical density, \(n_{cr}/4\)) because the group velocity of the backscattered light becomes very small, and the SRS interaction is nearly absolute. Finally, for a given laser frequency, \(\omega_{0}\), the wavelength at the gap, \(\lambda_{\rm gap}\) can also be calculated from Eq. (26) and Eq. (17). The right \(y-\)axis of Fig. 5 is associated with the theoretical prediction for the spectral gap location in terms of \(\lambda_{\rm gap}/\lambda_{0}\).
## III PIC simulations
To study the Raman gap effect, we have run 1D PIC simulations using the code EPOCH [13] with 250 cells per \(\mu\)m and 50 particles per cell. It is noted that in addition to the physical noise, PIC simulations also exhibit numerical noise, which depends on the resolution. Therefore, the total reflectivity is not well reproduced, but relative effects can be investigated better within a fixed resolution. In the simulations, we consider typical physical parameters of inertial confinement fusion (ICF) experiments as follows: The laser wavelength and intensity were \(\lambda_{0}=0.351\,\mu\)m and \(I_{0}=2.5\times 10^{15}\) W/cm\({}^{2}\); the pulse duration was \(\tau=12\) ps; both electron and ions (\(Z=1\)) density profiles (\(n_{e}=n_{i}\)) were exponential as defined in Eq. (3) with \(n_{\rm min}=0.05\,n_{\rm cr}\), \(n_{\rm max}=0.27\,n_{\rm cr}\), and \(l=0.6\) mm; the electron temperature was \(T_{e}=1000\) eV while the ions were immobile.
In Fig. 2 (upper panel), we plot a snapshot of the electric field, \(E_{y}\), of the backscattered light (blue) and the longitudinal electric field, \(E_{x}\), of the EPW inside the plasma, for negligible collisions (\(\ln\Lambda=1\)). Dashed lines depict the plasma boundaries. The incident laser pulse propagates from left to right, and the reflected light propagates from right to left. Thus, at the snapshot time, the left part of the figure represents the reflected light, while the small part to the right of the plasma is the transmitted pulse. The spectral analysis of the reflected part is shown in the lower panel of Fig. 2. Notably, it does not exhibit the gap effect, as the SRS intensity around \(600-700\) nm is similar to the intensity at smaller wavelengths, i.e., within 50 percent of the average intensity at \(500-600\) nm.
On the contrary, in Fig. 3, we present the results of an identical PIC simulation but with \(\ln\Lambda=80\), i.e., stronger collisions. First, we note that the total SRS reflectivity in the case with collisions, 4 percent, is significantly smaller than that without collisions, 35 percent, and much closer to the experimental values of SRS reflectivity. Second, and most notably, the spectrum exhibits a prominent gap between 630 nm and the absolute SRS at 710 nm (which agrees with the thermal shift), in which the SRS intensity is about 95 percent less than the average intensity in the range of \(510-590\) nm. Finally, the theoretical prediction for the gap location for this simulation's parameters is 608 nm (denoted by a dashed black line), which reasonably agrees with the gap location in the simulation.
The comparison between these two simulations suggests that collisional absorption is a plausible explanation for the Raman gap effect. Moreover, other explanations for the effect can be tested by their ability to explain the results of the 1D PIC simulations presented here. We will remember this argument when discussing the alternative explanations in Sec. V.
To further illustrate how the gap evolves when increasing the collisionality, we repeat the same simulation with two intermediate values of \(\ln\Lambda\) and plot the spectra in Fig. 4. Each
Figure 2: PIC simulation results for negligible collisions (\(\ln\Lambda=1\)). Upper panel: A snapshot of the transverse electric field, \(E_{y}\) (blue), and the longitudinal electric field, \(E_{x}\) (yellow), at a time when the whole incident laser pulse has passed the plasma (denoted in dashed black lines) from left to right. Lower panel: The spectrum of the backscattered light between \(-4.8\) mm and zero of the upper panel. No noteworthy gap is observed in the Raman spectrum in this case.
Figure 3: PIC simulation results for high collisions (\(\ln\Lambda=80\)). Panels are the same as in Fig. 2. A gap in the SRS spectrum is observed between about 640 nm and the (thermally shifted) absolute SRS at about 710 nm. The theoretical location of the lower end of the gap, 608 nm, is denoted by a vertical dashed line.
spectrum in the figure is normalized by its associated total SRS reflectivity, as denoted in the legend. It can be seen that as the collisionality (i.e., the value of \(\ln\Lambda\)) increases, the ratio between the SRS intensity in the range of \([640-700]\) nm and the intensity in the range of \([500-600]\) nm decreases.
We conclude that the introduction of collisions in the PIC simulations yields the appearance of the Raman gap, in agreement with the linear analysis derived in Sec. II. In the next section, we will test the theoretical prediction for the location of the gap against a few past experiments.
## IV Comparison with past experiments
To validate our theory against past experiments, we have analyzed a few published experimental Raman spectra. To this end, a reasonable estimation of the physical parameters in each experiment is required. The relevant parameters for the analysis are the electron temperature, the laser intensity, and the degree of ionization. Also, we implicitly assumed an exponential density profile. However, in cases where the density profile is different but known, the theoretical gap location can be easily calculated by numerically integrating Eq. (15) over the measured density profile.
Unfortunately, although the Raman gap effect was observed in many LPI experiments, the diagnostics of the laser and plasma parameters were poor in many of them. In particular, in most of the relevant publications, the electron temperature was not measured or, at least, not reported in the paper. Moreover, even when measurements or estimates of the physical parameters are given, the uncertainty in the parameters' values is usually large. These uncertainties must be reflected by error bars in the graph. In the absence of reported estimations for the error bars, we arbitrarily assumed an error of \(10-20\) percent, where we note that the uncertainty in the degree of ionization, \(Z\), is relevant only in experiments with a gold target.
The location of the gap was extracted from the figures in the papers by a rough estimation of the location where the SRS spectrum decreased by about 90 percent from the averaged value at wavelengths below the gap. We employ this definition because it is consistent with the definition of the theoretical location of the gap, \(\Gamma_{tot}=0\), meaning no SRS amplification of the noise.
Because of the quality of the figures in the literature, the estimation was done by eye and a ruler. For each spectrum, we associate an error estimated from the spectral curve's slope. The data extracted from the literature, the error bars associated with the physical parameters, and our extracted gap locations are summarized in Table 1.
The results are presented in Fig. 5. Each experiment is denoted by a different color listed in the legend. The theoretical location of the gap [Eq. (26)] is denoted by a solid blue line. The range of values of the dimensionless parameter, \(\xi\), where the theory predicts there should not be a gap, is denoted by a solid horizontal red line associated with the quarter critical density and a gap "location" of \(2\lambda_{0}\). Interestingly, there is one experiment ("Shiva-red-gold") with parameters in this regime, and indeed, only a tiny gap was observed in the SRS spectrum in this experiment. A good agreement between the experimental dots and the theoretical curve can be seen. In addition, the PIC simulation result is denoted by a black triangle and agrees well with the theory.
Figure 4: Illustration of the evolution of the Raman gap with collisionality. The SRS spectra of four PIC simulations with different values of \(\ln\Lambda,\{1,10,40,80\}\), are presented. For better visualization, each spectrum is normalized by its total SRS reflectivity (see legend).
Figure 5: The theoretical gap location as a function of the dimensionless parameter, \(\xi\) (solid blue line), in comparison with several past experiments (colored circles) and our PIC simulation example (black triangle). Also denoted the gap’s upper limit at twice the laser wavelength i.e., for quarter critical density (solid red line). The left \(y-\)scale is the gap density normalized by the critical density, while the right \(y-\)scale is the associated normalized wavelength of the backscattered light.
## V Discussion
The Raman gap that was observed in many experiments poses a long-standing puzzle. In the literature, we found seven different theories to explain the effect. While we cannot assess the level of verisimilitude of each explanation, we point out that it is reasonable to check whether they can explain the gap effect observed in our PIC simulation results. First, as mentioned in Sec. III, the simulations were one-dimensional and considered immobile ions, yet they exhibited the Raman gap effect. Therefore, theories that are based on the dynamics in more than one dimension [7; 8] or involve interaction with ion-acoustic waves [4; 6] (i.e., ion motion) cannot explain the gap effect in the simulations. Second, PIC simulations are costly, and thus, the laser pulse duration in the simulation was only 12 ps. Although this timescale is sufficient for studying Raman backscattering, it is not sufficient for developing a population of fast electrons (especially without the TPD process, which is a 2D effect). Therefore, the simulations do not support the theory based on an enhanced EPW seed driven by a fast (non-Maxwellian) electron distribution [1]. Third, the theory based on high sensitivity to detuning near the quarter-critical density [5] considered extreme density gradients, i.e., very short plasmas (about 100 wavelengths only). The plasma length in the PIC simulations was too long (\(600\,\mu\)m) to be adequately described by this approach. Finally, the idea of electron density profiles that steepen near the quarter-critical density, such that Raman backscattering near \(2\lambda_{0}\) is substantially reduced [2; 3], may require a tailored density profile to explain the observed Raman gap effect. In contrast, for the typical density profiles in our PIC simulations, the gap effect was not observed in the absence of collisions but was observed when collisions were introduced.
It is important to emphasize that we do not claim that the aforementioned theories do not contribute to forming the gap in the SRS spectrum. We argue that, on the one hand, the effect in both experiments and PIC simulations can be understood as a result of the competition between SRS amplification and collisional absorption. On the other hand, other theories, most of which are much more complicated, might explain the experiments but not the PIC simulations. Additionally, our theoretical prediction for the gap width is in reasonable agreement with past experiments without the need for other supportive effects.
## VI Conclusions
In conclusion, we developed a theory for the Raman gap effect based on the competition between the SRS growth rate in inhomogeneous plasmas and the collisional absorption of the backscattered light on its way out of the plasma. The gap formation was observed in a series of PIC simulations where only the collisionality changes. The location of the gap is calculated via linear analysis and exhibits a reasonable agreement with several past experiments and our PIC simulations. Previous theoretical explanations suggested in the literature for the Raman gap cannot explain the PIC simulation results. Therefore, we conclude that the Raman gap can often result from the collisional absorption of the backscattered light without the support of other explanations.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
###### Acknowledgements.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was supported by the PAZI Foundation, Grant No. 2020-191. The computations in this paper were run on the ICPL cluster at the Hebrew University of Jerusalem. One of the authors (IB) would like to thank Jonathan Wurtele for his hospitality at UC Berkeley, where this work was mainly conducted.
|
2309.09637 | Designing a Hybrid Neural System to Learn Real-world Crack Segmentation
from Fractal-based Simulation | Identification of cracks is essential to assess the structural integrity of
concrete infrastructure. However, robust crack segmentation remains a
challenging task for computer vision systems due to the diverse appearance of
concrete surfaces, variable lighting and weather conditions, and the
overlapping of different defects. In particular recent data-driven methods
struggle with the limited availability of data, the fine-grained and
time-consuming nature of crack annotation, and face subsequent difficulty in
generalizing to out-of-distribution samples. In this work, we move past these
challenges in a two-fold way. We introduce a high-fidelity crack graphics
simulator based on fractals and a corresponding fully-annotated crack dataset.
We then complement the latter with a system that learns generalizable
representations from simulation, by leveraging both a pointwise mutual
information estimate along with adaptive instance normalization as inductive
biases. Finally, we empirically highlight how different design choices are
symbiotic in bridging the simulation to real gap, and ultimately demonstrate
that our introduced system can effectively handle real-world crack
segmentation. | Achref Jaziri, Martin Mundt, Andres Fernandez Rodriguez, Visvanathan Ramesh | 2023-09-18T10:13:03Z | http://arxiv.org/abs/2309.09637v1 | # Designing a Hybrid Neural System to Learn Real-world Crack Segmentation from Fractal-based Simulation
###### Abstract
Identification of cracks is essential to assess the structural integrity of concrete infrastructure. However, robust crack segmentation remains a challenging task for computer vision systems due to the diverse appearance of concrete surfaces, variable lighting and weather conditions, and the overlapping of different defects. In particular recent data-driven methods struggle with the limited availability of data, the fine-grained and time-consuming nature of crack annotation, and face subsequent difficulty in generalizing to out-of-distribution samples. In this work, we move past these challenges in a two-fold way. We introduce a high-fidelity crack graphics simulator based on fractals and a corresponding fully-annotated crack dataset. We then complement the latter with a system that learns generalizable representations from simulation, by leveraging both a pointwise mutual information estimate along with adaptive instance normalization as inductive biases. Finally, we empirically highlight how different design choices are symbiotic in bridging the simulation to real gap, and ultimately demonstrate that our introduced system can effectively handle real-world crack segmentation.
## 1 Introduction
The process of structural monitoring and assessment of civil infrastructure is an important task to ensure safety and usability. Executed primarily by humans, the inspection process is time consuming and labor-intensive, as it needs to be carried out at the target location, is potentially dangerous and can lead to down-times in the infrastructure use. To alleviate these challenges, the deployment of robots with integrated computer vision systems is emerging as an exciting, safe and low cost addition to traditional inspection methods[29, 50].
In general, such a computer vision system should be robust and invariant to a variety of nuisance variables such as illumination, object scale or pose. Early works achieved these desiderata by stacking and combining quasi-invariant transformations specified by a domain expert to guarantee that the output remains unchanged for a range of transformations that are irrelevant to the application domain [7, 12]. In contrast, modern data-driven systems may learn these transformations by relying on large amounts of labeled data. In recent years, deep learning techniques in conjunction with labelled datasets were introduced for structural inspection tasks like crack identification [30, 14, 26].
However, gathering appropriate real-world data for training is tremendously challenging. Data acquisition is particularly tough in the case of cracks on concrete bridges, where defects tend to be located in difficult to capture areas and overlap with other defects like spalling, exposed metal bars etc. Moreover, data labeling is not only excessively time consuming, it is prone to errors due to the fine-grained nature of cracks and requires highly specialized experts (who may not end up agreeing) to provide precise ground-truth [3]. Previous works have thus proposed datasets addressing the crack identification challenge from a multi-target classification perspective [37], but similar real-world efforts are still needed for semantic segmentation in diverse contexts.
Faced with a lack of appropriately annotated and diverse data, one may resort to physics based rendering, which has enabled the creation of photorealistic synthetic data for training and testing computer vision models [35, 17]. Compared to data-driven based generative models, the approach promises full control over the scenes and automatically generated ground truth maps. Alas, there is a typical statistical mismatch between simulated and real images due to modeling assumptions and computational approximations. Therefore, purely data-driven neural networks trained with synthetic images tend to suffer from performance degradation when applied to real images.
In this work, we seek to overcome existing limitations by combining the strengths of context specific modeling and data-driven designs (hybrid system), enabling us to fully leverage physics based rendering as an underlying source of data. For this purpose, we first introduce a fractal-based concrete crack simulation pipeline and investigate the use of synthetic images for learning in the context of crack segmentation. We also propose a model that can achieve better performance by leveraging image-based pointwise mutual information as well as style transfer techniques for better generalization. To empirically examine the latter, we annotate a subset of a prominent real-world concrete defect classification dataset [37] and thoroughly experimentally corroborate our proposed design choices in additional settings. In summary, our contributions are as follows:
* We propose the "Cracktal" high-fidelity simulator for cracks on concrete surfaces to generate pixel-wise annotated data with depth and surface normal maps.1 Footnote 1: We will make the simulator and rendered datasets available on Zenodo.
* We present an approach to close the gap between performance on simulated and real data through Consistency enforced training between Adaptive instance normalization and Pointwise mutual information, CAP-Net for short.2 Footnote 2: CAP-Net code will also be open-sourced.
* We annotate real-world images of concrete bridges with cracks from the popular CODEBRIM dataset [37] to empirically corroborate our approach.
* We investigate the performance of different algorithms for closing the Sim2Real gap in the context of crack detection using a variety of validation metrics specifically tailored for single object semantic segmentation. We empirically validate our approach on the annotated CODEBRIM images as well as other publicly available crack segmentation benchmarks.
## 2 Related Work
In this section, we summarize related work for crack detection, the use of synthetic data for training neural networks and approaches to reduce the Sim2Real gap.
### Crack Identification
Traditional works on crack recognition focus on using image processing algorithms like edge and boundary detection techniques [1], morphological operation based methods [55], principal component analysis [2], or automatic clustering for segmentation based on Canny and K-Means [31]. The work of Koch et al. [29] presents an exhaustive literature review on the common practices of assessing the state of concrete infrastructure and crack detection.
Recent works leverage data-driven approaches for crack identification using classification or semantic segmentation neural architectures [11, 14, 16]. The works of Cao et al. [8] and König et al. [30] provide a review of current data-driven crack detection approaches. However, one of the main limitations of current approaches is that the training data are mostly composed of simple and small datasets with uniform asphalt or concrete backgrounds [57, 20, 44, 5, 59], which hinders effective generalization of data-driven approaches in the context of precise semantic crack segmentation. In our work, we overcome the data hurdles by proposing a high-fidelity data simulator, which we can fully leverage by proposing a model that incorporates necessary inductive biases while enabling effective learning.
### Simulating Data and the Sim2Real Gap
In recent years, data-driven generative models have gained considerable popularity. Despite that, simulators based on physics-based rendering engines have maintained their significance. This is largely attributed to their ability to effortlessly produce pixel-accurate labels, thus reducing the burden of manual annotation. Furthermore, these simulators offer a unique advantage in generating data with controlled priors, enabling the generation of diverse datasets tailored to specific scenarios and applications. Proposed simulators in the literature include GTA5 [40], SYNTHIA [42] and endless runner for continual learning [21].
Whereas some works show promising results for the use of synthetic data in detection tasks [46, 36, 53], models trained with synthetic data are well-known to face difficulties in generalizing to real data, due to the statistical gap between synthetic images and real images [48, 47, 54]. Apart from improving the graphics rendering pipeline itself, this Sim2Real gap is typically reduced by seeking out domain adaptation (DA) or domain generalization (DG) techniques.
DA approaches focus on adapting the statistics of the synthetic data to that of the target domain, for instance by adversarially tuning the parameters of the generative models based on the statistics of the real data for better generalization [49, 4]. Others [28] make use of style transfer methods to adapt the training data. The main limitation of DA techniques remains that they require access to samples in the target domain in order to adapt the model. We refer to [58, 52] for comprehensive surveys.
In contrast, DG approaches seek to improve the robustness of DNNs to arbitrary unseen domains; see Wang et al. [51] for a detailed review. Approaches for learning domain-agnostic feature representations can leverage meta-learning [6, 19], adversarial training [33], instance normalization [39], selective whitening [13], style transfer or data augmentation [56].
Other works bias their models to focus on image features that are more realistic and transferable to improve generalization in a given application domain. For instance, features related to the image geometry can be more generalizable in the case of car detection [43]. Since geometry and semantics are naturally connected, [10, 25] propose to mitigate the limitations of synthetic data by leveraging the geometric information in a multi-task learning framework. In our work, we follow the spirit of these works, but note that most application contexts of synthetic data in the literature focus on objects with well defined shapes (e.g. cars, buildings, etc.). In contrast, we emphasize that cracks, like most defects, have highly irregular shapes. We design an extendable physics-based crack simulator and subsequently leverage specific crack sensitive quasi-invariant models to learn more generalizable representations in our CAP-Net approach to reduce the Sim2Real gap.
## 3 Cracktal: A Fractal-based Simulator for Cracked Concrete Surfaces
In this section, we introduce Cracktal: a physics-based simulator that generates images of cracked concrete surfaces along with their semantic ground truth, depth and surface normal maps. The overall rendering workflow consists of two main steps: scene and crack generation. A set of albedo, roughness, normal and height maps are used to set the scene based on physics based rendering rules. A random crack is then generated using our fractal generator model, detailed below, and added to the scene's material. The full scene is then rendered and corresponding ground truth maps are generated. Figure 1 illustrates examples of the synthetic images generated with a \(2048\times 2048\) resolution.
### Physics-based Scene Generation
In the scene generation process, non-relevant backgrounds (e.g. sky, out-of-focus buildings, etc.) are excluded, assuming an up-close camera. The base components of physics based rendering (PBR) workflows, i.e. albedo, normal, roughness, and height maps, are applied to a plane mesh grid to generate a realistic looking concrete surface, defining its color, surface and subsurface scattering, and geometrical displacement respectively. The required PBR metallicity map is included but remains uniformly zero, as concrete is a dielectric. An optional ambient occlusion map can be included to introduce surface markings, e.g. graffiti. The textures used in this work were created from real concrete images from the CODEBRIM training dataset [37] and decomposed manually by the Substance B2M software.
The environment is illuminated utilizing a simulated natural sunlight source. The black body radiator possesses two key attributes: luminous intensity (\(L_{I}\)) and color temperature (\(L_{T}\)). Luminous intensity determines the amount of energy that the light source emits into the scene, whereas color temperature defines the chromaticity of the illuminant. In the datasets of our later study, a color temperature of \(L_{T}=5800\) Kelvin and intensity \(L_{I}=3.3\) were chosen. The rotation of the light source is parameterized by its Euler \(\alpha\), \(\beta\), \(\gamma\) angles as in common conventions. \(\alpha\) and \(\gamma\) are fixed angles with values \(\frac{\pi}{3}\) and \(0\). By varying the \(\beta\) angle, we simulate the change of the hour of day during which the image is captured. The \(\beta\) angle is randomly sampled from:
\[\beta\sim\mathcal{U}(\frac{-\pi}{6},\frac{\pi}{6}) \tag{1}\]
### Fractal-based Crack Generation
Cracks are highly irregular, but like many other patterns found in nature can be represented as fractals. In order to generate a crack pattern, we draw inspiration from a decades-old model presented by [32] as a baseline. The authors suggest the use of a stochastic version of the Koch "snowflake" fractal in the generation of pavement distress features, e.g. cracks on a road surface. A conventional Koch "snowflake" fractal can be generated through the iterative splitting of each straight line into three equal length segments. The middle segment is then replaced by two segments of equal length to form an equilateral triangle. These steps are repeated for each straight line to create a regular fractal until a desired subdivision depth is reached.
By modifying the displacement parameters at each step, it is feasible to generate non-uniform fractals that resemble cracks. Rather than dividing each line into identical segments to form an equilateral triangle, the position of the third point, which is determined by both the magnitude \(r\) and angle \(\theta\), is altered in each step. In our simulation, the angle is sampled from a Gaussian normal distribution with a mean of \(\mu=0\) and a standard deviation of \(\sigma=30\) degrees. The probability density of displacement magnitude r is given by \(P(r)=\frac{2r}{p^{2}}\) where p is a hyper-parameter. For intuition, Figure 2 illustrates these steps for a "snowflake" and the stochastic version for crack generation.
Finally, before adding the crack to the rendered scene, the generated crack is randomly translated and rotated across the scene, and a Gaussian blur is applied in order to introduce width to the crack.
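To make the construction concrete, the following is a minimal sketch of the stochastic displacement step described above, simplified to one displaced point per segment and assuming the displacement is decomposed into components normal and tangential to each segment (a convention the text does not fix); all names are illustrative rather than Cracktal's actual API.

```python
import numpy as np

def sample_displacement(p, rng, sigma_deg=30.0):
    # The magnitude r has density P(r) = 2r / p^2 on [0, p]; its CDF is (r/p)^2,
    # so inverse-CDF sampling gives r = p * sqrt(u) with u ~ U(0, 1).
    r = p * np.sqrt(rng.uniform())
    theta = np.deg2rad(rng.normal(0.0, sigma_deg))  # angle ~ N(0, 30 degrees)
    return r, theta

def stochastic_koch(a, b, depth, p, rng):
    """Recursively displace segment points to build an irregular crack polyline."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if depth == 0:
        return [a, b]
    tangent = (b - a) / np.linalg.norm(b - a)
    normal = np.array([-tangent[1], tangent[0]])     # unit normal to the segment
    r, theta = sample_displacement(p, rng)           # randomized at every iteration
    c = (a + b) / 2.0 + r * (np.cos(theta) * normal + np.sin(theta) * tangent)
    left = stochastic_koch(a, c, depth - 1, p, rng)
    return left[:-1] + stochastic_koch(c, b, depth - 1, p, rng)  # drop duplicate point

rng = np.random.default_rng(0)
crack = np.array(stochastic_koch((0.0, 0.0), (1.0, 0.0), depth=6, p=0.05, rng=rng))
```

Rasterizing the resulting polyline and applying the Gaussian blur mentioned above then gives the crack its width.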
### Annotation of Real World Data
In conjunction with simulated data, we require real world images to validate systems trained with synthetic images. Ideally, the chosen real world images should offer additional challenges that make generalization less straightforward. For these reasons, we semantically annotated images provided in the CODEBRIM dataset [37] on a pixel basis.
We have chosen this dataset as its development has been motivated by the need for a concrete visual inspection dataset that contains other overlapping defects and features various levels of deterioration, defect severity and surface appearance. Previous works [57, 20, 44, 5, 59] focused on data where cracks are the only visible defect, and they are usually centered in the image, making them unrealistically easy to segment. Most of them also show pavement cracks, which may differ in appearance from concrete cracks. In many CODEBRIM images, other defects like exposed reinforcement bars, spallation, corrosion and calcium leaching are present. In particular the latter shares visual similarities with cracks, which makes the prediction more challenging.
We selected image patches containing visible cracks and annotated them using GIMP. In this way, multiple annotators semantically annotated images of \(1500\times 844\) resolution, each containing at least one crack. We consolidated consistent annotations into a set of 420 examples for our real-world test set.
## 4 CAP-Net: A Hybrid Neural Approach for Crack Segmentation
To fully leverage our simulator and bridge the Sim2Real gap, we introduce **CAP**-Net, a hybrid neural model based on **C**onsistency enforced training between **A**daptive instance normalization and **P**ointwise mutual information. It is composed of two parallel network branches, each one based on a U-Net architecture [41]. During training, the first network receives the RGB image stylized by the AdaIN module. The second network is equipped with a PMI module to extract representations that are projected into a quasi-invariant feature space that helps with the domain transfer. Both networks are connected with a consistency loss to enforce common representations across the different domains. We train our pipeline end-to-end with the help of the synthetic images and their ground truths generated by Cracktal, as depicted in Figure 3.
### Pointwise Mutual Information
Cracks can be viewed as anomalies in a textured surface. Pointwise mutual information (PMI), computed in a local neighborhood, is a measure of deviation of the gray-level co-occurrence statistics in the neighborhood relative to marginal statistics of gray-levels globally. Thus, the PMI measure flags boundaries between dominating texture patterns in the image. The resulting output is an indicator of texture anomalies and directly relates to hypothesized cracks and other boundary structures.
Drawing inspiration from prior work [24], we note that natural objects produce probability density functions that are well clustered. These clusters can be discovered in an unsupervised manner and fitted by kernel density estimation (KDE). The obtained density functions can then further be leveraged to distinguish common pixel pairs (belonging to the background texture) from less common pairs (belonging to anomalies in the texture or edges). To compute the PMI scores between two pixels, we first need to estimate the joint distribution and marginal distributions for image pixels. For the marginal distribution \(P(A)\), we sample pixels randomly from the image to perform the KDE. To estimate the joint distribution \(P(x_{i},x_{j})\), we sample pairs of pixels at various distances and perform KDE.

Figure 1: Cracktal examples with texture variety and presence of other perturbations/anomalies like moss and graffiti. Images were generated at \(2048\times 2048\) resolution and are heavily down-sampled for view in pdf at loss of quality.

Figure 2: Example Koch fractal (left) and stochastic version for cracks (right). While the magnitude \(r\) and the angle \(\theta\) of the displacement are typically fixed, they are randomized for each displacement iteration (i) when generating the irregular crack shape.
For a pixel pair \((x_{i},x_{j})\) the PMI score is computed as:
\[\mathit{PMI}(x_{i},x_{j})=\log\frac{P(x_{i},x_{j})^{\tau}}{P(x_{i})\cdot P(x_{j})} \tag{2}\]
The parameter \(\tau\) boosts the scores of common pairs and addresses the bias of PMI towards low-frequency events (i.e. when the marginal distributions are small). When \(\tau=1\), \(\mathit{PMI}(x_{i},x_{j})\) specifically compares the likelihood of observing the pixel \(x_{i}\) near \(x_{j}\) to the overall probability of observing \(x_{i}\) and \(x_{j}\) in the image. The final affinity score for each pixel is computed using the PMI scores with the neighboring pixels. We define the set of neighbouring pixels of \(x_{i}\) as \(N_{i}\). The PMI scores between a pixel and its neighbors are exponentiated and summed to estimate an affinity score for each pixel in the original image, indicating if this pixel belongs to the dominant background texture or is an anomaly:
\[\textit{Affinity}(x_{i})=\sum_{x_{j}\in N_{i}}e^{PMI(x_{i},x_{j})} \tag{3}\]
The scores are then passed to a neural network for crack prediction. Note that the exponential is important to obtain more stable affinity scores, which helps with learning.
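As a rough illustration of Eqs. (2) and (3), the sketch below estimates the marginal and joint densities by Gaussian KDE from randomly sampled pixels and accumulates exponentiated PMI scores over a 4-neighbourhood; the sample count, neighbourhood and value of \(\tau\) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pmi_affinity(img, n_samples=5000, tau=1.25, seed=0):
    """Per-pixel affinity from PMI of gray-level pairs (written for clarity, not speed).

    img: 2-D float array of gray levels in [0, 1].
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    offs = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])  # 4-neighbourhood

    # Marginal P(x): KDE over gray levels sampled uniformly from the image.
    marg = gaussian_kde(img[rng.integers(0, h, n_samples),
                            rng.integers(0, w, n_samples)])

    # Joint P(x_i, x_j): KDE over gray-level pairs of neighbouring pixels.
    ys, xs = rng.integers(1, h - 1, n_samples), rng.integers(1, w - 1, n_samples)
    d = offs[rng.integers(0, 4, n_samples)]
    joint = gaussian_kde(np.stack([img[ys, xs], img[ys + d[:, 0], xs + d[:, 1]]]))

    affinity = np.zeros_like(img, dtype=float)
    inner = img[1:-1, 1:-1].ravel()
    for dy, dx in offs:
        nbr = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
        # PMI(x_i, x_j) = tau*log P(x_i, x_j) - log P(x_i) - log P(x_j)   [Eq. (2)]
        pmi = (tau * np.log(joint(np.stack([inner, nbr])))
               - np.log(marg(inner)) - np.log(marg(nbr)))
        affinity[1:-1, 1:-1] += np.exp(pmi).reshape(h - 2, w - 2)         # Eq. (3)
    return affinity
```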
### Style Transfer
In addition to the use of PMI as an inductive bias, we further reduce the Sim2Real gap from the data-driven angle by performing style transfer operations based on adaptive instance normalization (AdaIN) [22], which aligns the mean and variance of the content features with those of the style features. The content features are obtained by encoding an image generated by the Cracktal simulator using a VGG network pretrained on ImageNet [18]. Similarly, the style features are encoded from a texture image. The AdaIN layer is used to perform style transfer in the feature space by aligning the features of the content and style images. A decoder is learned to invert the AdaIN output to the image space.
We sample textures from the describable textures dataset [15] to perform the style transfer on Cracktal images. This way, we can augment the synthetic training data and increase the texture variety while at the same time keeping the semantic content of the original image, and more specifically the crack, intact. We note that style transfer is only performed during training, with probability \(0.5\), and is completely dropped during testing.
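For reference, the AdaIN operation itself reduces to a channel-wise renormalization; this sketch follows the standard formulation of [22], assuming \(N\times C\times H\times W\) feature tensors from the VGG encoder.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5):
    """Align the per-channel mean and std of content features with the style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

The stylized training image is then obtained by passing adain(enc(content), enc(style)) through the learned decoder.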
### Consistency Loss
Finally, to get the best of both worlds, we add a consistency loss between the network trained with RGB images and the network trained with PMI based affinity scores. We postulate that ensuring consistency of the latent space representations across projected subspaces of the outputs of the two networks will lead to robust features that enable better transfer to real data. For a training image \(X_{i}\), the consistency loss is imposed as follows:
\[\mathcal{L}_{\textit{CL}}(X_{i})=(f_{1}(\textit{enc}_{\textit{rgb}}(X_{i}))-f_ {2}(\textit{enc}_{\textit{pmi}}(X_{i})))^{2} \tag{4}\]
where \(\textit{enc}_{\textit{rgb}}\) and \(\textit{enc}_{\textit{pmi}}\) are the encoding functions for the RGB and PMI networks respectively. The obtained latent encodings are then passed to projection heads (\(f_{1}\) and \(f_{2}\)) before contrasting them.
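A minimal sketch of how Eq. (4) could enter the overall training objective is given below; the pairing with per-branch segmentation losses and the weight \(\lambda\) are our assumptions (the section only specifies the consistency term), and all module names are placeholders for the actual encoders and projection heads.

```python
import torch.nn.functional as F

def cap_net_loss(x_rgb, x_pmi, y, net_rgb, net_pmi, proj1, proj2, lam=1.0):
    """net_rgb / net_pmi are U-Net-like models returning (latent, logits)."""
    z1, logits1 = net_rgb(x_rgb)   # branch fed with the (stylized) RGB image
    z2, logits2 = net_pmi(x_pmi)   # branch fed with PMI-based affinity scores
    seg = (F.binary_cross_entropy_with_logits(logits1, y)
           + F.binary_cross_entropy_with_logits(logits2, y))
    consistency = F.mse_loss(proj1(z1), proj2(z2))   # Eq. (4)
    return seg + lam * consistency
```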
## 5 Experiments
Our empirical investigation follows four key questions:
**(Q1) Are Cracktal assumptions plausible?** To corroborate the plausibility of our modelling assumptions and the utility of synthetic data for crack detection, we contrast the performance of a U-Net trained with real world publicly available data with a U-Net trained with our synthetic data.

Figure 3: Schematic of the Cracktal to CAP-Net training pipeline. Based on simulated images and their automatic annotation (yellow shading), consistency enforced training (green) is performed between two networks extended with adaptive instance normalization (AdaIN, blue) and pointwise mutual information (PMI, purple) respectively. For final inference, the AdaIN style-transfer module is dropped.
**(Q2) Do simulated auxiliary tasks improve generalization?** In the spirit of prior works [10, 25], we further consider how the addition of auxiliary tasks to the baseline U-Net can improve generalization performance in the context of crack segmentation. More specifically, crack patterns exhibit local geometric variations, e.g. variations in the surface normal distribution, and depth variations relative to the geometry and depth of the surrounding context. Similarly, PMI maps provide an auxiliary task of appearance anomaly extraction.
**(Q3) Does our approach of CAP-Net reduce the Sim2Real gap?** We empirically corroborate that our proposed method outperforms existing baselines, even when the latter are trained on real data, effectively demonstrating how our design choices along with a domain specific simulation can lead to more robust crack segmentation models.
**(Q4) Are all design choices for CAP-Net meaningful?** We ablate each component of our hybrid CAP-Net to showcase that each proposed element has meaningful impact towards the overall CAP-Net performance.
### Baselines and Additional Evaluation Datasets
In addition to SegCODEBRIM, we evaluate our models on a collection of the following public datasets: CRACK500 [57], GAPs384 [20], CFD [44], AEL [5], Cracktree200 [59]. We merge these into 950 images of cracks captured under various conditions. We refer to the experiments using these datasets collectively as the multi-source set. For consistency, we downsample all images to \(256\times 256\). As intuitive baselines, we consider the following models: a U-Net trained with synthetic data (U-Net), a U-Net trained with the collection of multi-source data (U-Net (MultiSet)), and a U-Net trained with real and synthetic data (U-Net (Sim+Real)). In addition, we compare to the attention based U-Net variant (Att-U-Net) [38] and to TransU-Net [9] that combines transformer-based architectures with U-Net. For the analysis of multi-task training in Q2, we further construct Multi-U-Net architectures, based on a single joint encoder and one separate decoder per modality, in the spirit of prior segmentation works outside the crack defect application [10, 25].
### Evaluation Metrics
Evaluating binary semantic segmentation maps with common overlap based scores such as Dice or Intersection over Union (IOU) comes with various limitations. For cracks, connectivity is important but slight over- or under-segmentation of crack pixels can be tolerated, especially knowing that the ground truth maps are usually annotated by humans using different annotation tools with varied settings. For these reasons, we take inspiration from the medical imaging literature and adapt various metrics to obtain more insights into our models [34, 27].
**Hausdorff based Metrics [23]:** For two point sets X and Y, the one-sided Hausdorff Distance from X to Y is:
\[\mathit{hd}(X,\,Y)=\max_{x\in X}\min_{y\in Y}dist(x,y) \tag{5}\]
where \(dist\) is a distance measuring function between pixels x and y. The bidirectional Hausdorff Distance is then:
\[\mathit{HDF}(X,\,Y)=\max(\mathit{hd}(X,Y),\mathit{hd}(Y,X)) \tag{6}\]
We use both the euclidean distance and radial basis function (RBF) as a distance measure between pixels. RBF, also known as the squared exponential kernel, is defined as:
\[\mathit{RBF}(x,y)=exp(-\frac{d(x,y)^{2}}{2l^{2}}) \tag{7}\]
where \(d\) is the euclidean distance between x and y. A main advantage of using RBF as a distance measure is that it decreases gradually the further the prediction is from the ground truth, whereas Dice or IOU scores decay completely regardless of the distance between the actual prediction and the ground truth. In the case of cracks, this distance could be just a few pixels.
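For binary masks, these distances are computed on the sets of foreground pixel coordinates; a straightforward (dense) sketch of Eqs. (5)-(6) with Euclidean distance could look as follows, with np.argwhere supplying the point sets.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Bidirectional Hausdorff distance (Eqs. 5-6) between two binary masks."""
    X, Y = np.argwhere(mask_a), np.argwhere(mask_b)  # foreground pixel coordinates
    D = cdist(X, Y)                                  # pairwise Euclidean distances
    return max(D.min(axis=1).max(),                  # hd(X, Y)
               D.min(axis=0).max())                  # hd(Y, X)
```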
**clDice:** The authors of [45] introduce a similarity measure centerlineDice (clDice), calculated by comparing the intersection of the prediction and ground truth masks and their morphological skeleta. Given two binary segmentation maps, ground truth \(GT\) and prediction \(P\), \(S_{GT}\) and \(S_{P}\) are the respectively extracted skeletons. Subsequently, the fraction of \(S_{X}\) that lies within \(Y\) (Topology Precision), and vice-a-versa (Topology Sensitivity) are:
\[T_{prec}(S_{P},GT)=\frac{\mid S_{P}\cap GT\mid}{S_{P}} \tag{8}\]
\[T_{sens}(S_{GT},P)=\frac{\mid S_{GT}\cap P\mid}{S_{GT}} \tag{9}\]
These can then be used to define the clDice score:
\[\mathit{clD}(GT,P)=2\times\frac{T_{prec}(S_{P},GT)\times T_{sens}(S_{GT},P)}{ T_{prec}(S_{P},GT)+T_{sens}(S_{GT},P)} \tag{10}\]
\(\boldsymbol{F1_{\theta}}\)**:** We also consider an F1 score with a pixel tolerance \(\theta\). In the experiments in the main body, we set \(\theta=10\).
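Likewise, clDice can be computed from morphological skeletons in a few lines; in this sketch, skimage's skeletonize stands in for the skeleton extraction step, which is an implementation choice rather than part of the definition.

```python
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """centerlineDice between binary masks (Eqs. 8-10)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    s_pred, s_gt = skeletonize(pred), skeletonize(gt)
    t_prec = (s_pred & gt).sum() / (s_pred.sum() + eps)   # Eq. (8)
    t_sens = (s_gt & pred).sum() / (s_gt.sum() + eps)     # Eq. (9)
    return 2 * t_prec * t_sens / (t_prec + t_sens + eps)  # Eq. (10)
```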
### Results and Discussion
**(Q1) The modelling assumptions in Cracktal are plausible:** The top half of Table 1 shows the performance of the models when evaluated on SegCODEBRIM. Despite training with real-world data and annotations of the multi-source dataset, U-Net (MultiSet) achieves an F1 score of \(25.6\%\), which is worse than the performance of the U-Net trained with synthetic Cracktal data. A similar trend can be observed on all metrics except \(F1_{\theta=10}\), where U-Net (MultiSet) outperforms the baseline U-Net only marginally. We hypothesize that the overall worse performance achieved by U-Net (MultiSet) can be explained by the fact that the used training dataset comes from a variety of sources that tend to feature inconsistent annotation styles. More generally, U-Net (MultiSet) produces a higher number of false positives and detects other anomalies present on concrete surfaces compared to U-Net, as evidenced by the clDice and Hausdorff distance scores. These results underscore the significance of accurate labeling, which is guaranteed in simulation. Thus, we find the plausibility of our modelling assumptions in the Cracktal simulator to be well supported.
**(Q2) Auxiliary simulated tasks improve the generalization:** We consider two auxiliary tasks: depth and surface normal prediction and estimation of pointwise mutual information, denoted by the trained Multi-U-Net (D-SN) and Multi-U-Net (PMI) in Table 1 respectively. Both models outperform the baseline U-Net significantly, improving clDice by \(7+\) and the Euclidean Hausdorff distance measure by \(10+\) on the SegCODEBRIM dataset. Similar trends can be observed for the other metrics. Clearly, the depth and surface normal maps predicted by Multi-U-Net (D-SN) provide valuable information about the 3D spatial structure and layout of the scene, thus improving generalization on real data. Similarly, estimating such auxiliary information can be seen as an inductive bias; Multi-U-Net (PMI) learns representations that focus on the anomalies in the images, and cracks can also be understood as anomalies.
However, both of these models are less robust than our CAP-Net, highlighted also by the fact that the Multi-U-Nets do not significantly outperform the baseline U-Net when evaluated on the multi-source data (bottom half of Table 1).
**(Q3) CAP-Net's hybrid modelling effectively reduces the Sim2Real gap:** Revisiting Table 1, CAP-Net clearly outperforms all approaches on SegCODEBRIM (top half of the table). For instance, we observe an improvement of \(7\%\) in F1 and \(11\) in Hausdorff distance with RBF kernel compared to U-Net. Similarly, our model performs better than all the baselines on the multi-source set (bottom half of the table), except U-Net (MultiSet), which has been trained with in-distribution training data. Overall, training on the real multi-source data is only beneficial when deploying in a closely related context, whereas the modelling of CAP-Net in conjunction with the Cracktal simulator provides a robust solution for widely applicable crack segmentation by adapting from purely synthetic data.
Table 1: Performance of all models in terms of \(F1(\uparrow)\), \(F1_{\theta=10}(\uparrow)\), \(clDice(\uparrow)\), \(HDF_{Euc}(\downarrow)\) and \(HDF_{RBF}(\downarrow)\), evaluated on SegCODEBRIM (top half) and on the multi-source set (bottom half).
**(Q4) All CAP-Net design choices contribute to performance improvements:** The ablations in Table 2 show the performance of the different sub-modules of our system on SegCODEBRIM and the multi-source set.
First, the style transfer provided by AdaIN improves the generalization to real world data compared to a simple U-Net on SegCODEBRIM (U-Net (AdaIN) vs. CAP-Net w/o AdaIN), but leads to a statistically insignificant performance change on multi-source data. Second, the addition of a second encoder branch (bottom half of Figure 3) that receives affinity scores based on PMI as input further increases the performance on most metrics, even when the branches are not contrasted (CAP-Net w/o CL). For instance, we obtain a \(1.5\%\) improvement in F1 and \(3\) on the RBF Hausdorff distance on SegCODEBRIM. Third, the subsequent addition of the consistency loss leads to consolidated segmentation maps between both encoders and improves performance on various metrics (CAP-Net w/o CL vs. "full" CAP-Net). We obtain a \(5\%\) improvement in F1 and a decrease of \(6\) in Hausdorff distance respectively on SegCODEBRIM.
The results of Table 2 empirically corroborate the efficacy of our design choices. The incorporation of PMI-based modeling approaches and purely data-driven U-Net style learning, augmented by the consistency loss, improves the appearance invariance of our model and thus leads to better generalization to out of distribution data.
## 6 Conclusion
In this paper, we introduced Cracktal, a flexible simulator for generating synthetic cracked concrete surface data with ground truth labels. Additionally, we proposed a hybrid design that combines data-driven models with single-image statistical estimation models to fully leverage synthetic data. Our empirical validation demonstrates that this approach reduces the Sim2Real gap. Our work emphasizes the importance of fusing expert-based inductive biases with learning from simulated data, and provides a new domain in which to explore domain generalization and adaptation methods.
## 7 Acknowledgments
We acknowledge funding from the European Union H2020 Research and Innovation Programme under grant agreement number 769066. This work was also supported by the Artificial Intelligence Systems Engineering Laboratory (AISEL) project under funding number 01IS19062, funded by the German Federal Ministry of Education and Research (BMBF) program "Einrichtung von KI-Laboren zur Qualifizierung im Rahmen von Forschungsvorhaben im Gebiet der Künstlichen Intelligenz".
|
2309.11078 | Assemblies as Semigroups | In this paper we give an algebraic characterization of assemblies in terms of
bands of groups. We also consider substructures and homomorphisms of
assemblies. We give many examples and counterexamples. | Ulderico Dardano, Bruno Dinis, Giuseppina Terzo | 2023-09-20T06:00:34Z | http://arxiv.org/abs/2309.11078v1 | # Assemblies as Semigroups
###### Abstract
In this paper we give an algebraic characterization of assemblies in terms of bands of groups. We also consider substructures and homomorphisms of assemblies. We give many examples and counterexamples.
## 1 Introduction
The notion of assembly was introduced in [2], as a generalisation of the notion of group. The main idea is that every element \(x\) of an assembly has its own "neutral" element denoted \(e(x)\) which can be seen as a sort of error term, or the degree of flexibility of the element \(x\). Some basic properties of assemblies can be found in [3, 4, 5]. In [2] it was also shown that besides groups (which are assemblies for which the function \(e\) is always constantly equal to the universal unique neutral element), the so-called _external numbers_ also satisfy the assembly axioms.
In order to better understand this sort of algebraic structures and to compare them with existing structures based on semigroups, we consider a slightly modified definition in which commutativity is not required. An additional novelty lies in the last condition in Definition 2.1. Prior to this, the requirement was that for all \(x\) and \(y\), \(e(xy)\) would have to be equal to either \(e(x)\) or \(e(y)\) (we sometimes call this a _strong assembly_). We now assume a weaker condition which is indeed implied by the previous one: we require \(e(xy)=e(x)e(y)\), for all \(x\) and \(y\), which means that the function of local neutral elements is an homomorphism of semigroups. Thus, by changing the definition, we are proposing a more general notion that, for example, allows us to consider the not necessarily commutative assembly \(\mathcal{A}(\mathcal{G})\) of all cosets \(gN\) where \(g\) and \(N\) range in a group \(G\) and in the lattice \(n(G)\) of all normal subgroups of \(G\), respectively (see Example 2.2). Moreover, some expected results such as the fact that any Cartesian product of assemblies is an assembly now hold, while before this was not the case (see Example 2.2).
In this paper we give a characterization of assemblies from a purely algebraic point of view by showing that _a semigroup is an assembly if and only if it is a band of groups and even a semilattice of groups if idempotent elements commute among themselves._ This provides perhaps a new way to approach the theory of bands of groups. For example, we can state that _an assembly is strong if and only if the set of idempotents (which are called_ magnitudes _in the literature) is totally ordered_ by the usual relation \(x\leq y\) if and only if \(xy=y\). We also briefly consider subassemblies, i.e. substructures which are also assemblies, and homomorphisms of assemblies, i.e. homomorphisms of semigroups which are assemblies.
For fundamental results and/or undefined notions about semigroups we refer to [6] and [8]. According to what is customary in semigroup theory we generally (but not always) use multiplicative notation (and consequently juxtaposition) for the binary operation.
## 2 Assemblies as bands of groups
We introduce the definition of assembly in multiplicative notation. By a semigroup we mean a nonempty set with a binary associative operation.
**Definition 2.1**.: _A nonempty semigroup \((S,\cdot)\) is called an assembly if the following hold_
\[(A_{1}) \forall x\,\exists e=e(x)\,(xe=ex=x\wedge\forall f\,\,(xf=fx=x\to ef =fe=e))\] \[(A_{2}) \forall x\,\exists s=s(x)\,(xs=sx=e\wedge e(s)=e(x))\] \[(A_{3}) \forall x\,\forall y\,(e(xy)=e(x)e(y))\]
_To make explicit the functions that exist by conditions \((A_{1})\) and \((A_{2})\) we write \((S,\cdot,e,s)\) instead of \((S,\cdot)\)._
The functional notation \(e(x)\) and \(s(x)\) used above is justified by the fact that the elements \(e\) and \(s\) are unique, as in fact:
* if \(e^{\prime}\) satisfies condition \((A_{1})\), one has \(e^{\prime}=e^{\prime}e=ee^{\prime}=e\),
* if \(s^{\prime}\) satisfies condition \((A_{2})\) one has \(s^{\prime}=s^{\prime}e(s^{\prime})=s^{\prime}e(x)=s^{\prime}(xs)=(s^{\prime}x)s=e(x)s=e(s)s=s\).
So, we may write indiscriminately \(e(x)\) or \(x^{0}\) to denote the unique element \(e\) associated with \(x\) and \(s(x)\) or \(x^{-1}\) to denote the unique element \(s\) such that \(sx=xs=x^{0}\).
**Examples 2.2**.:
1. _Every (possibly non-commutative) group_ \(G\) _is an assembly with_ \(e(x)\) _constantly equal to the neutral element of_ \(G\)_. Furthermore, the semigroup_ \(G\cup\{0\}\)_, obtained by adding a zero to_ \(G\) _(in the usual way, by postulating_ \(0x=x0=0\)_), is also an assembly. In particular, the multiplicative semigroup of a (possibly skew) field is an assembly, while on the other hand the usual multiplicative semigroup of the integers is not._
2. _An element \(e\) of a semigroup \(S\) is said to be_ idempotent _if \(e^{2}=e\). A semigroup in which all elements are idempotent is called a_ band _and is clearly an assembly. A group with more than \(1\) element is not a band. A commutative band is called a_ semilattice _since it may be regarded as a lower semilattice with meet operation equal to the product. Clearly, for any set \(S\), its powerset has two canonical semigroup structures, \((P(S),\cap)\) and \((P(S),\cup)\), which are both semilattices._
3. _Any totally ordered set is a strong assembly with_ \(xy=\inf\{x,y\}\)_, and_ \(e(x)=x=s(x)\)_. In particular,_ \((B,\cup)\) _where_ \(B\) _is the set of all ordinals less than a given ordinal is a strong assembly._
4. _The cartesian product of any family of assemblies is an assembly, with respect to the pointwise multiplication (the proof is a straightforward verification). However, the product of strong assemblies may not be strong, as in the example \(\{0,1\}\times\{0,1\}\)._
5. _The structures_ \((\mathbb{E},+)\) _and_ \((\mathbb{E}\backslash\mathcal{N},\cdot)\) _are strong assemblies, where_ \(\mathbb{E}\) _denotes the external set of external numbers and_ \(\mathcal{N}\) _the external set of all neutrices (see_ _[_2_, Thm. 4.10]__)._
6. _Let_ \(\mathbb{F}\) _be a non-archimedean ordered field. Let_ \(C\) _be the set of all convex subgroups for addition of_ \(\mathbb{F}\) _and_ \(\mathcal{Q}\) _be the set of all cosets with respect to the elements of_ \(C\)_. The set_ \(\mathcal{Q}\) _is called the_ quotient class _of_ \(\mathbb{F}\) _with respect to_ \(C\)_. In_ _[_3_]_ _it was show that_ \(\mathcal{Q}\) _is a strong assembly._
By using a standard technique, let us rewrite the assembly axioms with details and proofs.
**Lemma 2.3**.: _Condition \((A_{1})\) is equivalent to_
\[(A_{1}^{\prime})\qquad\forall x\,\exists e=e(x)\,((xe=ex=x)\wedge e^{2}=e).\]
_So, in particular the set \(e(S)=\{e(a)\mid a\in S\}\) of all magnitudes of an assembly coincides with the set of idempotents of \(S\), usually denoted by \(E(S)\)._
Proof.: If \((A_{1})\) holds, \(e=e(x)\) is unique by the above and \(ee=e\); therefore \((A_{1}^{\prime})\) holds. Conversely if \((A_{1}^{\prime})\) holds and \(xf=fx=x\), then we have \(f=e\) by unicity in \((A_{1}^{\prime})\). Hence \(e^{2}=ef=fe=e\) and thus \((A_{1})\) holds.
Note that it is trivial that if the idempotents commute, i.e. \(\forall x,y\in E(S)\,(xy=yx)\), then \(E(S)\) is a semigroup, even a semilattice, as \((xy)(xy)=x(yx)y=x(xy)y=(xx)(yy)=xy\). But this is not always the case.
**Example 2.4**.: _If \(A=\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)\) and \(M=\left(\begin{smallmatrix}0&1\\ 0&1\end{smallmatrix}\right)\), then \(S=\{0,A,M,AM\}\) is a semigroup under the usual matrix multiplication, where \(MA=0\) and \(E(S)=\{0,A,M\}\), but this set is not closed under multiplication._
**Lemma 2.5**.: _Assume that in a semigroup \(S\) condition \((A_{1})\) holds. Then condition \((A_{2})\) is equivalent to_
\[(A_{2}^{\prime})\qquad\forall x\,\exists s=s(x)\,(xs=sx=e(x)).\]
Proof.: If condition \((A_{2})\) holds, then it is clear that the same \(s\) works in \((A_{2}^{\prime})\) as well. Let us verify that it is unique. If \(s^{\prime}\) is in the same condition w.r.t. \(x\), then by \((A_{2})\) we have that \(e(s)=e(x)=e(s^{\prime})\), hence \(s^{\prime}=s^{\prime}e(s^{\prime})=s^{\prime}e(x)=s^{\prime}xs=e(s^{\prime})s=e (s)s=s\), as desired. Conversely, from \((A_{2}^{\prime})\) it follows immediately that \(s(s(x))=x\) and \(e(x)=e(s)\) by the symmetric roles of \(x\) and \(s\).
The first two conditions in the definition of assembly may now be regarded from a different point of view by the next proposition which deals with associative _union of groups_. They are also known as _completely regular semigroups_ and have been studied in many papers. For the fundamental results on the topic we refer to [6, ch. IV] and [8, ch. IV]. However, here we prefer to give a short direct proof which uses the Clifford decomposition argument to see a completely regular semigroup as a union of groups.
**Proposition 2.6**.: _For a semigroup \(S\) the following are equivalent:_
1. _Conditions_ \((A_{1}^{\prime})\) _and_ \((A_{2}^{\prime})\) _hold;_
2. \(S=\bigcup_{e\in E(S)}S_{e}\)_, where_ \(S_{e}=\{a\in S\ |\ e(a)=e\}\) _is a group (called the Clifford component of_ \(S\) _at_ \(e\)_);_
3. \(S\) _is a union of (disjoint) groups._
Proof.: First of all recall that \((A_{1}^{\prime})\) and \((A_{1})\) are equivalent and the same happens to \((A_{2}^{\prime})\) and \((A_{2})\). If all of them hold, since \(e^{2}=e\), by \((A_{1}^{\prime})\) we have \(e=e(e)\) and then \(e\in S_{e}\). Moreover, if \(a\in S_{e}\) then \(s(a)\in S_{e}\) by \((A_{2})\). Finally, if \(a,b\in S_{e}\), then \(abe=aeb=eab=ab\) and so \(ab\in S_{e}\). Thus each \(S_{e}\) is a group with neutral element \(e\). Finally, it is clear that \(a\in S_{e(a)}\) for each \(a\in S\). Furthermore, the \(S_{e}\) are pairwise disjoint by the uniqueness of \(e(a)\), since groups have only one neutral element (compare also to \((A_{1}^{\prime})\)).
Finally, it is clear that if \(S\) is a union of (disjoint) groups then \((A_{1}^{\prime})\) and \((A_{2}^{\prime})\) hold by considering, for each \(a\in S\), the neutral element and the inverse (resp.) from the unique group in which \(a\) lies.
If we denote \(s(x)\) by \(x^{-1}\) and \(e(x)\) by \(x^{0}\), then we have the formulas
\[(F)\qquad x^{0}=x^{-1}x=xx^{-1}\,,\qquad(x^{-1})^{-1}=x\,,\qquad(xy)^{-1}=y^{-1}x^{-1}\]
which are consistent with the language of group theory and appear also in [2], but in a commutative context.
Thus, it seems natural to ask whether also the formula
\[(xy)^{0}=x^{0}y^{0}\]
holds. In other words, we investigate if \((A_{1})\) and \((A_{2})\) imply \((A_{3})\), that is, if the function \(e(x)\) is an homomorphism. This is certainly _true when \(A\) is commutative_, as indeed we have \((xy)(x^{0}y^{0})=x(x^{0}y)y^{0}=x(yx^{0})y^{0}=xx^{0}yy^{0}=xy\) and similarly \((x^{0}y^{0})(xy)=xy\). Moreover, since here we have not used plain commutativity but just the fact that idempotents commute with all other elements, then, according to [1, Lemma 3.1], we know that the _formula \((A_{3})\) always holds if idempotents commute_. So we are in a condition to state a consequence of the fundamental Clifford's Theorem [1, Theorem 3].
**Theorem 2.7**.: _Semigroups which are a union of groups and in which idempotents commute are assemblies._
Of course there exist elementary semigroups in which idempotents do not commute.
**Example 2.8**.: _Any left-zero band \(B\), that is any semigroup in which the formula \(xy=x\) holds, is an assembly in which distinct idempotents do not commute (even if they still are a semigroup)._
_In particular the multiplicative structure \(B=\{a,b\}\), with \(a^{2}=ab=a\neq b=ba=b^{2}\), is a non-commutative band (and hence an assembly)._
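Since the axioms are first-order conditions over the multiplication table, they can be checked mechanically on small examples. The following sketch (our own illustration, not taken from the paper) tests Definition 2.1 for a finite semigroup given as a product table and confirms that the left-zero band of Example 2.8 is an assembly.

```python
from itertools import product

def is_assembly(S, mul):
    """Check axioms (A1)-(A3) for a finite magma with product table mul[(x, y)]."""
    if any(mul[mul[x, y], z] != mul[x, mul[y, z]] for x, y, z in product(S, repeat=3)):
        return False                                  # not even a semigroup
    def e_of(x):
        # (A1): a local identity of x absorbing every other local identity of x.
        idents = [f for f in S if mul[x, f] == x and mul[f, x] == x]
        for e in idents:
            if all(mul[e, f] == e and mul[f, e] == e for f in idents):
                return e
    e = {x: e_of(x) for x in S}
    if None in e.values():
        return False                                  # (A1) fails
    if any(not any(mul[x, s] == mul[s, x] == e[x] and e[s] == e[x] for s in S)
           for x in S):
        return False                                  # (A2) fails
    return all(e[mul[x, y]] == mul[e[x], e[y]] for x, y in product(S, repeat=2))  # (A3)

B = ["a", "b"]                                        # left-zero band: xy = x
assert is_assembly(B, {(x, y): x for x in B for y in B})
```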
Let us now see a more complicated, but very natural, example of an assembly. First, let us recall that in the set \(P(S)\) of all subsets of a semigroup \(S\) one can define a multiplication of \(X,Y\in P(S)\) by the setwise product \(XY=\{xy\mid x\in X,\,y\in Y\}\) and get a semigroup structure for \(P(S)\), called the _power-semigroup_ [8, I.7.5].
**Example 2.9**.: _The set \(\mathcal{A}(\mathcal{G})\) of all cosets \(gN\) of all normal subgroups \(N\) of a group \(G\) is a subsemigroup of the power-semigroup \(P(G)\) since \(g_{1}N_{1}=\{g_{1}\}N_{1}\) and for each \(g_{1},g_{2}\in G\) we have_
\[(g_{1}N_{1})(g_{2}N_{2})=(g_{1}g_{2})N_{1}N_{2},\]
_where \(N_{1},N_{2}\) lie in the (semi)lattice \(n(G)\) of all normal subgroups of \(G\), which is a subsemigroup of \(\mathcal{A}(\mathcal{G})\)._
_The functions \(\ e(gN)=N\) and \(\ s(gN)=g^{-1}N\) equip \(\mathcal{A}(\mathcal{G})\) with a structure of assembly, as it can be easily checked, which is non-commutative if \(G\) is non-commutative. Then \(E(G)=n(G)\) and any coset \(gN\) belongs only to the group \(G/N\) which is, of course, a subsemigroup of \(\mathcal{A}(\mathcal{G})\). Thus, the semigroup \(\mathcal{A}(G)\) is the union of the factor groups \(G/N\) (with their multiplicity):_
\[\mathcal{A}(G)=\bigcup\nolimits_{N\in n(G)}G/N\]
_In such a structure idempotents do commute. Finally, when \(G\) is cyclic with order \(p^{n}\), the assembly \(\mathcal{A}(G)\) has order \(1+p+\cdots+p^{n}\) while the assembly \(G\times n(G)\) has order \((n+1)p^{n}\)._
Now a crucial example: there are unions of groups which are not assemblies.
**Counterexample 2.10**.: _Let \(R\) be the semigroup \(M(2,G,2,P)\) of so-called Rees \(2\times 2\)-matrices where \(G=\{1,-1\}\) is a multiplicative group of order \(2\) and \(P=\left(\begin{smallmatrix}1&-1\\ 1&1\end{smallmatrix}\right)\). Then \(R\) is a union of groups which is not an assembly._
Proof.: If we equip the set of \(2\times 2\) matrices over any field with characteristic \(\neq 2\) with the multiplication \(*\) defined by \(A*B=APB\) (where juxtaposition on the right-hand side means the usual row-by-column product), which is clearly associative, then the set \(S\) consisting of the matrices \(A=\left(\begin{smallmatrix}1&0\\ 0&0\end{smallmatrix}\right)\), \(B=\left(\begin{smallmatrix}0&1\\ 0&0\end{smallmatrix}\right)\), \(C=\left(\begin{smallmatrix}0&0\\ 1&0\end{smallmatrix}\right)\), \(D=\left(\begin{smallmatrix}0&0\\ 0&1\end{smallmatrix}\right)\) and their opposites \(-A,-B,-C,-D\) is a semigroup (with respect to \(*\)) which is the union of the non-trivial groups \(\{A,-A\},\{B,-B\},\{C,-C\},\{D,-D\}\) with neutral elements \(A,B,-C,D\) respectively. Thus, \((A_{1})\) and \((A_{2})\) hold. However \((A_{3})\) fails since, on the one hand, from \(B*C=A\) one has \(e(B*C)=e(A)=A=A^{2}\), while on the other hand \(e(B)*e(C)=B*(-C)=-A\) is not even idempotent as \((-A)^{2}=A^{2}=A\).
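The computations in the proof can also be verified by machine; a small check (our own, with illustrative names) reads:

```python
import numpy as np

P = np.array([[1, -1], [1, 1]])
star = lambda X, Y: X @ P @ Y        # the twisted product X * Y = X P Y

A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])
C = np.array([[0, 0], [1, 0]])

assert (star(B, C) == A).all()       # B * C = A
assert (star(A, A) == A).all()       # A idempotent, so e(B * C) = A
assert (star(B, B) == B).all()       # B idempotent, so e(B) = B
assert (star(C, C) == -C).all()      # C * C = -C ...
assert (star(-C, -C) == -C).all()    # ... and -C is idempotent, so e(C) = -C
assert (star(B, -C) == -A).all()     # hence e(B) * e(C) = -A
assert (star(-A, -A) == A).all()     # and -A is not idempotent: (A3) fails
```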
**Definition 2.11**.: _Let \(S\) be a semigroup. If there exists \(\varphi:S\to B\) a semigroup homomorphism where \(B\) is a band (resp. semilattice), we say that \(S\) is a band (resp. semilattice) of the subsemigroups \(S_{e}:=\varphi^{-1}\{e\}\) with \(e\in B\)._
Note that in the above circumstances we have that
\[S=\bigcup\nolimits_{e\in B}S_{e}\]
where \(S_{e}\) is a subsemigroup, since for \(x,y\in S_{e}\) it follows that \(xy\in S_{e}\), as \(\varphi(xy)=\varphi(x)\varphi(y)=ee=e\).
We are now able to characterize assemblies in terms of bands of groups.
**Theorem 2.12**.: _A semigroup is an assembly if and only if it is a band of groups._
Proof.: Let \((A,\cdot)\) be an assembly; then by Proposition 2.6 and \((A_{3})\) the map \(x\mapsto e(x)\) is an homomorphism whose image is a subsemigroup, namely the band \(E(A)\).
Conversely, let \(A\) be a band of groups via an homomorphism of semigroups \(\varphi:A\to B\), where \(B\) is a band. If \(\varphi\) is the canonical map \(e(x)\) there is nothing left to show, since we may apply Proposition 2.6 to get that \((A_{1})\) and \((A_{2})\) hold and note that \((A_{3})\) just means that the map \(e\) is an homomorphism.
To treat the general case, let us show that, up to an isomorphism, we can reduce to the case \(\varphi=e\). Let us define \(\psi:B\to E(A)\) where \(\psi(b)\) is the identity element of the group \(\varphi^{-1}\{b\}\). Let us show that \(\psi\) is the inverse map of the restriction \(\varphi_{1}:E(A)\to B\) of \(\varphi\). In fact \(\varphi_{1}(\psi(b))=\varphi(\psi(b))=b\) by definition of \(\psi\). Moreover for each \(\varepsilon\in E(A)\) we have that \(\psi(\varphi_{1}(\varepsilon))\) is the identity element of the group \(\varphi^{-1}(\varphi(\varepsilon))\). On the other hand, \(\varepsilon\in\varphi^{-1}(\varphi(\varepsilon))\) and \(\varepsilon\) is idempotent. Thus \(\varepsilon\) is the identity element of \(\varphi^{-1}(\varphi(\varepsilon))\), hence \(\psi(\varphi_{1}(\varepsilon))=\varepsilon\).
Thus \(\psi=\varphi_{1}^{-1}\) is an homomorphism, as \(\varphi\) is by hypothesis. Then \(e=\psi\varphi\) is an homomorphism as well, as wished.
**Corollary 2.13**.: _For a semigroup \(A\) the following are equivalent:_
1. \(A\) _is an assembly whose magnitudes commute._
2. \(A\) _is an assembly whose magnitudes are central._
3. \(A\) _is a semilattice of groups._
Proof.: Observe that (1) and (2) are equivalent by Clifford's Lemma [1, Lemma 3.1]. If they hold, then (3) holds since \(e(x)\) is the desired homomorphism. Finally, (3) implies (1) via Theorem 2.12.
**Example 2.14**.: _If \(G\) is a group and \(S\) a semilattice, then \(S\times G\) is a possibly non-commutative semilattice of groups._
## 3 Subassemblies
In group theory it is possible to characterize the subgroups of a given group \((G,\cdot)\) as the nonempty subsets of \(G\) which are closed under multiplication and inversion. A similar characterization of the subassemblies of a given assembly holds.
**Proposition 3.1**.: _If \((A,\cdot,s,e)\) is an assembly and \(B\) is a non-empty subset of \(A\), then the following are equivalent._
1. \(\forall x,y\in B\ \ x\cdot s(y)\in B\)
2. \((B,\cdot_{B},s_{B},e_{B})\) _is an assembly, where \(\cdot_{B},s_{B},e_{B}\) are the restrictions to \(B\) of \(\cdot,s,e\), respectively._
_If these conditions hold, we say that \(B\) is a subassembly of \(A\). Moreover, if \(A_{e}\) is a Clifford component of \(A\) with \(e\in B\), then \(B_{e}=A_{e}\cap B\) is a Clifford component of \(B\)._
Proof.: It is clear that (2) implies (1). Assuming (1), we have to prove that \(B\) is closed under the maps \(\cdot,e,s\). If \(b\in B\), then \(e(b)=bs(b)\in B\), as wished. Moreover, \(s(b)=e(s(b))s(b)=e(b)s(b)\in B\). Finally if \(b_{1}\in B\), then \(b_{1}b=b_{1}s(s(b))\in B\). Therefore (2) holds.
Proving that a structure is a subassembly of a given assembly becomes much simpler using the previous result. We illustrate this with some relevant examples.
**Example 3.2**.: _The following are subassemblies of \((\mathbb{E},+)\)._
1. \((\mathbb{R},+)\)_, because_ \(\mathbb{R}\subset\mathbb{E}\) _and_ \((\mathbb{R},+)\) _is a group._
2. \(B=\{x+A:x\in\mathbb{R}\}\), _where \(A\) is a given neutrix. We have \(A\in B\) and \(B\subseteq\mathbb{E}\). If \(\alpha=a+A\), \(\beta=b+A\in B\) then \(\alpha-\beta=(a+A)-(b+A)=(a-b)+A\in B\)._
3. \((\mathcal{N},+)\), where \(\mathcal{N}\) is the class of all neutrices. Note that the class of all neutrices is nonempty because \(0\in\mathcal{N}\) and the difference of two neutrices is equal to the larger of the two.
4. \((\mathbb{E}\backslash\mathbb{R},+)\). Clearly \(\oslash\in\mathbb{E}\backslash\mathbb{R}\), hence \(\mathbb{E}\backslash\mathbb{R}\) is nonempty. Let \(x=a+A\), \(y=b+B\in\mathbb{E}\backslash\mathbb{R}\). Then \(x-y=(a-b)+\max\left(A,B\right)\in\mathbb{E}\backslash\mathbb{R}\).
5. \((A_{\rho},+)\), where \(\rho\in\mathbb{R}\) and \(A_{\rho}=\{x\in\mathbb{E}:x\subseteq\bigcup_{\mathrm{st}(n)}\left[-\rho^{n},\rho^{n}\right]\}\). Clearly \(\emptyset\neq A_{\rho}\subseteq\mathbb{E}\). Let \(x,y\in A_{\rho}\). Then there are standard \(m,n\) such that \(x\subseteq\left[-\rho^{m},\rho^{m}\right]\) and \(y\subseteq\left[-\rho^{n},\rho^{n}\right]\). Let \(p=\max\left\{m,n\right\}\). Then \(\left|x-y\right|\leq 2\max\left\{\left|x\right|,\left|y\right|\right\}\leq 2\rho^{p}\leq\rho^{p+1}\).
6. Let \((A,+)\) and \((B,\cdot)\) be assemblies. Let \((G,+)\) be a subassembly of \((A,+)\) and \((H,\cdot)\) be a subassembly of \((B,\cdot)\). Then \((G\times H,*)\) is a subassembly of \((A\times B,*)\).
7. If \(H\) is any subgroup of a group \(G\), then \(\mathcal{A}(H)\) is a subassembly of \(\mathcal{A}(G)\).
An important difference between assemblies and groups is that subassemblies need not all contain a universal neutral element, allowing both \((\mathbb{E}\backslash\mathbb{R},+)\) and \((\mathbb{R},+)\) to be subassemblies of \((\mathbb{E},+)\). This fact shows that, unlike what happens with groups, _it is possible for the intersection of two subassemblies of a given assembly to be empty_. Moreover, for \((B,\cdot),(C,\cdot)\) subassemblies of an assembly \((A,\cdot)\) it may happen that \(B\cup C\) is a subassembly of \(A\) with both \(B\nsubseteq C\) and \(C\nsubseteq B\). However, the following holds.
**Proposition 3.3**.: _Let \(B,C\) be subassemblies of an assembly \(A\). Then \(B\cap C\) is either empty or a subassembly of \(A\). Moreover, if \(A\) is commutative, the set \(B\cdot C\) is a subassembly of \(A\), where the product is meant to be defined in the power-semigroup \(P(A)\)._
Proof.: Suppose that \(B\cap C\) is nonempty. Let \(x,y\in B\cap C\). Then, because \(B\) and \(C\) are assemblies, \(x\cdot y^{-1}\in B\) and \(x\cdot y^{-1}\in C\), hence \(x\cdot y^{-1}\in B\cap C\). Therefore \((B\cap C,\cdot)\) is a subassembly of \((A,\cdot)\), by Proposition 3.1.
Suppose now that \(x,y\in B\cdot C\). Then there are \(u,v\in B\) and \(r,t\in C\), such that \(x=u\cdot r\) and \(y=v\cdot t\). Because \(B\) and \(C\) are assemblies, \(u\cdot v^{-1}\in B\), \(r\cdot t^{-1}\in C\) and then \(x\cdot y^{-1}=(u\cdot r)\cdot(v\cdot t)^{-1}=(u\cdot v^{-1})\cdot(r\cdot t^{-1 })\in B\cdot C\), by formulas (F) and because \(A\) is commutative. Hence \((B\cdot C,\cdot)\) is a subassembly of \((A,\cdot)\), by Proposition 3.1.
**Example 3.4**.: _By Proposition 3.3 we have that \(A=\left\{x+N:x\in\mathbb{Z},N\in\mathcal{N}\right\},\ B=\left\{x+\oslash:x\in \mathbb{Q}\right\}\) and \(C=\{x\in\mathbb{Z}:x\text{ is limited}\}\) are assemblies because \(A=\mathbb{Z}+\mathcal{N}\), \(B=\mathbb{Q}+\oslash\) and \(C=\mathbb{Z}\cap\mathcal{L}\)._
**Proposition 3.5**.: _The subset \(Z(A)\) of elements of an assembly \(A\) commuting with all elements of \(A\) is a subassembly of \(A\) that we call the centre of \(A\)._
Proof.: If \(z,z^{\prime}\in Z(A)\) it is trivial that \(zz^{\prime}\in Z(A)\). Let us prove that \(z^{-1}\in Z(A)\). For each \(a\in A\), by formulas \((F)\) we have \(az^{-1}=((az^{-1})^{-1})^{-1}=(za^{-1})^{-1}=(a^{-1}z)^{-1}=z^{-1}a\) and we may apply Proposition 3.1.
## 4 Homomorphisms
A homomorphism \(\varphi\) between two assemblies \((A,\cdot_{A},s_{A},e_{A})\) and \((B,\cdot_{B},s_{B},e_{B})\) is expected to be a map \(\varphi:A\to B\) which respects the \(3\) given operations. However, in the case under consideration, this requirement is so easily fulfilled that we can proceed even with a slight abuse of notation, as in the next proposition.
**Proposition 4.1**.: _Let \(A\) and \(B\) be assemblies. If \(\varphi:A\to B\) is a semigroup homomorphism, i.e. if \(\varphi(xy)=\varphi(x)\varphi(y)\), for all \(x,y\in A\), then for each \(x\) in \(A\)_
1. \(\varphi(x^{0})=\varphi(x)^{0},\)__
2. \(\varphi(x^{-1})=\varphi(x)^{-1}.\)__
Proof.: We have \(\varphi(x)\varphi(x^{0})=\varphi(xx^{0})=\varphi(x)=\varphi(x^{0}x)=\varphi(x^{0 })\varphi(x)\), hence by the uniqueness of \(\varphi(x)^{0}\) we deduce \(\varphi(x)^{0}=\varphi(x^{0})\). The second part follows in a similar way since \(\varphi(x)\varphi(x^{-1})=\varphi(xx^{-1})=\varphi(x^{0})=\varphi(x)^{0}= \varphi(x^{0})=\varphi(x^{-1}x)=\varphi(x^{-1})\varphi(x)\).
Thus the homomorphic image of a magnitude is a magnitude and the homomorphic image of the inverse of a given element is the inverse of the homomorphic image of that same element. These properties generalize similar properties for group homomorphisms.
**Example 4.2**.: _The following are assembly homomorphisms (sometimes in additive notation):_
1. _All group homomorphisms, because every group is an assembly._
2. _The identity map_ \(f(x)=x\) _and the map_ \(e(x)=x^{0}\) _are assembly homomorphisms._
3. _Let_ \(A\) _be a neutral. Then_ \(f:(\mathbb{E},+)\rightarrow(\mathbb{E},+)\) _such that_ \(f(x)=x+A\) _is a homomorphism. In fact, if_ \(x,y\in\mathbb{E}\)_, then_ \[f(x+y)=(x+y)+A=x+y+A+A=(x+A)+(y+A)=f(x)+f(y).\]
4. _The function_ \(f:(\mathbb{E},+)\rightarrow(\mathbb{E},+)\) _such that_ \(f(x)=\omega x\) _for some_ \(\omega\simeq+\infty\) _is a homomorphism. Let_ \(x,y\in\mathbb{E}\)_. Then, using_ _[_2_, Lemma 5.12]__,_ \[f(x+y)=\omega(x+y)=\omega x+\omega y=f(x)+f(y).\]
5. _The function_ \(f:(\mathcal{N},+)\rightarrow(\mathcal{N},+)\)_,_ \(f(x)=\oslash x\)_, where_ \(\oslash\) _is the external set of infinitesimal numbers. Let_ \(x,y\in\mathcal{N}\)_. Using_ _[_2_, Corollary 5.10]__,_ \[f(x+y)=\oslash(x+y)=\oslash x+\oslash y=f(x)+f(y).\]
6. _The function_ \(f:(\mathcal{N},+)\rightarrow(\mathbb{E}\setminus\left\{0\right\},\cdot)\) _such that_ \(f(x)=\exp_{S}\left(x\right)\equiv[-e^{x},e^{x}]\) _(see_ _[_7_, Def. 1.4.2]__). Let_ \(A,B\in\mathcal{N}\)_. Then_ \[\exp_{S}\left(A+B\right) = [(-e^{A})\,e^{B},\left(e^{A}\right)e^{B}]=[-e^{A},e^{A}]e^{B}\] \[= [-e^{A},e^{A}][-e^{B},e^{B}]=\exp_{S}\left(A\right)\exp_{S}\left(B \right).\]
7. _If_ \(G\) _is a group, the function_ \[(g,N)\in G\times n(G)\mapsto gN\in\mathcal{A}(G)\] _is a possibly non-injective epimorphism of assemblies, where from_ \(g_{1}N_{1}=g_{2}N_{2}\) _it follows that_ \(N_{1}=N_{2}\)_, by applying the function_ \(e\)_._
**Counterexample 4.3**.: _Obvious examples of functions which are not homomorphisms are nonlinear functions. Consider for instance the function_ \(f:(\mathbb{E},+)\rightarrow(\mathbb{E},+)\) _such that_ \(f(x)=x^{2}\)_. In fact, if_ \(x=-1+\oslash\) _and_ \(y=1+\oslash\) _then_
\[f(x+y)=f\left(\oslash\right)=\oslash^{2}\]
_and_
\[f\left(x\right)+f\left(y\right)=(1+\oslash)^{2}+(-1+\oslash)^{2}=(1+\oslash)+(1+ \oslash)=2+\oslash.\]
_However there are also functions which may appear to be linear but are really not. As such one may not extend Example 4.2.5 to the whole of \(\mathbb{E}\):_
\[\oslash(1-1)=0,\]
_while_
\[\oslash 1-\oslash 1=\oslash.\]
**Proposition 4.4**.: _Let \(\varphi:A\to B\) be an assembly homomorphism. Then \(\varphi(A)\) is a subassembly of \(B\)._
Proof.: Apply Propositions 3.1 and 4.1.
Thus, in studying assembly homomorphisms, there is not much loss of generality in assuming that these are onto. Furthermore, since (via the Clifford decomposition) any assembly \(A\) may be partitioned into disjoint groups and the homomorphic image of a group is likewise a group, one may regard any assembly homomorphism
\[A=\bigcup_{e\in E(A)}A_{e}\ \stackrel{{\varphi}}{{\longrightarrow}}\ B=\bigcup_{e\in E(B)}B_{e}\]
as a disjoint union of group homomorphisms \(A_{e}\ \stackrel{{\varphi_{e}}}{{\longrightarrow}}\ B_{\varphi(e)}\), where, for \(e\in E(A)\), we define \(\varphi_{e}\) to be the \(e\)-th component of \(\varphi\). We then have:
**Proposition 4.5**.: _An assembly homomorphism is into (resp. onto) if and only if all of its components are into (resp. onto)._
Therefore, if we denote by \(\mathrm{Ker}(\varphi)\) the usual kernel of \(\varphi\) we have that it is the union of the kernels of its components \(\varphi_{e}\). By the above this is a subassembly and we have:
\[\mathrm{Ker}(\varphi)=\bigcup_{e\in E(A)}\left\{x\in A:\varphi(x)=\varphi(e)\right\}=\bigcup_{e\in E(A)}\mathrm{Ker}(\varphi_{e})=\varphi^{-1}\big{(}\varphi(E(A))\big{)}\supseteq E(A)\]
**Corollary 4.6**.: _A homomorphism of assemblies \(\varphi\) is injective if and only if \(\mathrm{Ker}(\varphi)=E(A)\), i.e. its kernel coincides with the set of idempotents of \(A\)._
**Example 4.7**.: _If \(A\) is an assembly and its semilattice \(E(A)\) of idempotents has maximum \(m\), then there are no non-trivial homomorphisms \(\varphi:A\to G\) to any group \(G\). This holds in particular for the assembly of cosets \(A=\mathcal{A}(G)\)._
Proof.: By Proposition 4.1, the element \(\varphi(m)\) must be idempotent, but in \(G\) there is only one idempotent: its unique neutral element \(1_{G}\). Then \(\varphi(a)=\varphi(a)\cdot 1_{G}=\varphi(a)\varphi(m)=\varphi(am)=\varphi(m)=1_{G}\), for all \(a\in A\).
### Acknowledgments
The second author acknowledges the support of FCT - Fundação para a Ciência e a Tecnologia under the projects:
UIDP/04561/2020 and UIDP/04674/2020, and the research centers CMAFcIO - Centro de Matemática, Aplicações Fundamentais e Investigação Operacional and CIMA - Centro de Investigação em Matemática e Aplicações.
|
2309.04313 | Fast, low-loss all-optical phase modulation in warm rubidium vapour | Low-loss high-speed switches are an integral component of future photonic
quantum technologies, with applications in state generation, multiplexing, and
the implementation of quantum gates. Phase modulation is one method of
achieving this switching, but existing optical phase modulators either achieve
high bandwidth or low loss, but not both. We demonstrate fast
($100\,\mathrm{MHz}$ bandwidth), low-loss ($83\pm2\%$ transmission) phase
shifting ($\Delta\phi = (0.90\pm0.05)\pi$) in a signal field, induced by a
control field, and mediated by the two-photon $5S_{1/2} \rightarrow{} 5P_{3/2}
\rightarrow{} 5D_{5/2}$ transition in $^{87}\text{Rb}$ vapour. We discuss
routes to enhance both performance and scalability for application to a range
of quantum and classical technologies. | William Davis, Paul Burdekin, Tabijah Wasawo, Sarah E Thomas, Peter J Mosley, Joshua Nunn, Cameron McGarry | 2023-09-08T13:19:19Z | http://arxiv.org/abs/2309.04313v2 | # Fast, low-loss all-optical phase modulation in warm rubidium vapour
###### Abstract
High-speed switching with low loss would be a versatile tool for photonic quantum technologies, with applications in state generation, multiplexing, and the implementation of quantum gates. Phase modulation is one method of achieving this switching, but existing optical phase modulators either achieve high bandwidth or low loss, but not both. We demonstrate fast (100 MHz bandwidth), low-loss (74(2) % transmission) phase shifting (\(\Delta\phi=(0.90(5))\pi\)) in a signal field, induced by a control field, and mediated by the two-photon \(5S_{1/2}\to 5P_{3/2}\to 5D_{5/2}\) transition in \({}^{87}\)Rb vapour. We discuss routes to enhance both performance and scalability for application to a range of quantum and classical technologies.
## I Introduction
Photonics has revolutionised telecommunications since the development of fibre optics in the 1980s. Photonic data buses are supplanting electronics in high performance computing [1], and more recently photonic platforms for machine learning are emerging [2]. Looking forwards, photonics can provide a platform for communication with enhanced security by quantum key distribution [3] and support the transfer of quantum information between nodes [4]. All of these applications require high speed switching which can be achieved by phase modulation of an optical signal, and fibre-integrated electro-optical modulators are commercially mature. Nevertheless the insertion losses of these devices (typically 3 dB) add a practical overhead: mitigating these losses requires increased input power, intermediate amplifiers, and waste heat management [5]. Further, increasing demand on switching speeds could lead to the obsolescence of existing semiconductor-based telecommunications devices [6]. More efficient technologies for optical modulation are thus desirable across a range of application areas.
The primary motivation for our work in this paper is in the area of photonic quantum computing [7], where engineered non-classical optical states are used to solve computational problems that are intractable with classical (i.e. non-quantum) resources. Photonic quantum computing is appealing for a number of reasons, including room-temperature operation of all or many components, high clock-rates, high connectivity, insensitivity to stray fields and modular construction. But a key technical challenge remains: the requirement to switch and dynamically re-route photons with high speed and extremely low loss. This is an essential stage in a variety of processes for photonic quantum computing, such as implementing: loop memories [8; 9], synchronisation [10] or multiplexing [11; 12; 13] of single photon sources and demultiplexing for graph state generation [14]. Amplification destroys quantum coherence and so cannot be used to mitigate losses in a quantum system. The lifetime of photons in a waveguide is limited, and so high bandwidth is required for scalability. For these reasons, quantum systems have extremely stringent tolerances for speed and loss [15], which motivates an exploration of alternative platforms that could ultimately deliver better performance than current fibre-integrated electro-optic modulators.
In this letter, we describe and demonstrate efficient, all-optical phase modulation of light. Ideal implementation of phase modulation (a phase shift of \(\pi\) radians with no loss) would be equivalent to implementation of an optical switch, by embedding the phase modulator in one arm of a Mach-Zehnder interferometer. Our scheme, depicted in Fig. 1 (a), takes advantage of the \(5S_{1/2}\to 5P_{3/2}\to 5D_{5/2}\) two-photon transition in \({}^{87}\)Rb. This is the same transition used in electromagnetically-induced transparency (EIT) optical control [16; 17] and memory [18] schemes. A weak signal field, detuned from resonance with the \(5S_{1/2}\to 5P_{3/2}\) transition by frequency \(\Delta_{s}\), counter-propagates through a \({}^{87}\)Rb vapour cell with
Figure 1: (a) Ladder scheme used for phase modulation of weak signal field (red) by presence of strong control field (blue) detuned from \(5S_{1/2}\to 5P_{3/2}\) and \(5P_{3/2}\to 5D_{5/2}\) transitions by \(\Delta_{s}\) and \(\Delta_{c}\) respectively. (b) Transmission with control off (black, solid) and on (orange, dotted) and phase shift (red, dashed) through \({}^{87}\)Rb vapour for various \(\Delta_{s}\), as determined by the theoretical model described in the main text. Control detuning is fixed at \(\Delta_{c}=-1.6\) GHz. We identify the experimental region of interest (shaded), where there is low-loss for control on and off, as well as a high phase shift. Note also the two-photon absorption feature at \(\Delta_{s}=-6\) GHz, which is away from \(\Delta_{s}=-\Delta_{c}\) due to the a.c. Stark-shift. |
2309.12565 | Modeling Spatiotemporal Periodicity and Collaborative Signal for
Local-Life Service Recommendation | Online local-life service platforms provide services like nearby daily
essentials and food delivery for hundreds of millions of users. Different from
other types of recommender systems, local-life service recommendation has the
following characteristics: (1) spatiotemporal periodicity, which means a user's
preferences for items vary from different locations at different times. (2)
spatiotemporal collaborative signal, which indicates similar users have similar
preferences at specific locations and times. However, most existing methods
either focus on merely the spatiotemporal contexts in sequences, or model the
user-item interactions without spatiotemporal contexts in graphs. To address
this issue, we design a new method named SPCS in this paper. Specifically, we
propose a novel spatiotemporal graph transformer (SGT) layer, which explicitly
encodes relative spatiotemporal contexts, and aggregates the information from
multi-hop neighbors to unify spatiotemporal periodicity and collaborative
signal. With extensive experiments on both public and industrial datasets, this
paper validates the state-of-the-art performance of SPCS. | Huixuan Chi, Hao Xu, Mengya Liu, Yuanchen Bei, Sheng Zhou, Danyang Liu, Mengdi Zhang | 2023-09-22T01:34:10Z | http://arxiv.org/abs/2309.12565v1 | # Modeling Spatiotemporal Periodicity and Collaborative Signal for Local-Life Service Recommendation
###### Abstract.
Online local-life service platforms provide services like nearby daily essentials and food delivery for hundreds of millions of users. Different from other types of recommender systems, local-life service recommendation has the following characteristics: (1) _spatiotemporal periodicity_, which means a user's preferences for items vary from different locations at different times. (2) _spatiotemporal collaborative signal_, which indicates similar users have similar preferences at specific locations and times. However, most existing methods either focus on merely the spatiotemporal contexts in sequences, or model the user-item interactions without spatiotemporal contexts in graphs. To address this issue, we design a new method named SPCS in this paper. Specifically, we propose a novel _spatiotemporal graph transformer_ (SGT) layer, which explicitly encodes relative spatiotemporal contexts, and aggregates the information from multi-hop neighbors to unify spatiotemporal periodicity and collaborative signal. With extensive experiments on both public and industrial datasets, this paper validates the state-of-the-art performance of SPCS.
Keywords: Spatiotemporal Periodicity, Collaborative Signal, Recommendation
Most existing sequential-based methods utilize the spatiotemporal contexts between current and future steps to capture the transitional regularities in sequences. However, they ignore the _spatiotemporal collaborative signal_ and fail to capture preferences from similar users. Another line of research is the graph-based recommender. Earlier works (Gendelman et al., 2017; Wang et al., 2018) simply adopt GCN layers on the user-item interaction graph for collaborative filtering, ignoring the spatiotemporal contexts. Recent works (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) aim to utilize spatial or temporal information in graphs. However, these methods either focus solely on the temporal information, or consider merely the spatial information (location), and so fail to model the _spatiotemporal periodicity_. In short, existing solutions leave a gap in simultaneously modeling the spatiotemporal periodicity and collaborative signal for local-life service recommendation.
To address this issue, we propose a novel method, Modeling **S**patiotemporal **P**eriodicity and **C**ollaborative **S**ignal (SPCS), in this paper. First, to capture the spatiotemporal contexts in graphs, we design the encoding layer, which explicitly encodes the relative time interval as _temporal encoding_ and the relative spatial distance as _spatial encoding_. Second, to simultaneously model the spatiotemporal periodicity and collaborative signal, we design a novel _spatiotemporal graph transformer_ (SGT) layer, which utilizes the two encodings and aggregates the spatiotemporal information from multi-hop neighbors. The main contributions of this paper are as follows:
* We propose a novel SPCS method to simultaneously model the spatiotemporal periodicity and collaborative signal for local-life service recommendation. Specifically, we capture the spatiotemporal contexts in the encoding layer, and then aggregate this information from multi-hop neighbors in the spatiotemporal graph transformer (STG) layer.
* We conduct extensive experiments on both real-world public and industrial datasets. Experimental results demonstrate the effectiveness of our proposed SPCS.
## 2. Related Works
**Sequential-based Recommenders**. Most of the earlier works like GRU4Rec (Wang et al., 2018) and TiSASRec (Wang et al., 2018) adopt sequential methods, such as RNNs (Wang et al., 2018) and self-attention (Wang et al., 2018), to capture the temporal evolution of user preferences. Besides, some other works (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) like ST-RNN (Wang et al., 2018) and STAN (Wang et al., 2018) aim to utilize spatiotemporal contexts between current and future steps to capture the transitional regularities. However, these methods ignore the _spatiotemporal collaborative signal_, which is essential to capture preferences from similar users at specific locations and times.
**Graph-based Recommenders**. Early works like NGCF (Wang et al., 2018) and LightGCN (Gendelman et al., 2017) simply adopt GCN layers on the user-item interaction graph without using spatiotemporal contexts. Later works (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) exploit the extensive collaborative signals for item-item relations with location information. Some other works (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) like TGSRec (Wang et al., 2018) and TGN (Wang et al., 2018) attempt different utilizations of time intervals with time-decaying functions to capture the temporal collaborative signal. However, these methods fail to fully utilize the spatiotemporal contexts and ignore the _spatiotemporal periodicity_. Therefore, we aim to simultaneously model the spatiotemporal periodicity and collaborative signal for local-life service recommendation.
## 3. Methodology
In this section, we first present related terms and formalize the problem of local-life service recommendation, then introduce the three main parts of our SPCS: (1) _encoding layer_, which explicitly encodes the spatiotemporal contexts as temporal encoding and spatial encoding. (2) _spatiotemporal graph transformer layer_, which combines two encodings and aggregates the information from multi-hop neighbors. (3) the prediction and optimization. Figure 2 illustrates the overall structure of SPCS.
### Problem Formulation
Definition 1 ().: _Spatiotemporal User-Item Graph is defined as \(\mathcal{G}=\{\mathcal{U},\mathcal{I},\mathcal{E}\}\), where \(\mathcal{U}\) and \(\mathcal{I}\) are the sets of users and items, and \(\mathcal{E}\) is a set of spatiotemporal edges. Each edge \(e_{u,i}^{t}\in\mathcal{E}\) is denoted as a quintuple, \(e_{u,i}^{t}=(u,i,t,p_{u}^{t},p_{i})\), where \(u\in\mathcal{U}\), \(i\in\mathcal{I}\), \(t\in\mathbb{R}^{+}\), \(\{p_{u}^{t},p_{i}\}\subset\mathcal{P}\), and \(\mathcal{P}\) is a set of locations with latitude and longitude. Each \((u,i,t,p_{u}^{t},p_{i})\) means that a user \(u\) will interact with item \(i\) at time \(t\), traveling from location \(p_{u}^{t}\) to location \(p_{i}\). We denote the neighbor set for user \(u\) in the time interval \([0,t]\) as \(\mathcal{N}_{u}^{t}=\{(j,t_{j},p_{j})\mid e_{u,j}^{t_{j}}\in\mathcal{E},t_{j}<t\}\)._
Definition 2 ().: _Local-Life Service Recommendation. Given a spatiotemporal user-item graph \(\mathcal{G}\) and a new query tuple \((u,t,p_{u}^{t})\), local-life service recommendation aims to recommend item list that user \(u\) would be interested at time \(t\) and location \(p_{u}^{t}\)._
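These definitions translate directly into a simple data structure. Below is a minimal Python sketch (ours; field and function names are illustrative) of the edge quintuple and the neighbor set \(\mathcal{N}_{u}^{t}\):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """Spatiotemporal edge (u, i, t, p_u^t, p_i) of Definition 1."""
    u: int                      # user id
    i: int                      # item id
    t: float                    # interaction timestamp
    p_u: tuple[float, float]    # user location (lat, lon) at time t
    p_i: tuple[float, float]    # item location (lat, lon)

def neighbors(edges: list[Edge], u: int, t: float):
    """N_u^t: items user u interacted with strictly before time t."""
    return [(e.i, e.t, e.p_i) for e in edges if e.u == u and e.t < t]
```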
### Encoding Layer
#### 3.2.1. **User/Item Embedding**
To maintain each user's (item's) history in a compressed format, we adopt a widely-used memory mechanism (Wang et al., 2018). First, the memory state for a user \(u\) (item \(i\)) at time \(t\) is denoted as \(\mathbf{s}_{u}^{t}\) (\(\mathbf{s}_{i}^{t}\)) \(\in\mathbb{R}^{c}\), initialized as an all-zero vector and updated by the memory mechanism. Then, we define \(\mathbf{h}_{u}^{(\ell-1),t}\) as the hidden embedding that serves as the input to the \(\ell\)-th layer for user \(u\) at time \(t\) (the same applies to the hidden embedding for item \(i\)). Note that, in the first layer, \(\mathbf{h}_{u}^{(0),t}=\mathbf{s}_{u}^{t}\). When \(\ell>1\), it is generated from the previous layer.
#### 3.2.2. **Temporal Encoding (TE)**
Inspired by recent works (Wang et al., 2018; Wang et al., 2018), the periodicity can be reflected by the relative time intervals between a user's different interactions. We design the temporal encoding function \(\phi(\cdot,\cdot)\rightarrow\mathbb{R}^{c_{t}}\) based on Bochner's Theorem (Bochner, 2010) to explicitly encode the time intervals into embeddings. Specifically, given two time points \(t_{1}\) and \(t_{2}\), we implement \(\phi(\cdot,\cdot)\) as:
\[\phi(t_{1},t_{2})=\left[\cos(\omega_{1}[t_{1}-t_{2}]+b_{1}),\cdots,\cos(\omega_{c_{t}}[t_{1}-t_{2}]+b_{c_{t}})\right] \tag{1}\]
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**Type** & **Model** & **Collaborative** & **Temporal Info** & **Spatial Info** \\ \hline \multirow{4}{*}{Sequential-based} & ST-RNN & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ & STAN & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ & TiSASRec & \(\times\) & \(\checkmark\) & \(\times\) \\ & SLRC & \(\times\) & \(\checkmark\) & \(\times\) \\ \hline \multirow{4}{*}{Graph-based} & LightGCN & \(\checkmark\) & \(\times\) & \(\times\) \\ & SAE-NAD & \(\checkmark\) & \(\times\) & \(\checkmark\) \\ & TGSRec & \(\checkmark\) & \(\checkmark\) & \(\times\) \\ & TGN & \(\checkmark\) & \(\checkmark\) & \(\times\) \\ \hline Our method & **SPCS** & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \end{tabular}
\end{table}
Table 1. Comparison of related methods.
where \(\cos(\cdot)\) is the cosine function. \(\mathbf{\omega}=[\omega_{1},\cdots,\omega_{c_{t}}]\) and \(\mathbf{b}=[b_{1},\cdots,b_{c_{t}}]\) are learnable weights and bias of linear transformation for the time interval.
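As a concrete illustration, here is a minimal PyTorch sketch of Eq. (1) (ours, not from the paper, though Section 4.1 reports a PyTorch implementation; the class name and initialisation scheme are assumptions):

```python
import torch
import torch.nn as nn

class TemporalEncoding(nn.Module):
    """Eq. (1): map a relative time interval t1 - t2 to R^{c_t}."""
    def __init__(self, c_t: int):
        super().__init__()
        self.omega = nn.Parameter(torch.randn(c_t))  # learnable frequencies
        self.bias = nn.Parameter(torch.zeros(c_t))   # learnable phases

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        # t1, t2: shape (...,); output: shape (..., c_t)
        dt = (t1 - t2).unsqueeze(-1)
        return torch.cos(dt * self.omega + self.bias)

phi = TemporalEncoding(c_t=128)
enc = phi(torch.tensor([10.0, 25.0]), torch.tensor([3.0, 3.0]))  # (2, 128)
```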
#### 3.2.3. **Spatial Encoding (SE)**
To quantify the change of locations between different users and items, we design the spatial encoding function as \(\psi(\cdot,\cdot)\rightarrow\mathbb{R}\) to derive the geographical weight. Specifically, given two locations \(p_{1}\) and \(p_{2}\), we implement \(\psi(p_{1},p_{2})\) as:
\[\psi(p_{1},p_{2})=\frac{1}{f\left(\text{Haversine}(p_{1},p_{2})/\tau\right)+1}, \tag{2}\]
where \(\text{Haversine}(\cdot,\cdot)\) denotes the Haversine formula (Haversine, 1977), which is widely used to calculate distances from latitude and longitude. \(\tau\) is used to control the decay rate of the weight, and \(f(\cdot)\) is a mapping function, such as the identity or an exponential function. Note that other effective spatial encodings can also be explored and used in future work.
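A small Python sketch of Eq. (2) follows (ours; the function names and kilometre units are illustrative), with the identity \(f\) and \(\tau=1\) that Section 4.1 later reports as the best-performing choice:

```python
import math

def haversine_km(p1, p2, radius_km: float = 6371.0) -> float:
    """Great-circle distance between (lat, lon) pairs given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

def spatial_encoding(p1, p2, tau: float = 1.0, f=lambda d: d) -> float:
    """Eq. (2): geographical weight in (0, 1]; nearby pairs get weight near 1."""
    return 1.0 / (f(haversine_km(p1, p2) / tau) + 1.0)
```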
### Spatiotemporal Graph Transformer Layer
In this section, we will introduce the spatiotemporal graph transformer (SGT) layer in two parts: (1) construction of query, key, and value; (2) spatiotemporal self-attention. In the following, we take the calculation for user \(u\) at time \(t\) as an example.
#### 3.3.1. **Construction of Query, Key and Value**
To unify the spatiotemporal information and collaborative signal, we construct the input information of each SGT layer as the combination of hidden node embeddings, temporal encoding, and spatial encoding. Specifically, given the query user \((u,t,p_{u}^{t})\), we construct the query vector \(\mathbf{q}^{(\ell),t}\) for user \(u\) itself at \(\ell\)-th layer as:
\[\mathbf{q}^{(\ell),t}=\Big{[}\,\mathbf{h}_{u}^{(\ell-1),t}\,\oplus\,\phi(t,t)\otimes\psi(p_{u}^{t},p_{u}^{t})\,\Big{]}, \tag{3}\]
where the layer number \(\ell\in\{1,\cdots,L\}\). Here \(\oplus\) is the vector concatenation operation and \(\otimes\) is the product operation in this paper. Other operations including summation are possible. In addition to query user \(u\) itself, we also propagate spatiotemporal collaborative information from its neighbors. We sample the \(M\) most recent neighbors \((j_{1},t_{1},p_{j_{1}}),(j_{2},t_{2},p_{j_{2}}),\cdots\) of user \(u\) from \(\mathcal{N}_{u}^{t}\). Similar to the construction of the query vector, the key/value matrix for user \(u\)'s most recent neighbors can be formulated as:
\[\mathbf{K}^{(\ell),t}=\mathbf{V}^{(\ell),t}=\left[\begin{array}{c}\mathbf{h}_{j_{1}}^{(\ell-1),t_{1}}\,\oplus\,\phi(t,t_{1})\otimes\psi(p_{u}^{t},p_{j_{1}})\\ \mathbf{h}_{j_{2}}^{(\ell-1),t_{2}}\,\oplus\,\phi(t,t_{2})\otimes\psi(p_{u}^{t},p_{j_{2}})\\ \cdots\end{array}\right]. \tag{4}\]
#### 3.3.2. **Spatiotemporal Self-Attention**
Then, we adopt a self-attention mechanism to propagate the information as follows:
\[\mathbf{h}_{u}^{(\ell),t}=\left(\mathbf{W}_{v}^{(\ell)}\mathbf{V}^{(\ell),t}\right) \cdot\sigma\left(\frac{[\mathbf{W}_{k}^{(\ell)}\mathbf{K}^{(\ell),t}]^{\top}[ \mathbf{W}_{q}^{(\ell)}\mathbf{q}^{(\ell),t}]}{\sqrt{c+c_{t}}}\right), \tag{5}\]
where \(\sigma(\cdot)\) is the softmax function. \(\mathbf{W}_{q}^{(\ell)},\mathbf{W}_{k}^{(\ell)},\mathbf{W}_{v}^{(\ell)}\in \mathbb{R}^{c\times(c+c_{t})}\) are learnable transformation matrices at \(\ell\)-th layer.
By stacking \(L\) layers, we can obtain the final embedding for user \(u\) as \(\mathbf{h}_{u}^{t}=\mathbf{h}_{u}^{(L),t}\). Analogously, for item \(i\), we swap the user information for item information and change the neighbor information in Eq. (3), (4) and (5) according to the user-item pairs. Thus, \(\mathbf{h}_{i}^{t}\) for item \(i\) can also be calculated. The time complexity of the proposed SPCS is \(O\left(|\mathcal{U}\cup\mathcal{I}|\cdot L\cdot(M(c+c_{t})c+Mc)\right)\).
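To fix ideas, here is a schematic PyTorch sketch (ours) of one SGT step, Eqs. (3)-(5), for a single query user. We read \(\otimes\) as scaling the concatenated vector by the scalar spatial weight (scaling only the temporal encoding is an equally valid reading of Eq. (3)), and we elide the memory update and neighbor sampling:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGTLayer(nn.Module):
    """One spatiotemporal graph transformer step, Eqs. (3)-(5)."""
    def __init__(self, c: int, c_t: int):
        super().__init__()
        self.W_q = nn.Linear(c + c_t, c, bias=False)
        self.W_k = nn.Linear(c + c_t, c, bias=False)
        self.W_v = nn.Linear(c + c_t, c, bias=False)
        self.scale = (c + c_t) ** 0.5

    def forward(self, h_u, te_self, h_nbr, te_nbr, se_nbr):
        # h_u: (c,) query user's hidden state; te_self: (c_t,) = phi(t, t)
        # h_nbr: (M, c), te_nbr: (M, c_t), se_nbr: (M,) for the M neighbors
        q = torch.cat([h_u, te_self], dim=-1)  # Eq. (3); psi(p, p) = 1 under the identity f
        kv = torch.cat([h_nbr, te_nbr], dim=-1) * se_nbr.unsqueeze(-1)  # Eq. (4)
        attn = F.softmax(self.W_k(kv) @ self.W_q(q) / self.scale, dim=0)  # (M,)
        return self.W_v(kv).t() @ attn  # Eq. (5): new (c,) embedding
```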
### Prediction and Optimization
For each \((u,i,t,p_{u}^{t},p_{i})\), we can calculate the affinity score \(y_{u,i}^{t}=\text{MLP}(\mathbf{h}_{u}^{t}\|\mathbf{h}_{i}^{t})\) between user \(u\) and item \(i\) at time \(t\). To optimize parameters, we utilize the widely-used pairwise BPR loss (Kang et al., 2018) for top-K recommendation, which can be formulated as:
\[\mathcal{L}=\sum_{u\in\mathcal{U}}\sum_{(u,i,i^{\prime},t)\in\mathcal{O}}-\ln\sigma\left(y_{u,i}^{t}-y_{u,i^{\prime}}^{t}\right)+\lambda\|\mathbf{\theta}\|_{2}^{2}, \tag{6}\]
where \(\mathcal{O}=\{(u,i,i^{\prime},t)\mid e_{u,i}^{t}\in\mathcal{E},e_{u,i^{\prime}}^{t}\notin\mathcal{E}\}\) denotes the pairwise training data, and \(\sigma(\cdot)\) is the sigmoid function. \(\lambda\|\mathbf{\theta}\|_{2}^{2}\) denotes the \(L_{2}\) regularization for addressing over-fitting.
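A minimal sketch of the training objective in Eq. (6) (ours; the negative-sampling strategy for forming \(\mathcal{O}\) is not spelled out in the paper, so we simply assume paired positive/negative scores):

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor,
             params, lam: float = 1e-5) -> torch.Tensor:
    """Eq. (6): pairwise BPR loss with L2 regularisation.

    pos_scores[k] = y_{u,i}^t and neg_scores[k] = y_{u,i'}^t for the k-th
    training tuple (u, i, i', t) in O.
    """
    loss = -F.logsigmoid(pos_scores - neg_scores).sum()
    reg = lam * sum(p.pow(2).sum() for p in params)
    return loss + reg
```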
## 4. Experiment
### Experimental Settings
**Dataset.** We conduct experiments on two real-world datasets: Gowalla-Food (Gowalla-Food, 2018) and Meituan. Gowalla-Food comes from a location-based service network and is widely used in recommendation research. The Meituan dataset is collected from one of the largest local-life service platforms in China. Each interaction contains a user-ID, item-ID, timestamp, and GPS locations. For each dataset, we use the 10-core setting in (Gowalla-Food, 2018) and split chronologically into train, validation, and test sets with an 8:1:1 ratio. This means we use the most recent 10% of interactions for testing, which can be regarded as a multi-step sequential recommendation task. The statistics of all datasets are summarized in Table 2.
**Baselines.** We compare our SPCs with baselines in two categories (Table 1), including: (1) sequential-based methods, where ST-RNN (Gowalla-Food, 2018), STAN (Gowalla-Food, 2018), TISARec (Gowalla-Food, 2018), and SLRC (Kang et al., 2018) utilize spatial or temporal information to capture the dynamic preferences of users; (2) graph-based methods, where LightGCN (Gowalla-Food, 2018), SAE-NAD (Gross et al., 2018), TGSRec (Gowalla-Food, 2018), and TGN (Gowalla-Food, 2018) capture the collaborative signal and utilize the temporal information from multi-hop neighbors.
**Implementation and Evaluation.** We implement our SPCS with PyTorch (Kipf and Welling, 2018) and conduct experiments on Tesla V100 (32GB). We search the node dimension \(c\) and time dimension \(c_{t}\) from {64, 128, 256}.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Dataset** & **\#User** & **\#Item** & **\#Instance** & **Timespan** \\ \hline Gowalla-Food & 15,058 & 26,594 & 553,121 & 2009.1.21-2010.3.11 \\ Meituan & 17,862 & 26,236 & 649,101 & 2022.2.14-2022.3.28 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Statistics of Datasets.
Figure 2. The overall framework of our proposed SPCS.
The learning rate is selected from {1e-3, 1e-4, 1e-5}, and the layer number \(L\) is selected from {1, 2, 3}. For the recent-neighbor sampling number \(M\), we search from {10, 20, 30}. We fix the function \(f(\cdot)\) as the identity and search \(\tau\) from {1, 2, 5}. The best version is \(c=c_{t}=128\), lr = 1e-4, \(L=2\), \(M=20\), \(\tau=1\). For each interaction in the test set, we perform a full ranking (Beng et al., 2017) with all item candidates to evaluate the top-K recommendation performance, including Hit@K and NDCG@K, where \(K\in\{10,20\}\).
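For completeness, the full-ranking metrics can be sketched as follows (ours; we assume one ground-truth item per test interaction, as the full-ranking protocol suggests):

```python
import numpy as np

def hit_and_ndcg_at_k(rank: int, k: int):
    """Full-ranking metrics for one test interaction.

    rank: 1-based position of the ground-truth item among all candidates.
    """
    hit = 1.0 if rank <= k else 0.0
    ndcg = 1.0 / np.log2(rank + 1) if rank <= k else 0.0
    return hit, ndcg

# Both metrics are averaged over all test interactions, with K in {10, 20}.
```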
### Overall Performance
Table 3 shows the performance comparison between our SPCS and baselines. The observations from the table are: (1) SPCS consistently outperforms all the baselines on two datasets. In particular, SPCS improves over the strongest baseline _w.r.t_ Hit@10 by 0.1132 and 0.0961 on Gowalla-Food and Meituan datasets, respectively. The superiority of our SPCS lies in simultaneously modeling spatiotemporal periodicity with the encoding layer and spatiotemporal graph transformer layer. (2) In sequential-based methods, STAN performs best on Gowalla-Food dataset, while SLRC performs best on Meituan dataset. The reason is that SLRC adopts the Hawkes process to explicitly model the temporal periodicity on Meituan dataset. However, they ignore the spatiotemporal collaborative signal so that they perform worse than our SPCS. (3) TGN performs best in graph-based methods. This is because TGN captures collaborative signals with temporal information in dynamic graphs. However, it ignores the spatiotemporal periodicity, especially the spatial information, which makes it worse than our SPCS.
### Model Analysis of SPCS
#### 4.3.1. Ablation Study
We further conduct an ablation study with several variants to validate the contributions of the two encodings in SPCS: (1) _-SE_, in which spatial encoding is not used; (2) _-SE+L_, in which spatial encoding is not used but the recall is based on location within 5km; (3) _TE\(\rightarrow\)PE_, which replaces temporal encoding with position encoding. Figure 3 reports the performance of these variants on two datasets. Here, we can make the following observations: (1) variant _-SE_ suffers severe performance degradation on two datasets, which demonstrates the importance of spatial encoding in local-life service recommendation. (2) variant _-SE+L_ improves over variant _-SE_, but still performs worse than SPCS. This indicates the importance of spatial information, and that propagating this information in graphs leads to better performance. (3) The performance degradation of variant _TE\(\rightarrow\)PE_ also demonstrates the crucial role of temporal encoding.
#### 4.3.2. Case Study
To further investigate the effectiveness of our SPCS, we conduct a case study comparing SPCS with TGN on Meituan dataset. As shown in Figure 4, the black mark denotes the user's latest visited location, green marks denote the user's target items at different times, blue marks denote the top-5 predicted items, and red marks denote the hit items. The inner blue circle and outer green circle denote a 2km and 5km radius, respectively. From
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**Gowalla-Food**} & \multicolumn{4}{c}{**Meituan**} \\ \cline{2-10} & **Hit@10** & **Hit@20** & **NDCG@10** & **NDCG@20** & **Hit@10** & **Hit@20** & **NDCG@10** & **NDCG@20** \\ \hline ST-RNN & 0.0264\(\pm\)0.0002 & 0.0397\(\pm\)0.0001 & 0.0189\(\pm\)0.0001 & 0.0222\(\pm\)0.0001 & 0.0185\(\pm\)0.0002 & 0.0357\(\pm\)0.0003 & 0.0098\(\pm\)0.0001 & 0.0131\(\pm\)0.0001 \\ STAN & 0.1971\(\pm\)0.0021 & 0.2459\(\pm\)0.0025 & 0.1443\(\pm\)0.0021 & 0.1563\(\pm\)0.0023 & 0.0846\(\pm\)0.0008 & 0.1351\(\pm\)0.0011 & 0.0421\(\pm\)0.0008 & 0.0485\(\pm\)0.0011 \\ TiSASRec & 0.0396\(\pm\)0.0017 & 0.0630\(\pm\)0.0023 & 0.0214\(\pm\)0.0011 & 0.0272\(\pm\)0.0012 & 0.0793\(\pm\)0.0007 & 0.1276\(\pm\)0.0001 & 0.0409\(\pm\)0.0001 & 0.0530\(\pm\)0.0001 \\ SLRC & 0.1837\(\pm\)0.0014 & 0.2262\(\pm\)0.0008 & 0.1390\(\pm\)0.0012 & 0.1441\(\pm\)0.0015 & 0.1053\(\pm\)0.0010 & 0.1650\(\pm\)0.0016 & 0.0516\(\pm\)0.0011 & 0.0580\(\pm\)0.0011 \\ \hline LightGCN & 0.0337\(\pm\)0.0001 & 0.0562\(\pm\)0.0003 & 0.0109\(\pm\)0.0001 & 0.0142\(\pm\)0.0001 & 0.0380\(\pm\)0.0002 & 0.0646\(\pm\)0.0002 & 0.0179\(\pm\)0.0001 & 0.0205\(\pm\)0.0001 \\ SAE-NAD & 0.0873\(\pm\)0.0043 & 0.1314\(\pm\)0.0049 & 0.0541\(\pm\)0.0019 & 0.0678\(\pm\)0.0020 & 0.0555\(\pm\)0.0013 & 0.0938\(\pm\)0.0022 & 0.0379\(\pm\)0.0008 & 0.0512\(\pm\)0.0012 \\ TGSRec & 0.1595\(\pm\)0.0163 & 0.2141\(\pm\)0.0116 & 0.1071\(\pm\)0.0205 & 0.1208\(\pm\)0.0194 & 0.0619\(\pm\)0.0007 & 0.0998\(\pm\)0.0010 & 0.0315\(\pm\)0.0006 & 0.0409\(\pm\)0.0006 \\ TGN & 0.2440\(\pm\)0.0046 & 0.3041\(\pm\)0.0029 & 0.1692\(\pm\)0.0064 & 0.1843\(\pm\)0.0059 & 0.0902\(\pm\)0.0037 & 0.1448\(\pm\)0.0076 & 0.0468\(\pm\)0.0024 & 0.0604\(\pm\)0.0033 \\ \hline
**SPCS** & **0.3576\(\pm\)**0.0007 & **0.3931\(\pm\)**0.0014 & **0.2931\(\pm\)**0.0010 & **0.3021\(\pm\)**0.0008 & **0.1863\(\pm\)**0.013 & **0.2662\(\pm\)**0.0020 & **0.1073\(\pm\)**0.0007 & **0.1273\(\pm\)**0.0008 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparison results on two datasets. Underline means the best baseline, and bold means the best performance. We report the average and standard deviation over 3 independent runs.
Figure 4. Case study for user-994 over a period of time on Meituan dataset.
Figure 3. Ablation study for SPCS on two datasets.
the results, we find that the user's target items are mostly located within 2km, and SPCS performs better than TGN in hitting these items. This further demonstrates the importance of simultaneously modeling the spatiotemporal periodicity and collaborative signal for local-life service recommendation.
## 5. Conclusion
In this paper, to simultaneously model the spatiotemporal periodicity and collaborative signal for local-life service recommendation, we propose a novel SPCS. To capture the spatiotemporal contexts in graphs, we design the encoding layer with temporal encoding and spatial encoding. To further capture the spatiotemporal periodicity and collaborative signal, we design a novel spatiotemporal graph transformer (SGT) layer, which aggregates the spatiotemporal information from multi-hop neighbors. Extensive experiments on both real-world public and industrial datasets demonstrate the effectiveness of our proposed SPCS.
|
2309.04307 | Flat holography and celestial shockwaves | In this paper we systematically develop the flat/CFT holographic dictionary,
building on AdS/CFT holography. After analysing the behaviour of scalar field
modes on hyperbolic slices of Minkowski and performing the holographic
renormalisation for the associated onshell action, we obtain a holographic
dictionary between the bulk theory and the corresponding dual theory on the
celestial sphere. We propose that a single scalar field in the bulk is dual to
two series of operators on the celestial sphere; the scaling dimension of these
operators takes values on the principal series. The real time features of the
bulk theory, such as the dynamical and the causal structure, are encoded in the
construction of correlation functions on the boundary via the coefficients of
the bulk modes. Moreover, we will see that the two series of operators can be
interpreted as ingoing and outgoing waves in the bulk. We illustrate our
dictionary with the example of a single shock wave. Our results lay foundations
for further computation within the flat/celestial CFT correspondence. | Zezhuang Hao, Marika Taylor | 2023-09-08T13:10:49Z | http://arxiv.org/abs/2309.04307v3 | # Flat holography and celestial shockwaves
###### Abstract
In this paper we systematically develop the flat/CFT holographic dictionary, building on AdS/CFT holography. After analysing the behaviour of scalar field modes on hyperbolic slices of Minkowski and performing the holographic renormalisation for the associated onshell action, we obtain a holographic dictionary between the bulk theory and the corresponding dual theory on the celestial sphere. We propose that a single scalar field in the bulk is dual to two series of operators on the celestial sphere; the scaling dimension of these operators takes values on the principal series. The real time features of the bulk theory, such as the dynamical and the causal structure, are encoded in the construction of correlation functions on the boundary via the coefficients of the bulk modes. Moreover, we will see that the two series of operators can be interpreted as ingoing and outgoing waves in the bulk. We illustrate our dictionary with the example of a single shock wave. Our results lay foundations for further computation within the flat/celestial CFT correspondence.
###### Contents
* 1 Introduction
* 2 Mode Analysis on Minkowski
* 2.1 Milne Slicing
* 2.2 Explicit Modes
* 2.3 Radial Equation
* 3 Holography
* 3.1 Holographic Dictionary
* 3.2 Holography Dictionary for Milne
* 3.3 Correlation Functions
* 3.4 Holographic Dictionary for Onshell Scalar Fields
* 4 Shock Waves and Their Holographic Interpretation
* 4.1 Coefficients
* 4.2 Cauchy Problem and Scattering
* 5 Discussion and Conclusions
* A Coordinates
* B Solutions
* C Harmonic Modes
## 1 Introduction
General relativity and quantum mechanics have been with us for a century, and they are believed to be the most fundamental rules governing, respectively, the large scale structure of the universe and the microscopic interactions between elementary particles, even though they are not compatible with each other. After their great success in predicting observations, physicists have spent a long time looking for a unified theory of quantum mechanics and gravity, e.g. semiclassical field theory, supergravity, string theory. A quantum gravity theory that would please everyone has not been found yet; however, during the extensive study of various proposed models, another fundamental principle, which relates the dimension of spacetime to quantum and gravitational effects, has been discovered and has attracted great attention in this century: the so-called holographic principle.
The idea of projecting the physical world onto a lower dimensional one living on the boundary has existed for a long time, but it was not formally discussed in the physics literature until the work [1; 2], initiated by the study of black hole entropy [3; 4; 5; 6], which tells us that the entropy of a black hole is proportional to its horizon area.
Based on this observation, one can further conclude that the degrees of freedom, or information, of a given system are bounded by the area of its boundary rather than by its volume, which makes it possible to encode all the bulk information into a proposed boundary system. This correspondence concerning the assignment of degrees of freedom was then developed into a duality between subregions of the bulk and of the boundary, conjectured to be characterised by the Ryu-Takayanagi surface [7]. Under suitable assumptions, the conjecture was proved in [8; 9] and then, taking quantum effects into consideration, the concept of the RT surface was generalised to the so-called quantum extremal surface [10].
Here we will not follow the stream of discussion around entropy; indeed, the structure of holography turned out to be more than a projection of degrees of freedom after the first concrete realization of the holographic principle was discovered in [11], the AdS/CFT correspondence. In that work, Maldacena pointed out that type IIB string theory on the \(AdS_{5}\times S^{5}\) background is dual to super-Yang-Mills theory in 3+1 spacetime dimensions, by studying the decoupling limit of a stack of \(D_{3}\) branes in string theory and its corresponding low energy supergravity solution. This implies that, in addition to the reduction of dimension, the theories of quantum mechanics and gravity can be related when comparing the theory in the AdS bulk with its boundary CFT correspondent. Such a relation works much like the relativity between space and time in gravity, or the relativity between particle and wave in quantum mechanics.
However, in practice, due to the lack of knowledge of the quantum gravity theory and the difficulty of studying strongly coupled gauge theory at low energy, one can first choose to investigate the AdS/CFT correspondence in the 't Hooft large \(N\) limit, under which the gauge theory simplifies, since the contribution from planar diagrams becomes dominant when the number of colors \(N\) is large with \(\lambda=g_{YM}^{2}N\) kept constant [12]. From the bulk side, we see that the string theory becomes classical, as can be read off from the map between parameters \(g_{s}\sim g_{YM}^{2}\) and \(\alpha^{\prime}\sim 1/\sqrt{g_{s}N}\). Moreover, by taking a large value of \(\lambda\), the bulk theory becomes weakly coupled, so the AdS/CFT correspondence is a weak/strong duality. In this case, the bulk theory is described by semiclassical field theory and one can write down the effective action, decompose the fields at the boundary, and then map the data from the asymptotic AdS infinity to the boundary CFT; this map is named the AdS/CFT dictionary.
In the literature, there are mainly two ways to construct the AdS/CFT dictionary [13; 14]. One starts from the effective field theory on the \(AdS_{5}\times S^{5}\) background while the other starts from \(AdS_{5}\); they are thus called the top-down and bottom-up approaches to AdS/CFT, respectively. At first sight, the bottom-up approach looks easier, since one just considers the fields on the \(AdS_{5}\) background, but the supersymmetric information is lost by ignoring the Kaluza-Klein fields on the \(S^{5}\) sphere; e.g. we would obtain a non-zero vacuum energy. This issue was resolved in the work of Skenderis and Taylor [15; 16]. They developed a KK reduction map which reduces all the fields in 10d to 5d in a gauge invariant way, and therefore concluded that the top-down and bottom-up approaches can be equivalent provided that a proper reduction procedure is applied. In this article, we will adopt the bottom-up approach and ignore the KK fields on the internal space. In this case, the duality is clarified by the dictionary proposed by Witten
\[\exp\Big{(}-S_{\text{AdS}_{d+1}}(\Phi)\Big{)}\Big{|}_{\Phi\sim\phi_{0}}=\Big{\langle}\exp\Big{(}-\int_{S^{d}}\phi_{0}\,\mathcal{O}\Big{)}\Big{\rangle}_{\text{CFT}}, \tag{1}\]
in which \(S_{\text{AdS}_{d+1}}(\Phi)\) is the action of the semi-classical theory in the bulk, with scalar fields characterised by the asymptotic behaviour \(\Phi\sim\rho^{-d+\Delta}\phi_{0}\) at large AdS radius \(\rho\). From the right hand side, we can see that \(\phi_{0}\) plays the role of the source in the CFT and is coupled to the operator \(\mathcal{O}\). The scale dimension \(\Delta\) of the operator and the mass \(M\) of the particle in the bulk satisfy the relation \(\Delta(\Delta-d)=M^{2}\), which is obtained by solving the equation of motion of \(\Phi\).
During the last two decades, the AdS/CFT correspondence has attracted great interest among physicists, and the general correspondence principle itself, or the dictionary (1), has been verified in a considerable body of work by constructing suitable models and comparing the results of calculations on the two sides. For example, one can see the work on the HS/CFT correspondence [17; 18; 19; 20; 21] and the work on the AdS\({}_{3}\)/CFT\({}_{2}\) correspondence [22; 23; 24]. However, although the AdS/CFT correspondence has passed all theoretical tests so far, it is still far from being tested in the lab or producing non-trivial predictions about the physical world. One of the main issues is that, on the bulk side of the duality, the background geometry is AdS, while cosmological measurements tell us that our universe has small positive curvature and is close to flat spacetime.
There are developments on the dS/CFT correspondence [25; 26], obtained by matching the generators of the isometry group of de Sitter space with the boundary conformal symmetry group, but it is still not clear how to decompose the boundary data of fields on dS and map them to the boundary CFT, and therefore how to construct the dS/CFT dictionary. As for the flat case, following the ideas of [27; 28; 29], it has been proposed that scattering amplitudes in Minkowski space are dual to correlation functions of a celestial CFT living on the celestial sphere. This proposal has been developed dramatically in recent years (one can see the lecture notes [30; 31; 32] and references therein), but the approach departs from the standard treatment of AdS/CFT.
Actually, there have been attempts to address the problems of developing a flat version of the holographic principle dating back to the birth of AdS/CFT, in the talk given by Witten [33]. In that talk, he discussed various obstacles to writing down a flat/CFT dictionary. Conceptually, if one assumes that both the quantum gravity theory and its scattering amplitudes are dual to CFTs on the boundary, then it is hard to understand why the quantum gravity theory should be equivalent to its own scattering amplitudes. From the technical point of view, the complexity of the geometric structure and of the behaviour of fields at the two null boundaries of Minkowski space makes it hard to write down boundary correlation functions or to study the distribution of the degrees of freedom. At the end, he proposed that if the flat theory is dual to a structure \(X\) on the boundary, then \(X\) should be more complicated than a CFT. The complicated nature of the structure \(X\) can also be seen from the study of the symmetries of asymptotically flat spacetime. Unlike the AdS case, the asymptotic symmetry group of asymptotically flat space turns out to be the infinite-dimensional BMS group [34; 35; 36] rather than the Poincaré group. Globally, the BMS group is generated by supertranslations and superrotations, in which the supertranslations behave like angle-dependent translations along the null direction, while the superrotations are characterised by \(SL(2,\mathbb{C})\). After fifty years of study of the BMS group, it was realised that the superrotations can be locally generalised to the Virasoro algebra, even with central extension [37; 38; 39], which brings hope of constructing a duality between the flat theory and a 2d CFT [40].
Based on the further observation that the supertranslation Ward identity is equivalent to Weinberg's soft graviton theorem on the celestial sphere [41; 42], found when studying the symmetries of graviton scattering amplitudes, Strominger and his collaborators conjectured the duality between scattering amplitudes and a celestial CFT, so-called celestial holography. Since then, many properties of the celestial CFT have been explored: e.g. the dual celestial stress tensor has been constructed [43], and the corresponding celestial CFT OPE coefficients have been discussed [44; 45; 46]. Although the celestial CFT exhibits a rich structure with which to study scattering amplitudes, it differs from a standard 2d CFT, and some other issues cause considerable confusion. For example, it is not understood why a real-time flat theory should be dual to a Euclidean theory on the sphere, and it is still hard to say whether the celestial CFT is unitary, since the scale dimensions living on the principal series are complex. There are later developments which claim that the \(4d\) scattering amplitudes should be dual to a \(3d\) Carrollian CFT [47; 48], so that the BMS symmetry is manifest and the signatures on the two sides match. Here we are not going to follow that approach, but one can see that some of these problems will become clear once the structure \(X\) is specified and a proper flat/CFT dictionary is given.
The main goal of this article is to develop the AdS/CFT correspondence into a flat/CFT correspondence, thus bringing the holographic principle, and especially the body of work on AdS/CFT, to a measurable level. More precisely, we will construct a dictionary between flat spacetime and the CFT on the boundary which works in the same way as (1).
Before going into the flat/CFT dictionary, we first introduce the other principle used many times in this article, namely the completeness of the mode expansion of a generic physical field: for a linear physical system, a generic field configuration can be decomposed into given modes, with the information of the field encoded in the coefficients, which are determined by the boundary data. Physically, for example in quantum mechanics and quantum field theory, one always assumes that the modes form a complete basis for the space of physical solutions; the rigorous mathematical structure behind this is Sturm-Liouville theory, even though the boundary conditions are often hard to specify or to check in a physical situation.
In physics the mode expansion is also called the superposition principle and has been used widely, dating from the birth of quantum mechanics. Here we will reconsider the mode expansion and find that it is not as obvious as one might think, although it has long been taken for granted. Taking quantum field theory as an example, the traditional modes are the plane waves \(\Phi_{K}=e^{iK\cdot X}\), and all on-shell modes satisfying \(-K^{2}=M^{2}\) form a complete basis for the field describing particles of mass \(M\). After quantisation, the coefficients of the plane waves are promoted to creation and annihilation operators. Recently, besides plane waves, a new kind of basis, the so-called conformal basis \(\Phi_{\Delta}\)[49; 50], has been constructed to highlight the symmetry of the Lorentz group \(SO(1,3)\); unitarity of the representation of the Lorentz group requires that \(\Delta\) lie on the principal series, and one assumes that all states on the principal series form a complete basis. In addition to the conformal basis, in this article we are going to introduce another kind of mode, based on the foliation of Minkowski space as
\[-(X^{0})^{2}+(X^{1})^{2}+(X^{2})^{2}+(X^{3})^{2}=-\tau^{2}, \tag{2}\]
in which one can treat each slice as the embedding of an AdS hyperboloid with radius \(\tau\geq 0\). Given such a foliation, one can further choose \(\tau\) together with the coordinates on the AdS surface as coordinates of Minkowski space, and thereby recast the Minkowski equation of motion as an equation on the AdS hyperboloid. Then, according to the superposition principle, one can claim that a generic field can be decomposed into modes \(\Phi_{k}\) with effective mass \(k\) on the AdS surface. Since the equation on AdS is not a physical on-shell condition here, \(k\) can take any value in the complex plane, and it is not yet clear how to determine which values form the necessary complete basis. From the boundary point of view, this leaves the range of the scale dimension \(\Delta_{k}\) undetermined, since we have the dictionary
\[\Delta_{k}(\Delta_{k}-2)=k^{2}. \tag{3}\]
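As a quick consistency check (a minimal sympy sketch under our own naming conventions, not part of the original derivation), one can solve (3) for \(\Delta_{k}\) and observe that choosing \(k^{2}=-(1+p^{2})\) with real \(p\) places the dimensions on the principal series \(\Delta_{k}=1\pm ip\):

```python
import sympy as sp

Delta, k = sp.symbols('Delta k', real=True)
p = sp.symbols('p', positive=True)

# solve the dictionary Delta_k (Delta_k - 2) = k^2 for the scale dimension
sols = sp.solve(sp.Eq(Delta*(Delta - 2), k**2), Delta)
print(sols)  # [1 - sqrt(k**2 + 1), 1 + sqrt(k**2 + 1)]

# an imaginary effective mass with k^2 = -(1 + p^2) lands on the principal series
print([s.subs(k**2, -(1 + p**2)) for s in sols])  # [1 - I*p, 1 + I*p]
```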
We will use the Klein-Gordon equation as an example to illustrate how the mode expansion works in the context of the AdS slicing (2), and we discuss various possible choices of \(k\) in section 2 by exploring the physical meaning and the stability of the on-shell modes.
After a careful study of the mode analysis, one will be able to decompose the bulk action \(S\) for Minkowski space into \(k\)-mode components \(S(k)\), as has been done for on-shell fields. To construct a flat/CFT dictionary like (1), a technical issue is that the on-shell action \(S^{\text{onshell}}\) is infinite due to the integral over the infinite spacetime volume, and one needs to renormalise \(S^{\text{onshell}}\) in order to make the action finite; we denote the result as \(S^{\text{ren}}\), or equivalently \(S^{\text{ren}}(k)\). This problem was addressed in the work [13] and has been fully developed in subsequent work [51; 52; 53; 54; 55]; the resulting systematic procedure is called holographic renormalisation. The basic idea of holographic renormalisation is that one should treat the infinite part of the action in the bulk as IR divergences and introduce local counterterms \(S^{\text{ct}}\) to cancel the divergences, i.e. \(S^{\text{ren}}=S^{\text{onshell}}+S^{\text{ct}}\). Such IR divergences in the bulk are dual to UV divergences of the boundary QFT through the UV/IR connection [56]. Conversely, a UV divergence in the bulk would be dual to an IR divergence of the boundary QFT, but it should be absent when working in the full context of the holographic principle, since the bulk quantum gravity theory is UV finite. In the low energy effective description of the bulk theory, UV divergences do appear and contribute to anomalous dimensions of CFT operators from the boundary point of view. We will not discuss them in this article; one can see a direct treatment of UV divergences from the bulk side in the recent work [57]. In section 3, we will first decompose the field into AdS modes and then apply the holographic renormalisation procedure on each single AdS surface, thus completing the holographic renormalisation for flat spacetime.
Given the flat holographic renormalisation, one then obtains the dictionary between the effective theory on Minkowski space and the CFT living on the boundary sphere. The CFT content can be read off from the renormalised action \(S^{\text{ren}}\). It turns out that a single bulk scalar field is dual to two series of CFT operators on the sphere, with scale dimensions living on the principal series. The way in which they are coupled to each other is determined by the dynamical and causal structure of the bulk theory. The corresponding two-point and three-point correlation functions on the celestial sphere are also studied in the context of \(\Phi^{3}\) and \(\Phi^{4}\) interactions, and they are represented as double-disk diagrams.
Later, in section 4, we will see that we can decompose a massless field into in- and outgoing shock waves, and each shock wave is dual to one series of operators on the celestial sphere. One-point and two-point functions on the celestial sphere dual to the shock wave are also derived. For spherical shock waves, the two-point function becomes trivial at leading order, while the subleading term relies on the study of the backreaction of the metric and the breaking of spherical symmetry at the perturbative level. As a simple model, we take the spherical shock wave as an example on which to perform the mode analysis procedure introduced in section 2, and the corresponding coefficients are determined. Furthermore, we find that the full information of Minkowski space can be stored on a pair of AdS hyperboloids, which form a new kind of Cauchy surface.
## Mode Analysis on Minkowski
In this section we consider solutions of the scalar field equation on Minkowski space and discuss how these can be used to construct a basis for scalar fields satisfying the given boundary conditions. We will begin our discussions with the familiar analysis within Minkowski coordinates before moving to Anti-de Sitter and de Sitter slicings.
Let us begin with the massive scalar equation
\[\left(\frac{\partial}{\partial X^{\mu}}\frac{\partial}{\partial X_{\mu}}-M^{2} \right)\Phi_{M}(X)=0, \tag{2.1}\]
in which \(M\) is the mass of the scalar field \(\Phi_{M}\) and \(X^{\mu}=\{X^{0},X^{i}\}\) are coordinates of the Minkowski space \(\mathbb{R}^{1,3}\) with signature \((-,+,+,+)\).
One can immediately write down a basis of solutions to this equation
\[f_{K}(X)=e^{iK\cdot X} \tag{2.2}\]
in which \(K^{2}+M^{2}=0\) and \(K^{\mu}\) is understood as the momentum of the particle. The restriction of \(K^{\mu}\) to be real follows from imposing boundedness of the field as either \(X^{0}\) or \(X^{i}\) approach infinity; this is implicitly assumed in most analyses. A generic scalar field \(\Phi\) satisfying these boundary conditions can then be expressed as usual as
\[\Phi(X)=\int d^{4}K\;\Phi(K)f_{K}(X) \tag{2.3}\]
For an onshell field of mass \(M\) the field in momentum space is such that \(\Phi_{M}(K)\propto\delta(K^{2}+M^{2})\).
The approach above intrinsically respects relativistic covariance. In some contexts one works with bases that partially break this covariance, particularly by separating space and time. The basis above can trivially be rewritten as
\[f_{\omega,k}(X^{0},X^{i})=e^{i\omega X^{0}}e^{ik_{i}X^{i}}, \tag{2.4}\]
where the mass-shell condition is \(\omega^{2}=k^{i}k_{i}+M^{2}\). Again this basis can be used to express any scalar field with the same boundary conditions. One can also use a mixed representation to express a field e.g.
\[\Phi(X^{0},X^{i})=\int d^{3}k\;\Phi(X^{0},k^{i})e^{ik_{i}X^{i}} \tag{2.5}\]
although the onshell condition for the mixed representation field is then a differential equation rather than an algebraic one, i.e.
\[\partial^{2}_{X^{0}}\Phi(X^{0},k^{i})=-(k^{2}+M^{2})\Phi(X^{0},k^{i}). \tag{2.6}\]
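For illustration, here is a minimal numerical sketch (our own toy parameter values) confirming that the mixed-representation condition (2.6) simply describes harmonic evolution in \(X^{0}\) with frequency \(\omega=\sqrt{k^{2}+M^{2}}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# mixed-representation onshell condition (2.6): for fixed spatial momentum k,
# d^2 Phi / d(X^0)^2 = -(k^2 + M^2) Phi, i.e. oscillation at omega = sqrt(k^2 + M^2)
k2, M2 = 2.0, 1.0
omega = np.sqrt(k2 + M2)
rhs = lambda t, y: [y[1], -(k2 + M2)*y[0]]          # y = (Phi, dPhi/dX^0)
sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 10.0, 5)
print(sol.sol(t)[0])       # numerically integrated Phi(X^0, k)
print(np.cos(omega*t))     # analytic solution cos(omega X^0); the two agree
```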
### Milne Slicing
Following this review, we now consider solutions of the scalar field equation using Anti-de Sitter and de Sitter slicings of Minkowski space. We illustrate these slicings in Figure 1. The regions \(\mathcal{A}^{\pm}\) are foliated by Euclidean Anti-de Sitter (hyperbolic) surfaces while the region \(\mathcal{D}\) is foliated by de Sitter surfaces. To describe the region \(\mathcal{A}\), which is sliced by hyperboloids, we use Milne coordinates, in which the metric reads
\[ds^{2}=G_{\mu\nu}dX^{\mu}dX^{\nu}=-d\tau^{2}+\tau^{2}\left(\frac{d\rho^{2}}{1+ \rho^{2}}+2\rho^{2}\gamma_{z\bar{z}}dzd\bar{z}\right), \tag{2.7}\]
in which \(\rho,\tau\in\mathbb{R}\). Here \(z,\bar{z}\) are complex coordinates and \(\gamma_{z\bar{z}}\) is the standard metric on the sphere. \(\tau\) is the radius of the AdS hyperboloid introduced in (1.2), and one only needs to take the positive part \(\tau\geq 0\) to cover the single region \(\mathcal{A}^{+}\). The Milne horizon is given by \(\tau\to 0\), while the null infinity is the region where \(\tau\to\infty\). In these coordinates, the scalar equation separates into two equations
\[\left(\rho(\rho^{2}+1)\partial_{\rho}^{2}+(3\rho^{2}+2)\partial_{\rho}-k^{2}\rho-\frac{l(l+1)}{\rho}\right)\phi_{l}(\rho,k) = 0, \tag{2.8}\] \[\left(-3\frac{\partial_{\tau}}{\tau}-\partial_{\tau}^{2}+\frac{\omega^{2}}{\tau^{2}}-M^{2}\right)\psi(\tau,\omega) = 0, \tag{2.9}\]
where the first equation represents a particle of effective mass \(k\) on the hyperboloid and the second equation depends only on the time \(\tau\). Here \(l\) labels the usual discrete eigenvalue of scalar spherical harmonics \(Y_{m}^{l}(z,\bar{z})\). Accordingly the scalar basis can be expressed as
\[f_{\omega,k,l,m}(\tau,\rho,z,\bar{z})=\psi(\tau,\omega)\phi_{l}(\rho,k)Y_{m}^{l}(z,\bar{z}) \tag{2.10}\]
where the onshell condition requires \(\omega=k\). As above, we will be interested in using this basis to represent fields with the same boundary conditions which are not necessarily onshell, hence we do not impose \(\omega=k\) a priori. Now, based on the superposition principle, any scalar satisfying the boundary conditions can be expressed as
\[\Phi(\tau,\rho,z,\bar{z})=\sum_{l,m}\int\,d\omega dk\;f_{\omega,k,l,m}(\tau,\rho,z,\bar{z})\tilde{\Phi}(\omega,k,l,m)=\int\,d\omega dkf(\tau,\rho,z,\bar{z};k,\omega), \tag{2.11}\]
where \(\tilde{\Phi}\) can be treated as coefficients and one can deduce them by applying the orthogonality relation of the basis
\[\int\,d\tau d\rho dzd\bar{z}\;w(\tau,\rho,z,\bar{z})\;f_{\omega,k,l,m}(\tau,\rho,z,\bar{z})f_{\omega^{\prime},k^{\prime},l^{\prime},m^{\prime}}(\tau,\rho,z,\bar{z})=\delta_{ll^{\prime}}\delta_{mm^{\prime}}\delta(\omega-\omega^{\prime})\delta(k-k^{\prime}) \tag{2.12}\]
with a proper weight function \(w\) deduced from the equation of motion. Sometimes it is convenient to perform the sum over the discrete variables and absorb the coefficients into the mode function, thereby defining the \((\omega,k)\) mode \(f(\tau,\rho,z,\bar{z};k,\omega)\). In relation (2.11) we express the integrals abstractly; we will discuss how the domain of \((\omega,k)\) relates to boundary and regularity conditions below.
We can also define a basis on spatial slices
\[F_{k,l,m}(\rho,z,\bar{z})=\phi_{l}(\rho,k)Y_{m}^{l}(z,\bar{z}) \tag{2.13}\]
Figure 1: The Milne wedges \(\mathcal{A}^{\pm}\) are sliced by AdS surfaces while the Rindler wedge \(\mathcal{D}\) is foliated by dS surfaces.
Any scalar satisfying the equation of motion can be expressed as
\[\Phi(\tau,\rho,z,\bar{z})=\sum_{l,m}\int\,dk\;F_{k,l,m}(\rho,z,\bar{z})\bar{\Phi}(\tau,k,l,m), \tag{2.14}\]
where we have imposed the on-shell condition \(\omega=k\) and reorganized the product \(\tilde{\Phi}(k,k,l,m)\psi(\tau,k)\) into \(\bar{\Phi}(\tau,k,l,m)\).
Analogously we can transform only in the time direction i.e.
\[\Phi(\tau,\rho,z,\bar{z})=\int\,d\omega\;\psi(\tau,\omega)\hat{\Phi}(\omega,\rho,z,\bar{z}), \tag{2.15}\]
where again we repackage the data, turning \(\sum_{lm}\tilde{\Phi}(\omega,\omega,l,m)F_{\omega,l,m}(\rho,z,\bar{z})\) into \(\hat{\Phi}(\omega,\rho,z,\bar{z})\). We will see later that these two are the most natural ways to read off the holographic data.
### Explicit Modes
In this section, we turn to the explicit solution of the differential equations above. These have been discussed in the literature [58, 59, 60, 29, 31], but here we will consider in further detail the role of regularity and boundary conditions. Together with stability, we will see that those requirements will impose constraints on the parameter \(\omega\) or \(k\).
#### Massless Fields
Let us consider the differential equation in time. It is useful to consider first the case of a massless field, so that the equation reduces to
\[\left(-3\frac{\partial_{\tau}}{\tau}-\partial_{\tau}^{2}+\frac{\omega^{2}}{\tau^{2}}\right)\psi(\tau,\omega)=0 \tag{2.16}\]
after setting \(M=0\) in (2.9). The generic solution takes the form
\[\psi(\tau,\omega)=\psi(\alpha_{+})\tau^{-1+\alpha_{+}}+\psi(\alpha_{-})\tau^{-1+\alpha_{-}} \tag{2.17}\]
where \(\alpha_{\pm}\) are the two roots of
\[\alpha^{2}=1+\omega^{2}. \tag{2.18}\]
Solutions are bounded, \(|\psi|<\infty\), at the null infinity \(\tau\rightarrow\infty\) if either \(\text{Re}(\alpha_{+})\leq 1\) or \(\text{Re}(\alpha_{-})\leq 1\). More precisely, states that are localised in the interior and vanish at the boundary have \(|\text{Re}(\alpha)|<1\) and are called bound states. States which can propagate to infinity and have a non-zero contribution at the null boundary are called scattering states. We will study the two cases separately.
_Scattering States_
For scattering states, we should have either \(\text{Re}(\alpha_{+})=1\) or \(\text{Re}(\alpha_{-})=1\). Thus all solutions that are finite at infinity have the form
\[\alpha=1+ip \tag{2.19}\]
with \(p\) real. We can write a general scattering state as
\[\psi(\tau,p)=\psi(p)e^{ip\ln\tau} \tag{2.20}\]
where \(p\) is real, \(\alpha^{2}=(1-p^{2})+2ip\) and \(\omega^{2}=2ip-p^{2}\). Clearly each such mode is not real. If \(\alpha_{+}=1+ip\), then the corresponding second root of (2.18) is \(\alpha_{-}=-(1+ip)\); the latter mode is bounded as \(\tau\rightarrow\infty\) but is not bounded as \(\tau\to 0\). Thus for a given real value of \(p\) the general solution takes the form
\[\psi(\tau,p)=\psi_{+}(p)\tau^{ip}+\psi_{-}(p)\tau^{-ip-2}\equiv\psi_{+}(p)f_{+}(\tau,p)+\psi_{-}(p)f_{-}(\tau,p) \tag{2.21}\]
To understand the orthogonality relation it is useful to first recall the standard relations for exponentials i.e.
\[\int_{-\infty}^{\infty}d(\ln\tau)e^{i(p-q)\ln\tau}=\int_{0}^{\infty}\frac{d\tau}{\tau}e^{i(p-q)\ln\tau}=2\pi\delta(p-q) \tag{2.22}\]
The latter is equivalent to
\[\int_{0}^{\infty}d\tau w(\tau)f_{+}(\tau,p)f_{-}(\tau,q)=2\pi\,\delta(p-q) \tag{2.23}\]
where the weight function \(w(\tau)=\tau\) is derived by expressing (2.16) in standard Sturm-Liouville form, i.e.
\[\partial_{\tau}\left(P(\tau)\partial_{\tau}\psi\right)+Q(\tau)\psi=-\lambda w(\tau)\psi \tag{2.24}\]
where \(\lambda\) is the eigenvalue, i.e. \(\omega^{2}\), and the coefficient functions \((P(\tau),Q(\tau))\) follow from (2.16).
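As a small symbolic check (a minimal sympy sketch, using our own variable names), one can verify both the indicial relation \(\alpha^{2}=1+\omega^{2}\) for the power-law modes and the self-adjoint rewriting of (2.16), with \(P(\tau)=\tau^{3}\) and weight \(w(\tau)=\tau\):

```python
import sympy as sp

tau, omega, alpha = sp.symbols('tau omega alpha', positive=True)
psi = tau**(-1 + alpha)

# the massless tau equation (2.16): -psi'' - (3/tau) psi' + (omega^2/tau^2) psi = 0
eq = -sp.diff(psi, tau, 2) - 3*sp.diff(psi, tau)/tau + omega**2/tau**2*psi
print(sp.expand(sp.simplify(eq/psi*tau**2)))   # -alpha**2 + omega**2 + 1, i.e. alpha^2 = 1 + omega^2

# multiplying (2.16) by the integrating factor tau^3 gives the self-adjoint form
# (tau^3 psi')' = omega^2 * tau * psi, identifying P(tau) = tau^3 and w(tau) = tau
lhs = sp.diff(tau**3*sp.diff(psi, tau), tau)
print(sp.simplify(lhs - omega**2*tau*psi))     # vanishes precisely when alpha^2 = 1 + omega^2
```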
_Bound States on Principal Series_
For bound states, \(|\psi|\to 0\) as \(\tau\to\infty\), so, as we have mentioned, \(\alpha\) should satisfy \(\text{Re}(\alpha_{\pm})<1\). Here we are interested in the special case in which \(\alpha_{\pm}\) are chosen to be
\[\alpha_{\pm}=\pm ip \tag{2.25}\]
for \(p\in\mathbb{R}\); the \(\tau\) modes \(f_{\pm}\) then become
\[f_{+}(\tau,p)=\tau^{-1+\alpha_{+}}=\frac{e^{ip\ln\tau}}{\tau},\qquad f_{-}(\tau,p)=\tau^{-1+\alpha_{-}}=\frac{e^{-ip\ln\tau}}{\tau}. \tag{2.26}\]
Now we impose the further restriction \(p\geq 0\). This can always be done since \(f_{+}(\tau,p)=f_{-}(\tau,-p)\), and one can treat this restriction as a removal of the redundancy of the basis, or as the decomposition of the mode into positive and negative frequency components. For a generic function \(\psi(\tau,p)\), we have the decomposition
\[\psi(\tau,p)=\psi(p)f_{+}(\tau,p)+\psi^{*}(p)f_{-}(\tau,p), \tag{2.27}\]
in which \(\psi(p)\) are complex coefficients and \(\psi(\tau,p)\) is now real. Given the weight function \(w(\tau)=\tau\), one can check that
\[\int_{0}^{\infty}d\tau w(\tau)f_{+}(\tau,p)f_{-}(\tau,q)=2\pi\;\delta(p-q) \tag{2.28}\]
and the relation
\[\int_{0}^{\infty}d\tau w(\tau)f_{+}(\tau,p)f_{+}(\tau,q)=\int_{0}^{\infty}d\tau w(\tau)f_{-}(\tau,p)f_{-}(\tau,q)=2\pi\;\delta(p+q)=0. \tag{2.29}\]
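These distributional relations can be checked numerically. The following sketch (the regulator \(L\) and test function are our own choices) uses \(u=\ln\tau\), so that the left hand side of (2.28) becomes the nascent delta \(2\sin((p-q)L)/(p-q)\) when \(u\) is cut off at \(|u|<L\), and smears it against a test function:

```python
import numpy as np

# regulated check of (2.28): with u = ln(tau) and w(tau) = tau,
# int_0^inf dtau w f_+(tau,p) f_-(tau,q) = int du e^{i(p-q)u} -> 2*pi*delta(p-q)
L = 200.0
p = 1.3
q = np.linspace(0.2, 2.4, 4001)
dq = q[1] - q[0]
kernel = 2.0*L*np.sinc((p - q)*L/np.pi)     # = 2*sin((p-q)L)/(p-q), a nascent delta
test = np.exp(-(q - 1.0)**2)                # arbitrary smooth test function of q
print(np.sum(kernel*test)*dq)               # ~ 5.742
print(2*np.pi*np.exp(-(p - 1.0)**2))        # 2*pi*test(p) ~ 5.742, as the delta predicts
```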
Later, we will see that those states are dual to operators on the celestial sphere with scale dimension \(\Delta\) satisfying
\[\Delta=1+\alpha_{+}=1+ip, \tag{2.30}\]
which is half of the principal series that forms the unitary representation of \(SO(1,3)\)[61]. It is also worthwhile to note that the mode expansion (2.15) becomes an inverse Mellin transform if the \(\tau\) modes take the form in (2.27).
Here we should note that the concepts of bound and scattering states are not absolute in a given physical theory. For example, one can also classify all the physical modes by the flux at the null boundary, which behaves like \(\tau^{2}\psi^{2}\). If the scattering modes are defined as the modes that have non-zero flux at the boundary, the previous bound principal states become scattering states according to this new definition. The point is that, as in quantum mechanics, we would like to emphasise that the behaviour of the field configuration at null infinity is controlled by \(\alpha\), which can have an explicit physical meaning depending on the question one is interested in.
#### Massive Fields
For non-zero mass the generic solution takes the form
\[\psi(\tau,\omega)=\psi(\alpha_{+})\frac{J_{\alpha_{+}}(M\tau)}{\tau}+\psi(\alpha_{-})\frac{J_{\alpha_{-}}(M\tau)}{\tau} \tag{2.31}\]
where \(\alpha_{\pm}\) are again the roots of (2.18) and \(J_{\alpha}\) denotes the Bessel function of the first kind. Here we assume that \(\alpha_{\pm}\) are generic complex numbers, in which case the two Bessel functions expressed in this form are manifestly linearly independent. For integer \(\alpha\) the second solution is instead expressed in terms of the Bessel function of the second kind \(Y_{\alpha}\). Solutions that are bounded as \(\tau\to 0\) have \(\mathrm{Re}(\alpha)\geq 1\), since \(J_{\alpha}(M\tau)\sim\tau^{\alpha}\) as \(\tau\to 0\). Using this limit of \(J_{\alpha}(x)\) as \(x\to 0\), the mode functions clearly reduce to those above as \(M\to 0\). At large \(\tau\), the Bessel function is regular, \(J_{\alpha}\sim 1/\sqrt{\tau}\), at the null boundary; this fits our intuition that the trajectory of a massive particle starts at \(i^{-}\) and ends at \(i^{+}\), which is the main difference from the massless case. However, the Bessel functions obey the same orthogonality relations as (2.28), (2.29), provided one performs the proper analytic continuation of the orthogonality relation for real \(\alpha\). In the following sections, we will mainly use the massless solution as the example in our calculations, but one should note that the results can be generalised to the massive case without conceptual obstacles.
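One can verify numerically (a minimal mpmath sketch; the sample values of \(M\), \(p\) and \(\tau\) are our own) that \(\psi=J_{\alpha}(M\tau)/\tau\) with complex order \(\alpha\) indeed solves the massive \(\tau\) equation (2.9) when \(\alpha^{2}=1+\omega^{2}\):

```python
import mpmath as mp

# check that psi = J_alpha(M tau)/tau solves (2.9) for complex order alpha
mp.mp.dps = 30
M, p = mp.mpf(1), mp.mpf(2)
alpha = 1j*p                        # a bound-state choice on the principal series
omega2 = alpha**2 - 1               # so that alpha^2 = 1 + omega^2
psi = lambda t: mp.besselj(alpha, M*t)/t
for t in [mp.mpf('0.7'), mp.mpf('3.1')]:
    residual = (-mp.diff(psi, t, 2) - 3*mp.diff(psi, t)/t
                + (omega2/t**2 - M**2)*psi(t))
    print(abs(residual))            # tiny (numerically zero) at both sample points
```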
As we have seen, the value of \(\alpha(k)\), or equivalently \(\Delta\), is often related to the behaviour of the solution near the light cone or the null boundary. For example, in section 3 of [29] it is argued that the onshell action should be regular around the light cone; thus, for modes behaving as \(\tau^{-1+\alpha}\), one requires
\[\mathrm{Re}(\alpha)\geq 0 \tag{2.32}\]
which is a weaker restriction than the boundedness condition \(\mathrm{Re}(\alpha)\geq 1\).
In section 2 of [58], the regularity of the solution is studied from the normalization point of view. The behaviour of the field at the null boundary and at the light cone are both studied, and it is argued that the solution should be oscillatory in order to make the mode normalizable. In our context, the oscillatory condition means that \(\alpha\) should be complex, i.e. \(\mathrm{Im}(\alpha)\neq 0\). Furthermore, Marolf also argued that the oscillatory fields should be separated into two parts: one is dynamical and normalizable according to the Klein-Gordon norm, while the other is not normalizable and is used to specify the boundary condition of the system. In addition to the Klein-Gordon norm, another kind of pairing between the oscillatory modes is also introduced in order to study the inner product structure among all the modes.
More rigorous studies of the asymptotic behaviour of solutions of the Klein-Gordon equation in the mathematical literature are given in [62; 63; 64]. The boundedness of the solution in the Schwarzschild case is shown in the gravity literature, the so-called Kay-Wald boundedness theorem [65]; one can see the review in [66]. For the Minkowski case, stability of the Einstein equation was first shown in [67]. Then, for the scalar-Einstein case, with the matter field propagating on an asymptotically Minkowski background, stability is also proved provided the decay of the fields is well controlled at the boundary [68], which leads to constraints on the real part of the scale dimension, i.e. \(\mathrm{Re}(\alpha)\). This is similar to the study of the Breitenlohner-Freedman bound for AdS spacetime [69; 70]. For the stability of Minkowski space, a sharp bound for \(\alpha\) has not yet been found; according to the above discussion, we summarise the possible range as 1
Footnote 1: We should note that the upper bound comes from the boundedness of the modes, \(|\psi|<\infty\), at the null infinity, while for the stability of Minkowski space the condition is usually stronger than boundedness. For example, in the work [68], the decay behaviour of the field is required to be \(|\psi|<\tau^{-1}\), so the real part of \(\alpha\) can only be zero after taking the lower bound into consideration. Here we choose to present the wider range for \(\alpha\), although it is not clear to us whether the values \(0<|\mathrm{Re}(\alpha)|\leq 1\) are physical and stable or not.
\[0\leq\mathrm{Re}(\alpha)\leq 1,\qquad\mathrm{Im}(\alpha)\neq 0. \tag{2.33}\]
However, given the fact that \(\alpha\) has two solutions satisfying \(\alpha_{+}+\alpha_{-}=0\), we will have \(\mathrm{Re}(\alpha_{\pm})=0\) if one requires both of the modes \(f_{\pm}(\tau,p)\) to lie within the bound (2.33). For other choices of \(\alpha\), one of the two modes will be stable while the other will not.
For massive particles, we should note that the upper bound \(\mathrm{Re}(\alpha)\leq 1\) is relaxed, since the requirement of regularity at the null boundary does not impose restrictions on the Bessel functions; it is still not known whether the stability condition introduces extra restrictions on the upper bound. The lower bound does not change, since we have seen that the Bessel function approaches the massless solution as \(\tau\to 0\).
### Radial Equation
Now let us turn to the radial equation (2.8). It is important to distinguish between solutions to the equation for all radial values, and the asymptotic expansions from which the holographic dictionaries are constructed. The general solution to the radial equation can be written as
\[\phi_{l}(\rho;k)=\phi(k)\mathrm{csch}\eta\;P_{l}^{\beta}(\mathrm{coth}\eta)+\varphi(k)\mathrm{csch}\eta\;Q_{l}^{\beta}(\mathrm{coth}\eta), \tag{2.34}\]
in which \(\rho=\sinh\eta\) and \((P,Q)\) are associated Legendre functions. Note that the range of \(\eta\) is the same as that for \(\rho\) i.e. \(0\leq\eta<\infty\). The order of the function is given by
\[\beta^{2}=1+k^{2} \tag{2.35}\]
where here we do not assume that \(\beta\) is real. In fact, since \(\mathrm{coth}(\eta)\geq 1\) over the domain of interest, it is more useful to write the general solution in terms of hypergeometric functions, as shown in Appendix B; thus it is convenient to choose the basis as
\[\phi_{l}(\rho;k)=\phi_{l}^{+}(k)\mathrm{csch}\eta\;P_{l}^{\beta_{+}}(\mathrm{coth}\eta)+\phi_{l}^{-}(k)\mathrm{csch}\eta\;P_{l}^{\beta_{-}}(\mathrm{coth}\eta), \tag{2.36}\]
where \(\beta_{\pm}\) are the two (complex) roots of (2.35), with \((\beta_{+}+\beta_{-})=0\).
To understand the regularity and orthogonality relations, it is useful to consider first the \(l=0\) solutions which can be written in terms of elementary functions as
\[\phi_{0}(\rho;k)=\phi^{+}(k)\frac{1}{\rho}(\rho+\sqrt{\rho^{2}+1})^{\beta_{+}}+\phi^{-}(k)\frac{1}{\rho}(\rho+\sqrt{\rho^{2}+1})^{\beta_{-}} \tag{2.37}\]
where \((\beta_{+}+\beta_{-})=0\). A mode is bounded as \(\rho\to\infty\) provided that \(\mathrm{Re}(\beta)\leq 1\). However, no single mode is bounded as \(\rho\to 0\). One can combine modes in a proper way to obtain fields \(\phi_{r}(\rho;k)\) that are bounded as \(\rho\to 0\):
\[\phi_{r}(\rho;k)=\frac{1}{\sqrt{\pi}\rho}\sinh\left(\beta_{+}\ln(\rho+\sqrt{\rho^{2}+1})\right). \tag{2.38}\]
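A short sympy sketch (our own; we write the sinh in exponential form and use sample numerical values) confirms that this combination solves the \(l=0\) radial equation (2.8) with \(k^{2}=\beta_{+}^{2}-1\) and stays bounded at the origin:

```python
import sympy as sp

rho, beta = sp.symbols('rho beta')
s = sp.sqrt(rho**2 + 1)

# regular l=0 combination (2.38), writing sinh(beta*ln x) = (x^beta - x^(-beta))/2
phi = ((rho + s)**beta - (rho + s)**(-beta))/(2*sp.sqrt(sp.pi)*rho)
k2 = beta**2 - 1                      # the order relation (2.35) with beta_+ = beta

# l=0 radial equation (2.8)
eq = rho*(rho**2 + 1)*sp.diff(phi, rho, 2) + (3*rho**2 + 2)*sp.diff(phi, rho) - k2*rho*phi
print(sp.N(eq.subs([(beta, sp.I*2), (rho, sp.Rational(3, 4))]), 15))   # ~ 0
print(sp.N(phi.subs([(beta, sp.I*2), (rho, sp.Rational(1, 10**6))]), 10))
# ~ 2i/sqrt(pi): phi_r indeed tends to beta/sqrt(pi) as rho -> 0, hence bounded
```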
The orthogonality condition for \(l=0\) is obtained, as above, by writing the radial equation in Sturm-Liouville form (2.24), so that the coefficient and weight functions are given by
\[P(\rho)=\rho^{2}(\rho^{2}+1)^{\frac{1}{2}}\qquad w(\rho)=\frac{\rho^{2}}{(\rho^{2}+1)^{\frac{1}{2}}}. \tag{2.39}\]
Therefore we have
\[\int_{0}^{\infty}d\rho w(\rho)\mathcal{F}^{*}(\rho;q)\mathcal{F}(\rho;p)=\delta(p-q),\qquad p,\;q>0 \tag{2.40}\]
where \(p\in\mathbb{R}\), \(\beta=ip\) and
\[\mathcal{F}(\rho;p)=\frac{1}{\sqrt{2\pi}\rho}(\rho+\sqrt{\rho^{2}+1})^{ip}. \tag{2.41}\]
Moreover, using the relation (2.40) one can also obtain the corresponding relation between the regular solutions,
\[\int_{0}^{\infty}d\rho\,w(\rho)\,\phi_{r}^{\,*}(\rho;q)\phi_{r}(\rho;p)=\delta(p-q). \tag{2.42}\]
For the \(l>0\) modes, the analysis of regularity is given in Appendix B, while the orthogonality relations are harder to check; one can see the discussion in [71; 72].
So far, we have discussed the modes for various choices of the value of \(\alpha\) or \(\beta\) and their corresponding physical interpretations, but we should note that it is not clear which of them form the necessary complete basis for the bulk fields, and a general principle for finding such a basis is still absent. Later we will see that different \(k\)-modes contribute to the correlation functions living on the boundary celestial sphere in different ways, depending on the details of the interaction. Here, we assume that, given the details of the theory, a proper subset \(\mathcal{P}\) of values of \(k\) always exists that enables us to perform the mode decomposition, so that the modes form a complete basis and the superposition principle works. In the rest of this article, we will focus on the study of onshell fields, for which the condition \(\alpha=\beta\) is automatically imposed.
## Holography
The purpose of this section is to develop a detailed holographic dictionary between the bulk theory in asymptotically Minkowski spacetimes and the putative dual theory associated with null infinity. We will develop the dictionary using the example of a test scalar field in a fixed Minkowski background. Our approach will be based on the principles of AdS/CFT (1), i.e., writing a defining holographic relation of the form
\[\exp\left(iS^{\rm ren}(\Phi)\right)=\Big{\langle}\exp\;\int_{\partial M}\mathcal{J}\;\mathcal{O}\;\Big{\rangle}_{QFT}. \tag{3.1}\]
Here \(S(\Phi)\) is the action of the bulk theory with scalar field \(\Phi\). Taking into account IR divergences, we will need to renormalise this action, and \(S^{\rm ren}(\Phi)\) is the renormalised version of \(S(\Phi)\); an important part of this section will be establishing the principles underlying the renormalisation procedure. On the right hand side of (3.1) we denote by \(\mathcal{J}\) and \(\mathcal{O}\) the source and the operator in the quantum field theory at the boundary. Again this should be viewed as a renormalised expression.
Following the construction of the dictionary for AdS/CFT [13, 73, 74], here we are also going to specify the source and operator by decomposing the data of the bulk field \(\Phi\) into coefficients in an expansion at the boundary. By inspecting the renormalised action, it turns out that we need two series of operators \(\{\mathcal{J},\mathcal{O}\}\) and \(\{\tilde{\mathcal{J}},\tilde{\mathcal{O}}\}\) on the celestial sphere in order to reconstruct the bulk field, and the way in which they are coupled is determined by the causal and dynamical structure of the bulk theory. The new feature of the flat/CFT dictionary is that we are reducing two spacetime dimensions at once, and the dictionary is built between a bulk theory with a notion of time and a boundary Euclidean theory on the sphere; thus the factor of \(i\) plays an important role here when considering the emergence of time and the unitarity of the CFT.
### Holographic Dictionary
We begin by reviewing the usual holographic dictionary for scalar fields on Euclidean AdS\({}_{3}\). Using the same coordinates for Euclidean AdS\({}_{3}\) as in (2.7), i.e.
\[ds^{2}_{\rm AdS_{3}}=g_{ij}dx^{i}dx^{j}=\left(\frac{d\rho^{2}}{1+\rho^{2}}+2\rho^{2}\gamma_{z\bar{z}}dzd\bar{z}\right), \tag{3.2}\]
the boundary is at \(\rho\to\infty\) and the boundary metric is manifestly spherical. Now consider a massive scalar field with action
\[S_{\rm AdS_{3}}=\frac{1}{2}\int\,d^{3}x\sqrt{g}\left((\partial\varphi)^{2}+m^{2}\varphi^{2}\right), \tag{3.3}\]
where \(g\) is the determinant of the Euclidean AdS\({}_{3}\) metric above. The onshell action is thus
\[S^{\rm onshell}_{\rm AdS_{3}}=\frac{1}{2}\int_{\partial AdS_{3}}d\Sigma^{i}\varphi\partial_{i}\varphi \tag{3.4}\]
where \(d\Sigma^{i}\) is the outward-directed boundary area element, built from \(\sqrt{g}g^{ij}\), and \(dS\) is the volume form on the surface at the cutoff \(\rho=R\). In terms of the coordinates (3.2), it takes the form \(d\Sigma^{\rho}=d^{2}z\;\gamma_{z\bar{z}}R^{2}\sqrt{1+R^{2}}\), while the other two components vanish, \(d\Sigma^{z}=d\Sigma^{\bar{z}}=0\), since the sphere is compact. The asymptotic expansion of an onshell field takes the form
\[\varphi(\rho,z)=\rho^{\Delta-2}\left(\varphi(z)+\cdots\right)+\rho^{-\Delta}\left(\tilde{\varphi}(z)+\cdots\right) \tag{3.5}\]
where \(\varphi(z)\) is the source for the dual operator \(\mathcal{O}_{\varphi}(z)\) of dimension \(\Delta\), with \(m^{2}=\Delta(\Delta-2)\). When \(\Delta\) is integral the asymptotic expansions contain logarithmic terms, which are related to the contact terms in the two-point functions discussed below.
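As a small check of this relation (a sympy sketch under our own naming), the indicial analysis of the large-\(\rho\) scalar equation on Euclidean AdS\({}_{3}\) reproduces the two fall-offs \(\rho^{\Delta-2}\) and \(\rho^{-\Delta}\) with \(m^{2}=\Delta(\Delta-2)\):

```python
import sympy as sp

rho, a, m2, Delta = sp.symbols('rho a m2 Delta')

# leading large-rho form of the EAdS_3 scalar equation:
#   rho^2 phi'' + 3 rho phi' - m^2 phi = 0
phi = rho**a
indicial = sp.expand((rho**2*sp.diff(phi, rho, 2) + 3*rho*sp.diff(phi, rho) - m2*phi)/phi)
print(sp.solve(indicial, a))   # the roots a = Delta - 2 and a = -Delta, with Delta = 1 + sqrt(1 + m2)
print(sp.expand((Delta*(Delta - 2)).subs(Delta, 1 + sp.sqrt(1 + m2))))   # -> m2
```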
One uses the asymptotic expansion of the onshell field to compute the explicit value of the regulated onshell action, from which one can construct covariant counterterms and the renormalised action
\[S^{\text{ren}}_{\text{AdS}_{3}}=\lim_{R\to\infty}\left(S^{\text{onshell}}_{\text{AdS}_{3}}+S^{\text{ct}}_{\text{AdS}_{3}}\right) \tag{3.6}\]
The covariant counterterms are of the form
\[S^{\text{ct}}_{\text{AdS}_{3}}=-\frac{1}{2}(\Delta-2)\int_{\partial AdS_{3}}d^{ 2}x\sqrt{h}\varphi^{2}+... \tag{3.7}\]
where \(h\) is the determinant of the induced metric at the boundary. In the chosen coordinates (3.2), it takes the form \(h=R^{2}\gamma_{z\bar{z}}=R^{2}\Omega_{2}(z)\).
In terms of the complex AdS coordinates (3.2), the AdS\({}_{3}\)/CFT\({}_{2}\) dictionary can then be written as
\[\exp\Big{(}-S_{\text{AdS}_{3}}(\Phi)\Big{)}=\Big{\langle}\,\exp\,-\int_{S^{2} }d^{2}z\Omega_{2}(z)\mathcal{J}(z)\;\mathcal{O}(z)\;\Big{\rangle}, \tag{3.8}\]
where \(\mathcal{J}(z)\sim\Omega_{2}^{\frac{\Delta-2}{2}}(z)\varphi(z)\)2 is the corresponding source and the expectation value of the dual operator is then defined as the variation of the renormalised action with respect to the source
Footnote 2: It is easier to keep track of the weight if one chooses to use the expansion \(\phi(\rho,z)=(\rho\Omega_{2}^{\frac{1}{2}}(z))^{\Delta-2}\varphi(z)+(\rho\Omega _{2}^{\frac{1}{2}}(z))^{-\Delta}\tilde{\varphi}(z)\) and one should note the relation \(\rho(\frac{\Omega(z)}{2})^{\frac{1}{2}}=\frac{1}{t}\) between the complex and Poincaré coordinates.
\[\langle\mathcal{O}_{\varphi}(z)\rangle_{S^{2}}=\frac{1}{\Omega_{2}(z)}\frac{ \delta S^{\text{ren}}(\Phi)}{\delta\mathcal{J}(z)}, \tag{3.9}\]
in which \(\langle\cdots\rangle_{S^{2}}\) means that the operator is inserted on the celestial sphere. From the above definition, one can deduce that the one-point function is proportional to \(\Omega_{2}^{-\frac{\Delta}{2}}(z)\tilde{\varphi}(z)\), i.e. \(\langle\mathcal{O}_{\varphi}\rangle_{S^{2}}\sim\Omega_{2}^{-\frac{\Delta}{2}}(z)\tilde{\varphi}(z)\). In this article, we are interested in operators on the two-dimensional plane, denoted \(M_{2}\); the correlation functions are then related by the conformal weight, \(\langle\mathcal{O}_{\varphi}(z)\rangle\sim\Omega_{2}^{\frac{\Delta}{2}}(z)\langle\mathcal{O}_{\varphi}(z)\rangle_{S^{2}}\). Therefore, after taking the renormalisation factor into consideration, one has the relation
\[\langle\mathcal{O}_{\varphi}(z)\rangle=2(1-\Delta)\tilde{\varphi}(z)+C(\varphi) \tag{3.10}\]
for the operator on the complex plane, and the source now becomes \(\mathcal{J}(z)=\varphi(z)\). Here the function \(C(\varphi)\) denotes contributions to the one-point correlation function that are expressed in terms of the source; such contributions arise whenever \(\Delta\) is integral, and their exact form depends on the regularization scheme. As usual, the two-point function can be obtained by functionally differentiating with respect to the source \(\varphi(z)\), i.e.
\[\langle\mathcal{O}_{\varphi}(z)\mathcal{O}_{\varphi}(z^{\prime})\rangle=-2(1- \Delta)\frac{\delta\tilde{\varphi}(z)}{\delta\varphi(z^{\prime})}+... \tag{3.11}\]
where the ellipses contribute only to contact terms in the correlation function, and the renormalisation factor \(2(\Delta-1)\) can be deduced from the study of the bulk-boundary propagator, which is briefly reviewed in Appendix A.
Given the bulk-boundary propagator \(K(\rho,z;z^{\prime})\), a generic regular field in the bulk with boundary behaviour \(\varphi(\rho,z)\sim\rho^{\Delta-2}\varphi(z)\) can be expressed as
\[\varphi(\rho,z)=\int_{M_{2}}dz^{\prime}d\bar{z}^{\prime}\;K(\rho,z;z^{\prime}) \varphi(z^{\prime}), \tag{3.12}\]
in which we have transformed the boundary to the plane and the source becomes \(\mathcal{J}(z)=\varphi(z)\). With the help of the AdS/CFT propagator, one can deduce the CFT two-point function in a quick
way. For example, the AdS\({}_{3}\) onshell action can be written as
\[S^{\rm{onshell}}_{\rm{AdS}_{3}} = \frac{1}{2}\int_{M_{2}}d^{2}z\:R^{2}\sqrt{1+R^{2}}\:(\varphi(\rho, z)\partial_{\rho}\varphi(\rho,z))_{\rho=R} \tag{3.13}\] \[= -\frac{\Delta}{2\pi}\int_{M_{2}}\int_{M_{2}}d^{2}zd^{2}z^{\prime} \ \frac{\varphi(z)\varphi(z^{\prime})}{|z-z^{\prime}|^{2\Delta}}, \tag{3.14}\]
in which, in the second line, we have used the expression (3.12) and the contraction relation (A.24) for the propagators. Following a similar procedure, one can also deduce \(S^{\rm{ct}}_{\rm{AdS}_{3}}\), and we have
\[S^{\rm{ct}}_{\rm{AdS}_{3}} = -\frac{1}{2}(\Delta-2)\int_{M_{2}}d^{2}z\:R^{2}\:(\varphi(\rho,z )\varphi(\rho,z))_{\rho=R} \tag{3.15}\] \[= -\frac{\Delta-2}{2\pi}\int_{M_{2}}\int_{M_{2}}d^{2}zd^{2}z^{ \prime}\ \frac{\varphi(z)\varphi(z^{\prime})}{|z-z^{\prime}|^{2\Delta}}, \tag{3.16}\]
therefore, according to the AdS/CFT dictionary, the renormalised two-point function now becomes
\[\langle\mathcal{O}(z)\mathcal{O}(z^{\prime})\rangle=-\frac{\delta^{2}S^{\rm{ ren}}_{\rm{AdS}_{3}}}{\delta\varphi(z)\delta\varphi(z^{\prime})}=\frac{c_{ \Delta}}{|z-z^{\prime}|^{2\Delta}}, \tag{3.17}\]
where \(c_{\Delta}\) takes the value 3
Footnote 3: For the massive case, we have \(c_{\Delta}=\frac{2(\Delta-1)N_{\Delta}}{\pi}\).
\[c_{\Delta}=\frac{2(\Delta-1)}{\pi}. \tag{3.18}\]
### Holographic Dictionary for Milne
In this section we turn to scalar fields in Milne coordinates and then proceed to perform the holographic renormalisation for Minkowski spacetime. The action for the massive scalar field is
\[S=\frac{1}{2}\int_{0}^{\infty}d\tau\int_{0}^{\infty}d\rho\int dzd\bar{z}\sqrt{ -G}\left(\left(\partial\Phi\right)^{2}+M^{2}\Phi^{2}\right), \tag{3.19}\]
in which the metric \(G\) is given by (2.7), \(\Phi\) is the scalar field, and we have restricted the integration region to \(\mathcal{A}^{+}\). As usual we can express the onshell action as the exact term
\[S^{\rm{onshell}}=\frac{1}{2}\int_{0}^{\infty}d\tau\int_{0}^{\infty}d\rho\int dzd \bar{z}\sqrt{-G}D^{\mu}(\Phi\partial_{\mu}\Phi), \tag{3.20}\]
which can be expressed as boundary terms; here \(D^{\mu}=\frac{1}{\sqrt{-G}}\partial_{\nu}\sqrt{-G}G^{\nu\mu}\). The philosophy of the celestial holography approach is to foliate the spacetime with spacelike surfaces, and throughout this section we will work in this approach, analysing divergences at the spatial boundaries of each slice.
Accordingly, let us focus on the radial boundary as \(\rho\to\infty\). Using the Milne form of the metric the onshell boundary terms are
\[S^{\rm{onshell}}=\frac{1}{2}\int_{0}^{\infty}d\tau\tau\int_{\partial AdS_{3}}d \Sigma^{i}\Phi(\tau,x^{i})\partial_{i}\Phi(\tau,x^{i}) \tag{3.21}\]
where the second integral is expressed in terms of the boundary of the Euclidean AdS\({}_{3}\) metric (3.2). Here we should note that, strictly speaking, the values of the onshell action shown in (3.20) and (3.21) are not the same, since we have ignored the integrals over the spatial directions at the fixed hyperboloids \(\tau=0\) and \(\tau=+\infty\). After taking the other Milne wedge \(\mathcal{A}^{-}\) into consideration, the difference is determined by the integral over \(\Phi(\tau=\pm\infty,\rho,z,\bar{z})\partial_{\tau}\Phi(\tau=\pm\infty,\rho,z,\bar{z})\), in which \(\Phi(\tau=\pm\infty,\rho,z,\bar{z})\) are the initial and final data imposed for a given physical system, since \(\tau=\pm\infty\) are null boundaries of Minkowski space. If one proposes that the initial and final states of the physical system are the vacuum, then we have \(\Phi(\tau=\pm\infty,\rho,z,\bar{z})=0\), and there is no difference between (3.20) and (3.21). For scattering processes, the initial and final states are the in- and outgoing states, and one can assume that the difference contributes to the action in a small and finite way, leading to a proper \(i\epsilon\) prescription for the quantum theory [75]. One can see the formal treatment of the integral along the null boundaries and the renormalisation of the phase space in the work [76, 77, 78]. Here we will only study the onshell action in the form of (3.21), whose explicit expression is
\[S^{\rm onshell}=\frac{1}{2}\int_{0}^{\infty}d\tau\tau\int_{\partial AdS_{3}}d \Omega_{2}R^{2}(1+R^{2})^{\frac{1}{2}}\left(\Phi(\tau,\rho,z,\bar{z})\partial_ {\rho}\Phi(\tau,\rho,z,\bar{z})\right)_{\rho=R} \tag{3.22}\]
where the boundary is regulated at \(\rho=R\) and \(d\Omega_{2}\) is the integration measure over the unit two sphere.
Given the onshell solution, following the expansion (2.11), we can further decompose it into \(k\)-mode components by introducing the \(k\)-mode function \(f(\tau,\rho,z,\bar{z};k)\delta(\omega-k)=f(\tau,\rho,z,\bar{z};k,\omega)\), so that we have
\[\Phi(\tau,\rho,z,\bar{z})=\int_{\mathcal{P}}dkf(\tau,\rho,z,\bar{z};k) \tag{3.23}\]
and we can then use this decomposition of the fields to transform the onshell action into \(k\)-mode space, after rewriting all the fields in the action in terms of \(f\). More precisely, we can define the \((k,k^{\prime})\) mode of the action
\[S^{\rm onshell}(k,k^{\prime}):=\frac{1}{2}\int_{0}^{\infty}\tau d\tau\int_{ \partial AdS_{3}}d\Omega_{2}\:R^{3}(f(\tau,\rho,z,\bar{z};k)\partial_{\rho}f( \tau,\rho,z,\bar{z};k^{\prime}))_{\rho=R} \tag{3.24}\]
and one can check that at large \(R\) we have
\[S^{\rm onshell}=\int_{\mathcal{P}}dk\int_{\mathcal{P}^{\prime}}dk^{\prime}\:S ^{\rm onshell}(k,k^{\prime}), \tag{3.25}\]
where the double integral over the set \(\mathcal{P}\) comes from the fact that the onshell action for free particles is quadratic in \(\Phi\). Moreover, we can treat the \((k,k^{\prime})\) mode of the action \(S^{\rm onshell}(k,k^{\prime})\) as the onshell action which describes the interaction between a pair of modes \((k,k^{\prime})\). Later we will see that \(S^{\rm onshell}(k,k^{\prime})\) is proportional to a delta function if the domain of the integral over \(k\) is such that \(\beta_{+}(k)\in i\mathbb{R}^{+}\); thus we have
\[S^{\rm onshell}(k,k^{\prime})=\delta(k-k^{\prime})S^{\rm onshell}(k,k) \tag{3.26}\]
and for simplicity we denote \(S^{\rm onshell}(k,k)\) as \(S^{\rm onshell}(k)\). In this convention, the onshell action can then be expressed as
\[S^{\rm onshell}=\int_{\mathcal{P}}dk\:S^{\rm onshell}(k), \tag{3.27}\]
which will be used as the standard form of the \(k\) mode decomposition of the action for free particles.
Before performing the renormalisation of \(S^{\rm onshell}(k)\), let us consider asymptotic solutions of the equation (2.8) as \(\rho\to\infty\). The generic form of the asymptotic solution is
\[\phi_{l}(\rho;k) = \phi_{l}(\rho;\beta_{+}(k))+\phi_{l}(\rho;\beta_{-}(k))\] \[\equiv \rho^{\beta_{+}-1}\left(\phi_{l}^{+}(k)+\mathcal{O}\left(\frac{1}{\rho^{2}}\right)\right)+\rho^{\beta_{-}-1}\left(\phi_{l}^{-}(k)+\mathcal{O}\left(\frac{1}{\rho^{2}}\right)\right) \tag{3.28}\]
where \((\beta_{+}+\beta_{-})=0\) and without loss of generality we will assume that \(\mathrm{Re}(\beta_{+})\geq\mathrm{Re}(\beta_{-})\). Instead of using \(l\) modes on the sphere we can express a general solution for the spatial part of the scalar for fixed \(k\) as
\[\phi(\rho,z,\bar{z};k)=\phi(\rho,z,\bar{z};\beta_{+})+\phi(\rho,z,\bar{z};\beta _{-}) \tag{3.29}\]
where the asymptotics of each solution are of the form
\[\phi\big{(}\rho,z,\bar{z};\beta_{\pm}\big{)}=\rho^{\beta_{\pm}-1}\left(\phi^{\pm} \big{(}z,\bar{z};k\big{)}+\mathcal{O}\left(\frac{1}{\rho^{2}}\right)\right). \tag{3.30}\]
Combining modes of a fixed value of \(k\) we obtain
\[f(\tau,\rho,z,\bar{z};k) = f_{+}(\tau,k)\phi(\rho,z,\bar{z};k)+f_{-}(\tau,k)\tilde{\phi}( \rho,z,\bar{z};k) \tag{3.31}\] \[= \tau^{\beta_{+}-1}\phi(\rho,z,\bar{z};\beta_{+})+\tau^{\beta_{-}- 1}\tilde{\phi}(\rho,z,\bar{z};\beta_{+})\] \[+\tau^{\beta_{+}-1}\phi(\rho,z,\bar{z};\beta_{-})+\tau^{\beta_{-} -1}\tilde{\phi}(\rho,z,\bar{z};\beta_{-})\]
where the fields \(\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm})\) have the properties (3.29) and (3.30); we will see the explicit expressions for them in the next section.
Now let us return to the four-dimensional \(k\) mode action. The regulated action for modes of fixed \(k\) contains the terms
\[S^{\rm onshell}(k) = \frac{1}{2}\int_{0}^{\infty}d\tau\tau\int_{\partial AdS_{3}}d \Omega_{2}\left((\beta_{+}-1)R^{2\beta_{+}}\Phi_{s}(\tau,z,\bar{z};k)^{2}+( \beta_{-}-1)R^{2\beta_{-}}\Phi_{v}(\tau,z,\bar{z};k)^{2}\right. \tag{3.33}\] \[\left.-2\Phi_{s}(\tau,z,\bar{z};k)\Phi_{v}(\tau,z,\bar{z};k)+ \cdots\right),\]
where the boundary of the AdS slice is regulated at \(\rho=R\) and the ellipses denote terms that are suppressed by at least \(1/R^{2}\). We introduce a shorthand notation for the combinations of terms in the asymptotic radial expansions:
\[\Phi_{s}(\tau,z,\bar{z};k) = \tau^{\beta_{+}-1}\phi^{+}(z,\bar{z};k)+\tau^{\beta_{-}-1}\tilde{ \phi}^{+}(z,\bar{z};k) \tag{3.34}\] \[\Phi_{v}(\tau,z,\bar{z};k) = \tau^{\beta_{+}-1}\phi^{-}(z,\bar{z};k)+\tau^{\beta_{-}-1}\tilde {\phi}^{-}(z,\bar{z};k)\]
Let us suppose that \({\rm Re}(\beta_{+})>0\), in which case \({\rm Re}(\beta_{-})<0\). In this case the first term in (3.33) will be divergent as \(R\to\infty\), but the second term will vanish; all power-law divergences will be of the form \(R^{2\beta_{+}-2n}\) with \(n\) an integer.
As above, we can remove the divergences with counterterms. These counterterms should be expressed in terms of quantities that are intrinsic to the regulated boundary, and they should be covariant with respect to the bulk diffeomorphisms at \(\rho=R\), so that \(f(\tau,R,z,\bar{z})\) transforms as a scalar field. In fact, the background metric here already uses a preferred slicing of the four-dimensional metric, i.e. a specific coordinate choice for time, and therefore we would not expect the counterterms to preserve the full three-dimensional covariance of the boundary. In practice this means that the counterterms are expressed in the form
\[S^{\rm ct}(k) = \int_{0}^{\infty}d\tau\frac{1}{\tau}\int_{\partial AdS_{3}}d^{2}z\sqrt{-\bar{\gamma}}\left(a_{1}(k)f(\tau,R,z,\bar{z})^{2}+a_{2}(k)\left(\partial_{z}\partial_{\tau}f(\tau,R,z,\bar{z})\right)^{2}+\cdots\right) \tag{3.35}\] \[= \int_{0}^{\infty}d\tau\tau\int_{\partial AdS_{3}}R^{2}d\Omega_{2}\left(a_{1}(k)f(\tau,R,z,\bar{z})^{2}+a_{2}(k)\left(\partial_{z}\partial_{\tau}f(\tau,R,z,\bar{z})\right)^{2}+\cdots\right)\]
where \(\bar{\gamma}_{\tau\tau}=-1,\ \bar{\gamma}_{z\bar{z}}=R^{2}\tau^{2}\gamma_{z\bar{z}}\) is the induced metric on the boundary of the Milne wedge at \(\rho=R\) (with the curvature radius being independent of \(\tau\)), and the derivative \(\partial_{z}\) acts only on the celestial sphere. As we can see, the covariance under bulk diffeomorphisms at the surface \(\rho=R\) exhibited in the first line is broken by fixing the gauge of the coordinates in the second line. By construction these counterterms will remove the divergences, because the analytic structure on the celestial sphere is precisely as described above for AdS\({}_{3}\)/CFT\({}_{2}\). Indeed, matching with the dictionary above one obtains
\[\Delta_{k}=1+\beta_{+} \tag{3.36}\]
and furthermore one can deduce the factor corresponding to the first term in the counterterm to be
\[a_{1}(k)=\frac{2-\Delta_{k}}{2} \tag{3.37}\]
thus the finite terms in the renormalized action include
\[S^{\rm ren}(k)=-\beta_{+}\int_{0}^{\infty}d\tau\tau\int_{\partial AdS_ {3}}d\Omega_{2}\,R^{2}\;\Phi_{s}(\tau,z,\bar{z};k)\Phi_{v}(\tau,z,\bar{z};k) \tag{3.38}\] \[=-\beta_{+}\int_{0}^{\infty}d\tau\tau\int_{\partial AdS_{3}}d \Omega_{2}(\tau^{\beta_{+}-1}\phi^{+}(z,\bar{z};k)+\tau^{\beta_{-}-1}\tilde{ \phi}^{+}(z,\bar{z};k))(\tau^{\beta_{+}-1}\phi^{-}(z,\bar{z};k)+\tau^{\beta_{-} -1}\tilde{\phi}^{-}(z,\bar{z};k)).\]
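To see schematically which products survive the \(\tau\) integral in (3.38), the following sympy sketch (our own shorthand, not part of the original derivation) lists the \(\tau\)-dependence of the four terms for \(\beta_{+}=ip\) on the principal series:

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
p = sp.symbols('p', positive=True)
bp, bm = sp.I*p, -sp.I*p     # beta_+ = ip, beta_- = -ip

# tau-dependence of the four products in the integrand of (3.38),
# including the measure factor tau from "d tau tau":
for name, t in [('phi+ phi-  ', tau**(bp - 1)*tau**(bp - 1)),
                ('phi+ tphi- ', tau**(bp - 1)*tau**(bm - 1)),
                ('tphi+ phi- ', tau**(bm - 1)*tau**(bp - 1)),
                ('tphi+ tphi-', tau**(bm - 1)*tau**(bm - 1))]:
    print(name, sp.simplify(tau*t))
# tau^{+-2ip-1} oscillates in ln(tau) and integrates to delta(2p) = 0 for p > 0,
# while the cross terms give 1/tau, whose integral produces the delta(k - k')
# once the labels in S^onshell(k, k') are kept distinct, cf. (2.28), (2.29).
```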
As we have discussed, there would be additional finite terms in the action if \(\beta_{+}\) were real and integer valued, but this is not the case of interest here. According to the dictionary given in (3.1), one can see that \(\beta_{+}\) should be a pure imaginary number, \(\beta_{+}\in i\mathbb{R}\), in order to ensure that the CFT correlation functions given by the right hand side of (3.1) are real. Moreover, given the relation (3.36), we know that the scale dimension of the operator on the celestial sphere should take values on the principal series, \(\Delta_{k}=1+i\mathbb{R}\)4.
Footnote 4: In fact \(\Delta_{k}=1+i\mathbb{R}^{+}\) if \(\beta_{+}\) takes values in \(i\mathbb{R}^{+}\), and we assume that such \(k\) modes form the necessary complete basis when one performs the mode decomposition, following the discussion in section 2. There are also shadow operators given by the shadow transformation \(\Delta_{k}\to 2-\Delta_{k}\), so that the scale dimension covers the whole principal series. From the bulk side, this corresponds to a Legendre transformation of the action, which switches the roles of source and vev.
Using the orthogonality relations for the \(\tau\) eigenfunctions (2.28), (2.29) we can explicitly compute the \(\tau\) integrals as
\[S^{\rm ren}(k)=-\beta_{+}\int_{\partial AdS_{3}}d\Omega_{2}(\phi^{+}(z,\bar{z };k)\tilde{\phi}^{-}(z,\bar{z};k)+\tilde{\phi}^{+}(z,\bar{z};k)\phi^{-}(z,\bar {z};k))+\cdots \tag{3.39}\]
and one can also see that the \(\delta(k-k^{\prime})\) comes out if one chooses to use \(S^{\rm onshell}(k,k^{\prime})\) rather than \(S^{\rm onshell}(k)\). From this expression we can read off that there are two operators of dimension \(\Delta_{k}\), with corresponding expectation values and sources:
\[\langle{\cal O}(z,\bar{z};k)\rangle = -2i\beta_{+}\phi^{-}(z,\bar{z};k)\qquad{\cal J}(z,\bar{z};k)= \tilde{\phi}^{+}(z,\bar{z};k) \tag{3.40}\] \[\langle\tilde{\cal O}(z,\bar{z};k)\rangle = -2i\beta_{+}\tilde{\phi}^{-}(z,\bar{z};k)\qquad\tilde{\cal J}(z, \bar{z};k)=\phi^{+}(z,\bar{z};k)\]
These two operators have the same two-dimensional CFT scaling dimension, but are associated with different evolution in the \(\tau\) direction.
A generic massless field \(\Phi\) will be expressed as an integral over \(k\), with the corresponding renormalized action being
\[S^{\rm ren} = \int_{\cal P}dk\,S^{\rm ren}(k)\] \[= -\int_{\cal P}dk\;\beta_{+}(k)\int_{\partial AdS_{3}}d\Omega_{2}(\phi^{+}(z,\bar{z};k)\tilde{\phi}^{-}(z,\bar{z};k)+\tilde{\phi}^{+}(z,\bar{z};k)\phi^{-}(z,\bar{z};k)) \tag{3.41}\]
The field \(\Phi\) is thus dual to two continuous series of operators, labelled by \(k\), whose sources and expectation values are given above in (3.40).
### Correlation Functions
In this section we study correlation functions in the flat/CFT context in a more precise way. Propagators for the free fields, \(({\cal O}{\cal O})\) and \((\tilde{\cal O}\tilde{\cal O})\), are deduced and also represented in the language of diagrams. For higher-point correlation functions in an interacting theory, the interactions are described by internal vertices of the diagrams. We use the \(\Phi^{3}(X)\) interaction as an example to see how the operators of different scale dimensions are coupled to each other.
Following the previous approach in the context of AdS/CFT, we again choose to express the bulk fields in terms of the bulk-boundary propagator
\[\Phi(\tau,\rho,z,\bar{z}) = \frac{1}{2\sqrt{2}}\int_{\cal P}dk\int_{M_{2}}dz^{\prime}d\bar{z}^{\prime}\left(\tau^{\beta_{+}-1}K(\rho,z;\,z^{\prime},\beta_{+})\phi^{+}(z^{\prime},\bar{z}^{\prime};k)\right. \tag{3.42}\] \[\left.+\tau^{\beta_{-}-1}K(\rho,z;\,z^{\prime},\beta_{+})\tilde{\phi}^{+}(z^{\prime},\bar{z}^{\prime};k)\right),\]
in which \(\partial AdS_{3}=M_{2}\), and \(\phi^{+}(z,\bar{z};k)\), \(\tilde{\phi}^{+}(z,\bar{z};k)\) can be treated as a pair of sources on the boundary, as introduced in (3.40). The extra factor \(1/\sqrt{2}\) is introduced here to make the renormalisation factor the same as in the AdS/CFT case, since we have two modes from the decomposition; one can also treat it as a rescaling of the propagator. Given this expression, from the onshell action
\[S^{\rm onshell}(\Phi)=-\frac{1}{2}\int_{0}^{\infty}d\tau\int_{M_{2}}d^{2}z\;R^ {3}\;(\Phi(\tau,\rho,z,\bar{z})\partial_{\rho}\Phi(\tau,\rho,z,\bar{z}))_{\rho=R} \tag{3.43}\]
we have the \(k\)-mode component
\[S^{\rm onshell}(k)=\frac{\Delta_{k}}{2\pi}\int_{M_{2}}\int_{M_{2}}d^{2}zd^{2}z ^{\prime}\frac{\phi^{+}(z,\bar{z};k)\tilde{\phi}^{+}(z,\bar{z};k)}{|z-z^{ \prime}|^{2\Delta_{k}}}, \tag{3.44}\]
in which we have integrated out the \(\tau\) variable and the orthogonality relations for the \(\tau\)-modes have also been applied. After performing the holographic renormalisation introduced in the previous section, the counterterm is then deduced to be
\[S^{\rm ct}(k)=-\frac{1}{2}(\Delta_{k}-2)\int_{0}^{\infty}d\tau\int_{M_{2}}d^{ 2}z\;R^{2}\;(f(\tau,\rho,z,\bar{z})f(\tau,\rho,z,\bar{z}))_{\rho=R}+\cdots \tag{3.45}\]
with \(k\)-mode
\[S^{\rm ct}(k)=-\frac{1}{2\pi}(\Delta_{k}-2)\int_{M_{2}}\int_{M_{2}}d^{2}zd^{2}z^{\prime}\frac{\phi^{+}(z,\bar{z};k)\tilde{\phi}^{+}(z^{\prime},\bar{z}^{\prime};k)}{|z-z^{\prime}|^{2\Delta_{k}}}. \tag{3.46}\]
Given the flat/CFT dictionary, in order to obtain the two-point function of the operator \({\cal O}\), we need to vary with respect to the corresponding source \({\cal J}=\tilde{\phi}^{+}\) twice, and therefore get
\[\langle{\cal O}(z,\bar{z};k){\cal O}(z^{\prime},\bar{z}^{\prime};k)\rangle=\frac{i\delta^{2}S^{\rm ren}(k)}{\delta\tilde{\phi}^{+}(z)\delta\tilde{\phi}^{+}(z^{\prime})}=\int_{M_{2}}d^{2}z^{\prime\prime}\frac{\delta\phi^{+}(z^{\prime\prime},\bar{z}^{\prime\prime};k)}{\delta\tilde{\phi}^{+}(z,\bar{z};k)}\;\frac{c_{k}}{|z^{\prime\prime}-z^{\prime}|^{2\Delta_{k}}}, \tag{3.47}\]
in which \(c_{k}=2i(1-\Delta_{k})/\pi\). The variation of one function with respect to another is not a priori well defined, but we should at least note that its value cannot be zero, since \(\phi^{+}\) and \(\tilde{\phi}^{+}\) are not independent. Expanding them in terms of spherical harmonics, one sees that the variations of the two sources with respect to the basis can be written as
\[\delta\phi^{+}(z,\bar{z};k)=\sum_{l\geqslant 0,m}a^{+}_{lm}(k)\delta Y^{l}_{m}(z,\bar{z})\qquad\delta\tilde{\phi}^{+}(z,\bar{z};k)=\sum_{l\geqslant 0,m}a^{-}_{lm}(k)\delta Y^{l}_{m}(z,\bar{z}) \tag{3.48}\]
Physically, one can treat the deviation of the basis \(\delta Y^{l}_{m}\) from the spherical harmonics as a deformation of the background geometry away from the purely flat case. The coefficients \(a^{+}_{lm}(k)\) come from the decomposition of the bulk field \(\Phi\) and are determined by assigning data on the Cauchy hypersurface chosen at the initial time 5. More discussion of the coefficients can be found
Figure 2: Propagators between two copies of operators are illustrated in the figure. Each disk represents one copy of the AdS\({}_{3}\) hyperboloid with its \(S^{2}\) boundary drawn as a circle. \(+\) and \(-\) represent that the operators are obtained from the decomposition of the \(\tau^{\beta_{+}-1}\) or \(\tau^{\beta_{-}-1}\) modes.
in appendix C and section 4. Therefore one can define the variation between the two sources as
\[\frac{\delta\phi^{+}(z,\bar{z};k)}{\delta\tilde{\phi}^{+}(z^{\prime},\bar{z}^{ \prime};k)}:=\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a_{lm}^{+}(k)}{a_{lm}^{-}(k)} \delta(z-z^{\prime}), \tag{3.49}\]
in which the factor \(N_{k}=\sum_{l\neq 0,m}1\) is introduced for normalization; one can interpret it as the measure of the discrete parameter space \((l,m)\). Following this convention, we obtain the two-point function
\[\langle{\cal O}(z,\bar{z};k){\cal O}(z^{\prime},\bar{z}^{\prime};k)\rangle= \frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a_{lm}^{+}(k)}{a_{lm}^{-}(k)}\frac{c_{k}} {|z-z^{\prime}|^{2\Delta_{k}}}, \tag{3.50}\]
and
\[\langle\tilde{\cal O}(z,\bar{z};k)\tilde{\cal O}(z^{\prime},\bar{z}^{\prime}; k)\rangle=\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a_{lm}^{-}(k)}{a_{lm}^{+}(k)} \frac{c_{k}}{|z-z^{\prime}|^{2\Delta_{k}}}. \tag{3.51}\]
These two kinds of propagators carry the dynamical information of the physical system in the bulk. From the boundary point of view, they describe the coupling of the two series of operators, and we represent this relation in Figure 2. For higher point correlation functions, one needs to take the interactions of particles into consideration. Suppose that we have turned on the \(\Phi^{3}\) interaction with coupling constant \(\lambda_{3}\); then we can write the action as
\[\lambda_{3}\int d^{4}X\;\Phi^{3}(X) = \lambda_{3}\int_{AdS_{3}}d^{3}x\sqrt{g}\int_{\cal P}dk_{1}dk_{2}dk_{3}\int_{0}^{\infty}d\tau\frac{1}{\tau}(\tau^{\beta_{+}^{1}+\beta_{+}^{2}-\beta_{+}^{3}+1}\phi(z,\bar{z},\rho;k_{1}) \tag{3.52}\] \[\times\phi(z,\bar{z},\rho;k_{2})\tilde{\phi}(z,\bar{z},\rho;k_{3})+\tau^{\beta_{+}^{1}-\beta_{+}^{2}-\beta_{+}^{3}+1}\phi(z,\bar{z},\rho;k_{1})\tilde{\phi}(z,\bar{z},\rho;k_{2})\tilde{\phi}(z,\bar{z},\rho;k_{3})+\cdots),\]
in which we have decomposed the fields into an integral over \(k\)-modes and collected all the \(\tau\)-modes. To discuss the integral over \(\tau\) modes in a more precise way, we split the value of \(\beta_{+}\), viewed as a complex number, into its real and imaginary parts,
\[\sqrt{1+k_{i}^{2}}\equiv\beta_{+}^{i}=\gamma_{i}+ip_{i}, \tag{3.53}\]
therefore the integral, taking the first one \(\phi_{1}\phi_{2}\tilde{\phi}_{3}\) for example, becomes
\[\int_{0}^{\infty}d\tau\frac{1}{\tau}\;\tau^{\gamma_{1}+\gamma_{2}-\gamma_{3} +1}e^{i(p_{1}+p_{2}-p_{3})\ln\tau}\sim\delta(p_{1}+p_{2}-p_{3}) \tag{3.54}\]
after imposing the condition for the real part
\[\gamma_{1}+\gamma_{2}-\gamma_{3}=-1. \tag{3.55}\]
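As a cross-check of (3.54) and (3.55), here is the intermediate step made explicit (using only the substitution \(s=\ln\tau\)):

\[\int_{0}^{\infty}\frac{d\tau}{\tau}\;\tau^{\gamma_{1}+\gamma_{2}-\gamma_{3}+1}e^{i(p_{1}+p_{2}-p_{3})\ln\tau}=\int_{-\infty}^{+\infty}ds\;e^{(\gamma_{1}+\gamma_{2}-\gamma_{3}+1)s}\,e^{i(p_{1}+p_{2}-p_{3})s}=2\pi\,\delta(p_{1}+p_{2}-p_{3}),\]

where the last equality holds precisely when the real exponent \(\gamma_{1}+\gamma_{2}-\gamma_{3}+1\) vanishes; otherwise the \(s\) integral diverges rather than producing a delta function.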
Relation (3.55) tells us that, in order to describe the \(\Phi^{3}\) interaction, extra modes apart from the principal series should be taken into consideration. In this case, the interacting part of the action can be reduced to
\[\delta(p_{1}+p_{2}-p_{3})\;\lambda_{3}\int_{AdS_{3}}d^{3}x\sqrt{g}\;\phi(z, \bar{z},\rho;k_{1})\phi(z,\bar{z},\rho;k_{2})\tilde{\phi}(z,\bar{z},\rho;k_{3}) \tag{3.56}\]
Figure 3: For the \(\lambda_{3}\Phi^{3}(X)\) interaction, two kinds of \(k\)-mode contribution to the three point function \(\langle{\cal O}{\cal O}{\cal O}\rangle\) are shown in the figure. The diagram on the left represents mode contributions like \(\phi_{1}\phi_{2}\tilde{\phi}_{3}\) while mode contributions like \(\phi_{1}\tilde{\phi}_{2}\tilde{\phi}_{3}\) are shown on the right.
and its contribution to the three-point function is shown on the left hand side of Figure 3. Moreover, one can see that such three-point functions on the boundary are extremal, \(\Delta_{3}=\Delta_{1}+\Delta_{2}\)[79, 80], after employing the flat/CFT dictionary (3.36). For the \(\phi_{1}\tilde{\phi}_{2}\tilde{\phi}_{3}\) contribution in (3.52), if one imposes the condition
\[\gamma_{1}-\gamma_{2}-\gamma_{3}=-1 \tag{3.57}\]
then the interaction takes the form
\[\delta(p_{1}-p_{2}-p_{3})\;\lambda_{3}\int_{AdS_{3}}d^{3}x\sqrt{g}\phi(z,\bar{ z},\rho;k_{1})\tilde{\phi}(z,\bar{z},\rho;k_{2})\tilde{\phi}(z,\bar{z},\rho;k_{3}) \tag{3.58}\]
and the diagram is shown on the right hand side of Figure 3. In the figure, the internal vertex is inserted on the \(-\) disk since we are calculating the three point function \(\langle\mathcal{O}\mathcal{O}\mathcal{O}\rangle\) generated by the source \(\tilde{\phi}^{+}\) living on the \(-\) disk. For the three point function \(\langle\tilde{\mathcal{O}}\tilde{\mathcal{O}}\tilde{\mathcal{O}}\rangle\) we can also show the \(\phi\phi\tilde{\phi}\) and \(\phi\tilde{\phi}\tilde{\phi}\) interactions in terms of diagrams, but now the internal vertex is inserted on the \(+\) disk, as shown in Figure 4. One can also check that such three-point functions are extremal, \(\tilde{\Delta}_{3}=\tilde{\Delta}_{1}+\tilde{\Delta}_{2}\), in the sense of the shadow operators \(\tilde{\Delta}=2-\Delta\).
Following a similar procedure, one can also study the \(\Phi^{4}\) interaction, written as
\[\lambda_{4}\int d^{4}X\Phi^{4}(X), \tag{3.59}\]
in which the coupling constant \(\lambda_{4}\) now becomes dimensionless. It is worthwhile to note that, unlike the \(\Phi^{3}\) interaction, all the modes on the principal series can contribute to the interaction, since the relation
\[\pm\gamma_{1}\pm\gamma_{2}\pm\gamma_{3}\pm\gamma_{4}=0 \tag{3.60}\]
is satisfied if one sets \(\gamma_{i}=0\), i.e. \(\Delta_{i}=1+ip_{i}\). Corresponding diagrams for four-point functions can also be drawn, and it is interesting to see that the diagrams introduced for the \(\Phi^{3}\) and \(\Phi^{4}\) interactions can be treated as intermediate between Feynman and Witten diagrams. If one collapses the two disks in the diagram, i.e. ignores the dynamical or causal structure of the system, then one obtains the Witten diagram, which is often illustrated as a single disk. On the other hand, if one sums over all the diagrams of different \(k\) modes, then one recovers the Feynman diagrams, which enable us to study the scattering amplitudes of particles.
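Concretely, the \(\tau\) counting behind this statement parallels the cubic case (a worked step under the same measure conventions as in (3.52), with \(\sqrt{-G}\,d^{4}X=\tau^{3}d\tau\sqrt{g}\,d^{3}x\) and each field contributing \(\tau^{\pm\beta_{+}^{i}-1}\)):

\[\int_{0}^{\infty}d\tau\;\tau^{3}\prod_{i=1}^{4}\tau^{\pm\beta_{+}^{i}-1}=\int_{0}^{\infty}\frac{d\tau}{\tau}\;\tau^{\pm\gamma_{1}\pm\gamma_{2}\pm\gamma_{3}\pm\gamma_{4}}\;e^{i(\pm p_{1}\pm p_{2}\pm p_{3}\pm p_{4})\ln\tau}\sim\delta(\pm p_{1}\pm p_{2}\pm p_{3}\pm p_{4}),\]

so the real-part condition (3.60) is automatically satisfied on the principal series \(\gamma_{i}=0\), and every mode can participate in the quartic vertex.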
### Holographic Dictionary for Onshell Scalar Fields
In this section we collate the results above and summarise the process for reading off the holographic data corresponding to an onshell scalar field \(\Phi(\tau,\rho,z,\bar{z})\). In general the flat/CFT dictionary is given by
\[\exp\left(iS^{\rm ren}(\Phi)\right)=\Big{\langle}\;\exp\;\int_{S^{2}}\int_{ \mathcal{P}}\left(\mathcal{J}_{\Delta}\;\mathcal{O}_{\Delta}+\tilde{\mathcal{ J}}_{\Delta}\tilde{\mathcal{O}}_{\Delta}\right)\;\Big{\rangle}_{CFT}. \tag{3.61}\]
To map the data between the two sides, we first express the scalar field as a linear superposition of frequency modes, i.e.
\[\Phi(\tau,\rho,z,\bar{z})=\int_{\mathcal{P}}dk\,\tau^{\beta_{+}-1}\phi(\rho,z,\bar{z};k)+\int_{\mathcal{P}}dk\,\tau^{\beta_{-}-1}\tilde{\phi}(\rho,z,\bar{z};k) \tag{3.62}\]
where, following the mode decomposition of section 2, the two classes of modes can be expressed as
\[\phi(\rho,z,\bar{z};k) = \phi(\rho,z,\bar{z};\beta_{+})+\phi(\rho,z,\bar{z};\beta_{-}); \tag{3.63}\] \[\tilde{\phi}(\rho,z,\bar{z};k) = \tilde{\phi}(\rho,z,\bar{z};\beta_{+})+\tilde{\phi}(\rho,z,\bar{z};\beta_{-}).\]
These fields have asymptotic expansions
\[\phi(\rho,z,\bar{z};k) = \rho^{\beta_{+}-1}\phi^{+}(z,\bar{z};k)+\rho^{\beta_{-}-1}\phi^{-}(z,\bar{z};k)+\cdots \tag{3.64}\] \[\tilde{\phi}(\rho,z,\bar{z};k) = \rho^{\beta_{+}-1}\tilde{\phi}^{+}(z,\bar{z};k)+\rho^{\beta_{-}-1}\tilde{\phi}^{-}(z,\bar{z};k)+\cdots\]
from which one can read off expectation values and sources according to:
\[\langle\mathcal{O}(z,\bar{z};k)\rangle = -2i\beta_{+}\phi^{-}(z,\bar{z};k)\qquad\mathcal{J}(z,\bar{z};k)=\tilde{\phi}^{+}(z,\bar{z};k) \tag{3.65}\] \[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\rangle = -2i\beta_{+}\tilde{\phi}^{-}(z,\bar{z};k)\qquad\tilde{\mathcal{J}}(z,\bar{z};k)=\phi^{+}(z,\bar{z};k)\]
The decomposition of the field (3.62) follows from the orthogonality relations:
\[\phi(\rho,z,\bar{z};k) = \frac{1}{2\pi}\int_{0}^{\infty}d\tau\;\tau^{\beta_{-}}\Phi(\tau,\rho,z,\bar{z}) \tag{3.66}\] \[\tilde{\phi}(\rho,z,\bar{z};k) = \frac{1}{2\pi}\int_{0}^{\infty}d\tau\;\tau^{\beta_{+}}\Phi(\tau,\rho,z,\bar{z}).\]
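As a consistency check (a short worked step, writing \(\beta_{\pm}(k)=\pm ip_{k}\) purely imaginary on the principal series), substituting the decomposition (3.62) into these projections gives

\[\frac{1}{2\pi}\int_{0}^{\infty}d\tau\;\tau^{\beta_{-}(k)}\,\tau^{\beta_{+}(k^{\prime})-1}=\frac{1}{2\pi}\int_{-\infty}^{+\infty}ds\;e^{i(p_{k^{\prime}}-p_{k})s}=\delta(p_{k}-p_{k^{\prime}}),\]

which equals \(\delta(k-k^{\prime})\) up to the Jacobian \(|dp_{k}/dk|\), while the cross terms \(\tau^{\beta_{-}(k)+\beta_{-}(k^{\prime})-1}\) produce \(\delta(p_{k}+p_{k^{\prime}})\), which vanishes for \(p_{k},p_{k^{\prime}}>0\); this is why the projections isolate \(\phi\) and \(\tilde{\phi}\) separately.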
To calculate the two-point function and reduce the data to a single AdS surface, we need to write out the expressions for \(\phi(\rho,z,\bar{z};\beta_{\pm})\) and \(\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm})\) explicitly. Given the AdS modes as the basis, \(\phi\), \(\tilde{\phi}\) are characterised by the coefficients \(a^{+}_{lm}(k)\), \(a^{-}_{lm}(k)\), written as
\[\phi(\rho,z,\bar{z};\beta_{\pm}) = \sum_{lm}a^{+}_{lm}(k)\;\phi_{l}(\rho;\beta_{\pm})Y^{l}_{m}(z,\bar{z}) \tag{3.67}\] \[\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm}) = \sum_{lm}a^{-}_{lm}(k)\;\phi_{l}(\rho;\beta_{\pm})Y^{l}_{m}(z,\bar{z}), \tag{3.68}\]
where we have chosen the normalisation of the spatial function as \(\phi_{l}(\rho;\beta_{+})=\rho^{\beta_{+}-1}+\cdots\). In this case, one can then write the sources in terms of spherical harmonic functions as
\[\phi^{+}(z,\bar{z};k) = \sum_{l,m}a^{+}_{lm}(k)Y^{l}_{m}(z,\bar{z}) \tag{3.69}\] \[\tilde{\phi}^{+}(z,\bar{z};k) = \sum_{l,m}a^{-}_{lm}(k)Y^{l}_{m}(z,\bar{z}). \tag{3.70}\]
and the two copies of propagators are given by
\[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{\prime};k)\rangle = \frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{+}_{lm}(k)}{a^{-}_{lm}(k)}\frac{c_{k}}{|z-z^{\prime}|^{2\Delta_{k}}} \tag{3.71}\] \[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\tilde{\mathcal{O}}(z^{\prime},\bar{z}^{\prime};k)\rangle = \frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{-}_{lm}(k)}{a^{+}_{lm}(k)}\frac{c_{k}}{|z-z^{\prime}|^{2\Delta_{k}}}. \tag{3.72}\]
These results hold for Minkowski spacetime. For asymptotically Minkowski spacetimes, one needs to carry out the harmonic analysis; the results are shown in Appendix C.
Moreover, for higher point functions and theories with interactions, it is convenient to represent the correlation functions using two copies of disks labeled by \(+\) and \(-\). For the correlators constructed
out of the operator \({\cal O}\), the interactions are described by internal vertices inserted on the \(-\) disk, while for the \(\tilde{\cal O}\) correlators, the vertices are inserted on the \(+\) disk, i.e. all the internal vertices of each diagram can only live on one of the disks. For the external legs, one that connects two points on a single disk is described by the standard AdS/CFT propagator, while for those that connect the two disks, we need to take into consideration the extra factors constructed out of the coefficients \(a^{\pm}_{lm}(k)\). If we assume that the legs between the two disks have directions and always flow into the internal points, the leg that starts from the \(i\)th external vertex on the \(+\) disk and ends at an internal point on the \(-\) disk will contribute a factor
\[i(+)\longrightarrow\bullet(-)\hskip 42.679134pt\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{+}_{lm}(k_{i})}{a^{-}_{lm}(k_{i})} \tag{3.73}\]
and the one that starts from the \(i\)th external vertex on the \(-\) disk and ends at an internal point on the \(+\) disk will contribute a factor
\[i(-)\longrightarrow\bullet(+)\hskip 42.679134pt\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{-}_{lm}(k_{i})}{a^{+}_{lm}(k_{i})}. \tag{3.74}\]
## Shock Waves and Their Holographic Interpretation
In this section we consider the holographic interpretation of a shock wave. In our context, the shock waves are scalar shocks that are either distributed on a spherical shell or localised along the null geodesic of a massless particle. The spherical shock wave describes a wave caused by a point-like source that then propagates homogeneously in spacetime. The second kind of shock wave can be treated as an approximation for signals travelling at the speed of light, like a laser beam 6. In fact, to keep photons trapped in the beam, one should take gravity effects into consideration, and it turns out such a shock wave will induce backreactions on the metric, as studied in [81, 82]; the massive case has also been studied perturbatively. In our situation, however, the shape of the shock waves will be less important, while the ingoing and outgoing behaviour of the wave will be crucial. To construct the shock wave solutions, we start from the Minkowski metric written as
Footnote 6: We assume this method can be generalised to gauge fields, or that we are dealing with a high-energy beam of bosonic particles with small mass. For massive particles, one should presumably consider ingoing and outgoing wavepackets.
\[ds^{2} = -dt^{2}+dr^{2}+r^{2}\;d\Omega_{2}^{2} \tag{4.1}\] \[= -dudv+r^{2}\;d\Omega_{2}^{2},\]
where \(t\), \(r\) are the time and radial directions and \(d\Omega_{2}^{2}\) is the standard 2 sphere metric. In the second line, the retarded and advanced coordinates \(u\), \(v\) are defined as
\[u=t-r,\qquad v=t+r. \tag{4.2}\]
A massless particle described by the field \(\Phi\) satisfies
\[-4\partial_{u}\partial_{v}\Phi+\frac{1}{r^{2}}\,\Box_{S^{2}}\,\Phi=0. \tag{4.3}\]
Let us consider a spherically symmetric solution so that the equation reduces to
\[\partial_{u}\partial_{v}\Phi=0, \tag{4.4}\]
and general solutions are given by
\[\Phi(u,v)=\phi(u)+\tilde{\phi}(v) \tag{4.5}\]
Figure 5: Ingoing and outgoing shock waves propagating in region \(\mathcal{A}^{+}\) are shown in the figure.
where \(\phi\) and \(\tilde{\phi}\) are arbitrary functions of \(u\), \(v\).
An interesting physical solution is the spherical shock wave. A shock wave emitted from the boundary and propagating along the null ray as illustrated in Figure 5 is described by \(\Phi_{s}^{in}\)
\[\Phi_{s}^{in}(v)=\phi_{0}\;\delta(v-v_{0}),\qquad v_{0}>0. \tag{4.6}\]
We can view the shock wave solution as a specific linear combination of plane wave solutions. Furthermore, to make the wave truly localise along the null ray, one could consider the gravitational shock wave \(\Phi_{g}^{in}=\phi_{0}\delta(v-v_{0})\delta(z-z_{0})\); this is not a solution of the KG equation in flat spacetime and only exists when the gravitational effect is taken into consideration. Here, to study the flat/CFT dictionary in a simple way, we choose to use the spherical shock wave as an example to perform the calculation, and the result for \(\Phi_{g}^{in}\) is obtained by inserting the factor \(\delta(z-z_{0})\) afterwards 7.
Footnote 7: Here we assume that the localised shock waves propagate on the background where the holography principle still works.
To express the shock wave in terms of modes adapted to the hyperbolic slicing, we need to transform to Milne coordinates using
\[t^{2}-r^{2}=\tau^{2},\qquad\rho\tau=r, \tag{4.7}\]
in which we should note that the Milne coordinates only cover the region \(\mathcal{A}^{+}\), since both coordinates are required to be positive, \(\rho,\tau\geq 0\). The near light cone region is described by \(\tau\to 0\) and the asymptotic region is given by \(\tau\to\infty\). In Milne coordinates, the shock wave can be expressed as
\[\Phi_{s}^{in}=\phi_{0}\;\delta(\rho\tau+\tau\sqrt{1+\rho^{2}}-v_{0})=\phi_{0}\delta(\tau e^{\eta}-v_{0}). \tag{4.8}\]
We can now decompose this solution into modes as described in the previous section, resulting in
\[\Phi^{in}(\rho,z,\bar{z};\beta_{+}) = \frac{\phi_{0}}{2\pi}v_{0}^{\beta_{-}}e^{-(1+\beta_{-})\eta}, \tag{4.9}\] \[\Phi^{in}(\rho,z,\bar{z};\beta_{-}) = \frac{\phi_{0}}{2\pi}v_{0}^{\beta_{+}}e^{-(1+\beta_{+})\eta}.\]
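Each line follows directly from the projections (3.66): the delta function localises the \(\tau\) integral at \(\tau=v_{0}e^{-\eta}\) with Jacobian \(e^{-\eta}\). For example,

\[\Phi^{in}(\rho,z,\bar{z};\beta_{+})=\frac{1}{2\pi}\int_{0}^{\infty}d\tau\;\tau^{\beta_{-}}\,\phi_{0}\,\delta(\tau e^{\eta}-v_{0})=\frac{\phi_{0}}{2\pi}\,e^{-\eta}\left(v_{0}e^{-\eta}\right)^{\beta_{-}}=\frac{\phi_{0}}{2\pi}\,v_{0}^{\beta_{-}}e^{-(1+\beta_{-})\eta}.\]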
The fields are independent of the sphere coordinates. One can immediately read off the coefficients of the asymptotic expansion using the relation \(\rho=\sinh\eta\) as
\[\phi^{+}(z,\bar{z};k) = \frac{\phi_{0}}{2\pi}2^{\beta_{+}-1}v_{0}^{\beta_{-}}\qquad\phi^{-}(z,\bar{z};k)=0; \tag{4.10}\] \[\tilde{\phi}^{+}(z,\bar{z};k) = 0\qquad\tilde{\phi}^{-}(z,\bar{z};k)=\frac{\phi_{0}}{2\pi}2^{-\Delta}v_{0}^{\beta_{+}}.\]
This means that the operators \(\mathcal{O}(z,\bar{z};k)\) have no source or expectation value, but the operators \(\tilde{\mathcal{O}}(z,\bar{z};k)\) have both: the sources are \(\phi^{+}(z,\bar{z};k)\) while
\[\langle\tilde{O}(z,\bar{z};k)\rangle=-i\beta_{+}\frac{\phi_{0}}{\pi}2^{-\Delta}v_{0}^{\beta_{+}}. \tag{4.11}\]
It is straightforward to repeat the same exercise for a shock wave propagating along an orthogonal null ray i.e.
\[\Phi_{s}^{out}(u)=\phi_{0}\;\delta(u-u_{0}),\qquad u_{0}>0. \tag{4.12}\]
This can be expressed in terms of the Milne coordinates as
\[\Phi_{s}^{out}=\phi_{0}\;\delta(\tau\sqrt{1+\rho^{2}}-\tau\rho-u_{0})=\phi_{0}\delta(\tau e^{-\eta}-u_{0}). \tag{4.13}\]
Decomposing into modes one finds
\[\Phi^{out}(\rho,z,\bar{z};\beta_{+}) = \frac{\phi_{0}}{2\pi}u_{0}^{\beta_{-}}e^{(1+\beta_{-})\eta}, \tag{4.14}\] \[\Phi^{out}(\rho,z,\bar{z};\beta_{-}) = \frac{\phi_{0}}{2\pi}u_{0}^{\beta_{+}}e^{(1+\beta_{+})\eta}.\]
One can then read off the coefficients of the asymptotic expansion using the relation \(\rho=\sinh\eta\) as
\[\phi^{+}(z,\bar{z};k) = 0\qquad\phi^{-}(z,\bar{z};k)=\frac{\phi_{0}}{2\pi}2^{\Delta}u_{0 }^{\beta_{+}}; \tag{4.15}\] \[\tilde{\phi}^{+}(z,\bar{z};k) = \frac{\phi_{0}}{2\pi}2^{1-\beta_{+}}u_{0}^{\beta_{-}}\qquad \tilde{\phi}^{-}(z,\bar{z};k)=0.\]
This means that the operators \(\tilde{\mathcal{O}}(z,\bar{z};k)\) have no source or expectation value, but the operators \(\mathcal{O}(z,\bar{z};k)\) have both: the sources are \(\tilde{\phi}^{+}(z,\bar{z};k)\) while
\[\langle O(z,\bar{z};k)\rangle=-i\beta_{+}\frac{\phi_{0}}{\pi}2^{\Delta}u_{0}^ {\beta_{+}}. \tag{4.16}\]
Thus we can understand the two sets of dual operators as describing modes propagating in \((u,v)\) directions respectively:
\[\Phi(u) \to\{\mathcal{O}(z,\bar{z};k),\tilde{\phi}^{+}(z,\bar{z};k)\}; \tag{4.17}\] \[\Phi(v) \to\{\tilde{\mathcal{O}}(z,\bar{z};k),\phi^{+}(z,\bar{z};k)\}.\]
As for the two point functions, the structure becomes more complicated and one needs to take the gravitational effects into consideration. The spherical shock wave is a solution of the KG equation in Minkowski, but its two-point function becomes trivial, since the solution takes a constant value on the sphere and the method we introduced in section 3 does not apply. This does not mean that the dual theory on the boundary is trivial: treating the constant \(\phi_{0}\) as a small parameter, we need to take the gravitational backreaction into consideration in order to investigate the correlation functions at higher order. After backreaction from the matter, the metric becomes
\[G^{\prime}_{\mu\nu}=G_{\mu\nu}+\delta G_{\mu\nu}, \tag{4.18}\]
in which \(G\) is the Minkowski metric and the deformation caused by the matter is denoted as \(\delta G\). They are governed by the Einstein equation
\[R_{\mu\nu}-\frac{1}{2}R\:G^{\prime}_{\mu\nu}=T_{\mu\nu}, \tag{4.19}\]
in which \(R_{\mu\nu}\) is the Ricci curvature of \(G^{\prime}\) and \(T_{\mu\nu}\) is the stress tensor determined by the scalar profile; in our case this is the shock wave \(\Phi_{s}\), so the stress tensor is of order \(\phi_{0}^{2}\). One can treat this as the Newtonian constant, \(\phi_{0}^{2}\sim G_{N}\), which is not explicitly shown in the equation. From the above equation, one can also see that the deformation \(\delta G\) likewise goes as order \(\phi_{0}^{2}\)8. Given the deformed background, the scalar fluctuation \(\delta\Phi\) on the shock wave profile is determined by the KG equation
Footnote 8: In fact we have \(\langle T^{\text{CFT}}_{\mu\nu}\rangle\sim\delta G\), in which \(T^{\text{CFT}}_{\mu\nu}\) is the stress tensor of the dual CFT theory on the celestial sphere. The specific expression relies on the holographic renormalisation of the Einstein-Hilbert action, which has been done in Graham-Fefferman coordinates for the AdS case [53].
\[\Box_{G^{\prime}}\Phi^{\prime}=0, \tag{4.20}\]
where \(\Phi^{\prime}=\Phi_{s}+\delta\Phi\). Here we should note that, although \(\Phi_{s}\) is constant on the celestial sphere, the fluctuation \(\delta\Phi\) is not necessarily constant: it depends on the further specification of the data at the initial time. Therefore both the vacuum expectation value and the source will receive corrections of order \(\phi_{0}^{2}\) coming from \(\delta\Phi\), and the two-point function now becomes
\[\langle{\cal O}(z;k){\cal O}(z^{\prime};k)\rangle_{s}=\langle{\cal O}(z;k){\cal O}(z^{\prime};k)\rangle+\phi_{0}^{2}\;F(z,z^{\prime};k), \tag{4.21}\]
where \(\langle\cdots\rangle_{s}\) indicates that the operators are now inserted on the shock wave background rather than the Minkowski vacuum \(\langle\cdots\rangle\), and the higher order correction is of order \(\phi_{0}^{2}\), i.e. \(G_{N}\). Its specific form is given by the function \(F(z,z^{\prime};k)\), determined by the variation \(\delta\Phi\). From the above discussion, we know that all the spherical solutions in Minkowski are degenerate from the boundary point of view when the gravity effect is neglected, and one needs to consider the variation of the scalar field in order to distinguish them. The breaking of spherical symmetry caused by the gravity effect enables us to calculate two-point functions at leading order and then introduce subleading terms characterised by the function \(F(z,z^{\prime};k)\). For the localised shock wave, one needs to work out the background and then check whether the holographic principle still works on such a background; this depends on the definition of asymptotic flatness as well as the reach of the holographic principle, and such work goes beyond the scope of this article.
### Coefficients
Having studied the dual correlation functions on the boundary, here we use the shock wave model as an example to study the bulk field directly, following the mode analysis introduced in section 2, and try to determine the coefficients of those modes. This is easier for the spherical shock waves, since they are constant on the sphere and only the zero mode contributes when performing the mode expansion. The analysis for the localised shock wave is harder, since the \(l\geq 1\) modes are difficult to treat, and we leave the mode analysis for \(\Phi_{g}\) for further investigation.
_Massless Fields_
From the discussion in section 2.3 and appendix B, we have seen that the zero mode \(l=0\) on the AdS hyperboloid has two independent solutions at large radius \(\rho=\sinh\eta\to\infty\)
\[\phi_{0}(\eta;\beta_{+})=\frac{e^{\beta_{+}\eta}}{\sinh\eta},\qquad\phi_{0}( \eta;\beta_{-})=\frac{e^{\beta_{-}\eta}}{\sinh\eta}. \tag{4.22}\]
The regular solution at \(\rho=0\), denoted as \(\phi_{r}(\eta;k)\), is a linear combination of them with ratio \(C_{0}^{-}(k)/C_{0}^{+}(k)=-1\)9, and thus it can be written as
Footnote 9: One can obtain this by direct observation of the linear combination of \(\phi_{0}(\eta;\beta_{\pm})\) or by checking the formula for \(C_{0}^{\pm}(k)\) for odd \(\beta\) in Appendix B.
\[\phi_{r}(\eta;k)=\frac{1}{\sqrt{\pi}}\;\frac{\sinh\beta_{+}\eta}{\sinh\eta}. \tag{4.23}\]
One can check that \(\phi_{r}\) is regular for arbitrary \(\beta_{+}\), since \(\phi_{r}\sim\beta_{+}\) at \(\eta=0\). Here we are interested in the principal series case \(\beta_{+}=ik\) for \(k\geq 0\), and we assume that the result for other values can be obtained by analytic continuation of \(\beta_{+}\).
For the ingoing waves, one has the expansion
\[\Phi^{in}(v)=\int_{\cal P}dk\;(a_{in}^{+}(k)\;\tau^{-1+\beta_{+}}+a_{in}^{-}(k )\;\tau^{-1+\beta_{-}})\;\phi_{r}(\eta,k), \tag{4.24}\]
in which \(a^{\pm}_{in}(k)\) is the pair of coefficients that we are going to determine. To calculate these coefficients, one should first note the orthogonality relation
\[\int_{-\infty}^{+\infty}d\eta\;\sinh^{2}\eta\;\phi_{r}^{*}(\eta;k)\phi_{r}(\eta;k^{\prime})=\delta(k-k^{\prime}), \tag{4.25}\]
in which \(\phi_{r}^{*}\) is the complex conjugate of \(\phi_{r}\). Given the above relation, one can project out the \(\eta\) dependent part by performing the integral
\[\phi_{0}\int_{-\infty}^{+\infty}d\eta\;\delta(\tau e^{\eta}-v_{0})\sinh^{2} \eta\;\phi_{r}^{*}(\eta;k) \tag{4.26}\]
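This integral can be evaluated explicitly (a worked sketch, assuming \(\beta_{+}^{*}=\beta_{-}\) on the principal series): the delta function fixes \(\eta_{0}=\ln(v_{0}/\tau)\) with Jacobian \(1/v_{0}\), and for \(\tau\to 0\) (so \(\eta_{0}\to\infty\)) one may use \(\sinh^{2}\eta_{0}\simeq e^{2\eta_{0}}/4\) and \(\phi_{r}^{*}(\eta_{0};k)\simeq\frac{1}{\sqrt{\pi}}\left(e^{(\beta_{-}-1)\eta_{0}}-e^{(\beta_{+}-1)\eta_{0}}\right)\), giving

\[\phi_{0}\int_{-\infty}^{+\infty}d\eta\;\delta(\tau e^{\eta}-v_{0})\sinh^{2}\eta\;\phi_{r}^{*}(\eta;k)\simeq\frac{\phi_{0}}{4\sqrt{\pi}}\left(v_{0}^{-\beta_{+}}\,\tau^{-1+\beta_{+}}-v_{0}^{\beta_{+}}\,\tau^{-1+\beta_{-}}\right),\]

which reproduces the two powers of \(\tau\) appearing in the expansion (4.24),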
and therefore the coefficients \(a^{\pm}_{in}(k)\) are deduced to be
\[a^{+}_{in}(k)=\frac{\phi_{0}v_{0}^{-\beta_{+}}}{4\sqrt{\pi}},\qquad a^{-}_{in }(k)=-\frac{\phi_{0}v_{0}^{\beta_{+}}}{4\sqrt{\pi}}, \tag{4.27}\]
in which we have omitted the correction terms of order \(\tau^{2}\). The fact that we get extra terms in addition to the modes \(\tau^{-1+\beta_{\pm}}\) implies that the basis we have chosen is not complete. Here we assume that the mode expansion is done near the Milne horizon, thus \(\tau\to 0\), and the higher order terms are subleading. For the outgoing shock wave, following a similar procedure, one has the expansion
\[\Phi^{out}(u)=\int_{\cal P}dk\;(a^{+}_{out}(k)\;\tau^{-1+\beta_{+}}+a^{-}_{out }(k)\;\tau^{-1+\beta_{-}})\;\phi_{r}(\eta,k), \tag{4.28}\]
in which the corresponding coefficients \(a^{\pm}_{out}(k)\) are determined to be
\[a^{+}_{out}(k)=\frac{\phi_{0}u_{0}^{\beta_{+}}}{4\sqrt{\pi}},\qquad a^{-}_{ out}(k)=-\frac{\phi_{0}u_{0}^{-\beta_{+}}}{4\sqrt{\pi}}. \tag{4.29}\]
_Massive Fields_
Now we turn to the study of massive particles. First we make the particle slightly massive and then investigate the perturbative behaviour of the solution around the spherical shock wave. Similar to the study of the massive KG equation, we choose to write the equation of motion for a massive particle as
\[(\partial_{u}\partial_{v}+\lambda M^{2})\Phi_{M}(X)=0, \tag{4.30}\]
in which \(M\) is a constant and \(\lambda\) is a small parameter encoding that the mass of the particle is small. Then we can write a general solution of the massive equation, up to first order in \(\lambda\), as
\[\Phi_{M}(X)=\phi_{0}\delta(v-v_{0})+\lambda f(u,v), \tag{4.31}\]
in which \(f(u,v)\) is a function of \(u,v\). To determine \(f(u,v)\), one should substitute the solution into the equation and solve it order by order in \(\lambda\), obtaining
\[\Phi_{M}(X)=\phi_{0}\delta(v-v_{0})-u\;\lambda\;\phi_{0}M^{2}\;\theta(v-v_{0}), \tag{4.32}\]
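One can verify directly that this solves (4.30) to first order in \(\lambda\):

\[\partial_{u}\partial_{v}\left(-u\,\lambda\,\phi_{0}M^{2}\,\theta(v-v_{0})\right)=-\lambda\,\phi_{0}M^{2}\,\delta(v-v_{0}),\]

which cancels \(\lambda M^{2}\Phi_{M}=\lambda M^{2}\phi_{0}\,\delta(v-v_{0})+\mathcal{O}(\lambda^{2})\).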
Here \(\theta(v-v_{0})\) is the step function supported in the region \(v>v_{0}\). The step function correction term tells us that, by adding a small amount of mass, the shock wave is no longer localised on a spherical shell propagating along the null direction; instead it has a tail spreading over the whole region \(v>v_{0}\). We define the coefficients of the massive field expanded in the modes on the AdS surfaces as \(a^{\pm}_{M}(k)\), i.e.
\[\Phi_{M}(u,v)=\int_{\cal P}dk\;(a^{+}_{M}(k)\;\tau^{-1+\beta_{+}}+a^{-}_{M}(k) \;\tau^{-1+\beta_{-}})\;\phi_{r}(\eta,k). \tag{4.33}\]
Then one can write \(a_{M}^{\pm}(k)\) order by order in \(\lambda\) as
\[a_{M}^{\pm}(k)=a_{0}^{\pm}(k)+\lambda\;a_{1}^{\pm}(k)+\lambda^{2}a_{2}^{\pm}(k)+\cdots, \tag{4.34}\]
in which \(a_{0}^{\pm}(k)\) are the coefficients for the massless case we have discussed before,
\[a_{0}^{\pm}(k)=a_{in}^{\pm}(k) \tag{4.35}\]
and the \(a_{i}^{\pm}(k)\) are higher order terms. Taking the solution in (4.32) as an example, to calculate \(a_{1}^{\pm}(k)\) one should evaluate the integral
\[\int_{-\infty}^{+\infty}d\eta\;\sinh^{2}\eta\;e^{-\eta}\tau\;\phi_{r}^{*}(\eta; k)\theta(\tau e^{\eta}-v_{0}), \tag{4.36}\]
in which we still use the massless solution as the basis when performing the perturbative expansion. The above integral will vanish when \(\tau\) goes to zero thus one can conclude that
\[a_{1}^{\pm}(k)=0, \tag{4.37}\]
which tells us that the coefficients are stable around the massless case. It shows that, for the modes we are interested in, the mass of the particle does not play a crucial role or make a significant contribution; thus the shock wave model is still a good approximation for particles with small mass.
### Cauchy Problem and Scattering
In section 3, we started from the holographic renormalisation of the onshell action in region \(\mathcal{A}^{+}\) and concluded that the theory in the flat spacetime region \(\mathcal{A}^{+}\) is dual to the CFT on the celestial sphere \(S_{2}^{+}\) located at the future null boundary. To study the whole Minkowski space, in principle, one should consider the action in the region \(\mathcal{A}^{+}\cup\mathcal{D}\cup\mathcal{A}^{-}\), but it was conjectured in the work [29] that all the information of Minkowski can be classified by specifying the data on the two copies of the AdS hyperboloid in \(\mathcal{A}^{+}\) and \(\mathcal{A}^{-}\); it is therefore enough to fully reconstruct the bulk theory using the holographic CFT data on the celestial spheres \(S_{2}^{+}\) and \(S_{2}^{-}\). In particular, the scattering amplitudes in Minkowski can also be constructed by studying the states on these two AdS hyperboloids, even though they are not standard Cauchy surfaces. Here, based on the study of AdS
Figure 6: A single AdS surface together with part of the null boundary forms a Cauchy surface for the whole Minkowski spacetime, shown on the left hand side. For example, to determine the field configuration at the red point, one needs to specify the data on both the AdS hyperboloid and the null boundary. The shock wave transports the data from the null boundary to the other AdS surface in region \(\mathcal{A}^{+}\), so that two copies of AdS surfaces are equivalent to a Cauchy surface, which is illustrated in the right figure.
and dS modes, we will reconsider the distribution of information in Minkowski, and a physical argument for the above conjecture will be illustrated via a thought experiment with the shock wave model.
Following the principle of the mode expansion, to study the local behaviour of the solution \(\Phi_{M}(X)\) in region \(\mathcal{A}\) denoted as \(\Phi_{M}^{A}(X)\), one can expand the solution in terms of modes propagating on the AdS slicing. As we have studied in the section 2, the solution \(\Phi_{M}^{A}\) can be represented by the linear combination of modes with effective mass \(k\) provided that there is a set \(\mathcal{P}\) of \(k\) in which all the modes together form a complete basis of the solution space, written as
\[\Phi_{M}^{A}(X)=\sum_{l}\int_{\mathcal{P}_{A}}dk\;a_{l}(k)\;\psi^{A}(\tau;k)F_{kl}^{A}(\rho,z,\bar{z}), \tag{4.38}\]
in which \(a_{l}(k)\) are coefficients and the label \(l\) is used to represent the other internal variables. \(F_{kl}^{A}(\rho,z,\bar{z})=\phi_{l}(\rho;k)Y_{m}^{l}(z,\bar{z})\) are the spatial modes introduced before, where \((\rho,z,\bar{z})\) are the coordinates of the AdS hyperboloid. For the same reason, we can choose to decompose the solution in region \(\mathcal{D}\), denoted as \(\Phi_{M}^{D}(X)\), into \(dS\) modes \(\psi^{D}(\rho;k)F_{kl}^{D}(\tau,z,\bar{z})\); thus it can be written as
\[\Phi_{M}^{D}(X)=\sum_{l}\int_{\mathcal{P}_{D}}dk\;b_{l}(k)\;\psi^{D}(\rho;k)F_{kl}^{D}(\tau,z,\bar{z}), \tag{4.39}\]
in which we should note that the positions of the variables \(\tau\) and \(\rho\) are switched, since we are using them to label the timelike and spacelike directions.
Before imposing the initial condition of the solution \(\Phi_{M}(X)\), we first consider the analytic continuation of the field \(\Phi_{M}(X)\) from the region \(\mathcal{A}^{-}\) into the region \(\mathcal{D}\) via the null surface \(\mathcal{N}\) shown in Figure 6. Given the field configuration \(\Phi_{M}^{A}(X)\), one can perform the analytic continuation by making \(k\to ik\) across the null surface \(\mathcal{N}\) then obtain \(\Phi_{M}^{D}(X)\). In terms of the coefficients, that is to say
\[\{a_{l}(k)\}=\{b_{l}(k)\} \tag{4.40}\]
in which we use the notation \(\{\}\) to represent the information contained in the modes, and the equal sign means that one can determine all the \(b_{l}(k)\)s given the set of \(a_{l}(k)\), or vice versa.
To study the initial condition, i.e. to determine the coefficients, first we need to choose a proper codimension one surface on which to set up the initial data. For the field \(\Phi_{M}^{A}(X)\), one can choose the AdS slicing \(X^{2}=-\tau_{0}^{2}\), denoted by \(\Sigma_{\tau_{0}}\), as the Cauchy surface for region \(\mathcal{A}^{-}\); the field in region \(\mathcal{A}^{-}\) is then uniquely determined given the initial data \(f_{i}\), \(g_{i}\)
\[\Phi_{M}(\tau_{0},\rho,z,\bar{z})=f_{i}(\rho,z,\bar{z}),\qquad n^{i}\partial_{i}\Phi_{M}=g_{i}(\rho,z,\bar{z}) \tag{4.41}\]
where \(n^{i}\) denotes the future unit normal of \(\Sigma_{\tau_{0}}\). For the field in region \(\mathcal{D}\), the data on the surface \(\Sigma_{\tau_{0}}\) is not enough for us to uniquely fix the field configuration \(\Phi_{M}^{D}(X)\). One also needs to specify the data along the null boundary, so that together with the surface \(\Sigma_{\tau_{0}}\) they form a Cauchy surface of the whole Minkowski spacetime; this means one needs more data to determine \(\Phi_{M}^{D}\) compared to \(\Phi_{M}^{A}(X)\). Since we already know that fields in the regions \(\mathcal{A}\), \(\mathcal{D}\) are fully determined by \(\{a_{l}(k)\}\) and \(\{b_{l}(k)\}\), we conclude that
\[\{a_{l}(k)\}\subset\{b_{l}(k)\}, \tag{4.42}\]
where the symbol \(\subset\) means that one can determine all the coefficients \(a_{l}(k)\) given the set of \(b_{l}(k)\), while the other direction is no longer true. This implies that there are modes not governed by the analytic continuation, and thus one has \(\mathcal{P}_{A}\subset\mathcal{P}_{D}\).
Furthermore, based on the calculation in the previous section, we see that, for the massless particle, one can construct the shock wave as the solution of the Klein-Gordon equation. The shock waves propagate along the null direction and they are localised around the trajectory of the
massless particles. Moreover, these shock waves, which start from null infinity and then pass through the AdS slicing surfaces in region \({\cal A}^{+}\), enable the exchange of information between an observer living on some particular AdS surface in region \({\cal A}^{+}\) and an observer on the null boundary. For example, the observer at the boundary can send the information about the initial position and momentum of the particle to the observer in region \({\cal A}^{+}\) via the shock wave, and the observer in region \({\cal A}^{+}\) can read out this information by determining the coefficients \(a^{\pm}(k)\). Thus two copies of the AdS surface, in regions \({\cal A}^{-}\) and \({\cal A}^{+}\) respectively, form a structure that is equivalent to a Cauchy surface, since we know that one AdS surface in region \({\cal A}^{-}\) together with half of the null boundary \(v>0\) carries a complete set of data for one to determine the field configuration in the whole spacetime.
## Discussion and Conclusions
In this article, we have used the AdS/CFT dictionary to develop a holographic dictionary between flat space and celestial CFT. The key steps in our approach are transforming bulk fields from time to frequency representation, and using the usual AdS/CFT dictionary on spatial hyperbolic slices of the fields in mixed representation of frequency/hyperbolic spatial coordinates. We have shown that a single scalar field propagating in Minkowski is dual to two series of operators on the celestial sphere with scale dimensions on the principal series. One can physically interpret the two sets of operators as ingoing and outgoing modes.
Here we have focussed on the example of a scalar field but one would expect an analogous structure for the bulk metric. In particular, one would begin the construction of the holographic dictionary by expressing the 4d metric in a \((3+1)\) form, respecting covariance of the spatial slices and again transforming time dependence into frequency dependence. Working to linear order in the metric perturbations around a fixed background, one would thus obtain fields of the form \(\{h_{ab}(k),h_{a}(k),h(k)\}\) i.e. spin two, spin one and spin zero from the perspective of the three-dimensional spatial slices. The corresponding dual operators would then be expected to have the structure \(\{X^{\pm}_{ij}(k),X^{\pm}_{i}(k),X^{\pm}(k)\}\), towers of spin two, spin one and spin zero operators. It would be interesting to work out the detailed dictionary in future work. The renormalisation procedure for the gravity action is expected to be subtle but it will enable determination of central charges as well as facilitate the study of non-trivial gravitational backgrounds such as gravitational shock waves and black holes10.
Footnote 10: We would like to thank Kevin Nguyen for pointing out the work on the study of renormalised effective gravitational action on the celestial sphere obtained by reducing the dimension following the dS hyperboloid in the Rindler wedge [83, 84]. We also found out the discussion of effective gravity action and the central charge in the context of wedge holography [85].
Asymptotically (locally) flat spacetimes have as asymptotic symmetries the (extended) BMS groups at the null boundaries. Therefore the total symmetry for given observable quantities should be \(\mathrm{BMS}^{+}\times\mathrm{BMS}^{-}\), since we have null boundaries in the far past and far future, and we proposed that this symmetry is manifested by the two series of operators. Moreover, in the work [41], Strominger proposed that the symmetry which a quantum gravity scattering matrix should preserve is the subgroup \(\mathrm{BMS}^{0}\subset\mathrm{BMS}^{+}\times\mathrm{BMS}^{-}\), obtained by matching the two null boundaries at spatial infinity \(i^{0}\). This fits with our observation that the two series of operators are dual to the ingoing and outgoing shock waves in the bulk, and that they are related by physical processes that occur in the center. From the boundary point of view, we can see that these two series of operators are coupled to each other.
There has recently been considerable discussion of the role of Carrollian symmetry in flat space holography [86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96]. It would be interesting to explore how the structure of the holographic dictionary for the metric can be interpreted in terms of Carrollian structure. In the context of Carrollian CFTs, one can introduce the notion of Carrollian time \(t_{c}\) as the dual of the effective mass, \(t_{c}\sim k\), and thus the series of correlation functions on the celestial sphere can be viewed as dual to a 3d correlation function, i.e.
\[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{\prime};k)\rangle\longleftrightarrow\langle\mathcal{O}(z,\bar{z},t_{c})\mathcal{O}(z^{\prime},\bar{z}^{\prime},t_{c})\rangle, \tag{5.1}\]
and these are related by the integral transform
\[\mathcal{O}(t_{c},z,\bar{z})=\int_{\mathcal{P}}dk\;G(t_{c},k)\;\mathcal{O}(z,\bar{z};k), \tag{5.2}\]
in which the Green function \(G(t_{c},k)\) would be determined by the definition of Carrollian time \(t_{c}\) together with the dynamical structure of the system. As we can see, it is easier to study the
distribution of the scale dimensions and construct the dictionary using the operators in \(k\) space while it may be more convenient to study the symmetries and the evolution of the system in the proposed \(3d\) spacetime. We will not go into detail of the integral transform and leave the explicit form of \(G(t_{c},k)\) for further investigation.
The key feature of the proposed flat/celestial CFT dictionary is that it reduces two dimensions from the bulk to the boundary celestial sphere. The duality relates the bulk theory to a Euclidean CFT on the sphere, with the time dependence captured by the map of a single bulk 4d field to an infinite tower of CFT operators. Many subtle questions remain about the recovery of unitarity from the dual perspective. The scale dimensions of the CFT operators are complex, therefore the Euclidean CFT is not unitary, yet many of the standard results used extensively in two dimensional CFTs, such as Cardy's formula, rely on unitarity. Recovery of unitarity from the dual perspective would rely on understanding how the boundary data in \(k\) space can be reinterpreted in the \(t_{c}\) domain. In particular, this would be necessary to explore how black hole information is recovered at the quantum level.
In our construction of the flat/CFT dictionary, one can see that the boundary correlation functions are determined by the coefficients \(a_{lm}^{+}(k)\), which carry the information about the bulk solution. These coefficients are determined by specifying the data on the Cauchy surface at the initial time, and they govern the dynamical evolution of the system. To construct a properly defined quantum field theory, one should understand how constraints such as causality, Lorentz invariance and the cluster decomposition principle are related to this data. We will leave deeper exploration of such relations to further work.
We noted that one may use the data on two copies of the Euclidean AdS hyperboloid together with the equation of motion to reconstruct the linearised field in the whole Minkowski spacetime. However, one should note that these two AdS surfaces are not Cauchy surfaces according to the standard definitions [97; 98]. A deeper understanding of the underlying structure will be helpful to study scattering amplitudes and the causal properties of spacetime.
###### Acknowledgements.
MT is supported in part by the Science and Technology Facilities Council (Consolidated Grant "Exploring the Limits of the Standard Model and Beyond"). ZH would like to thank his father Qinghe Hao and mother Xiulan Xu for providing funding for the tuition and accommodation fees when studying his PhD at the University of Southampton. ZH would also like to thank Federico Capone, Enrico Parisini and Kostas Skenderis for various discussions on celestial holography throughout the development of this work.
## Appendix A Coordinates
In this section, we introduce various coordinate systems for Minkowski space that are convenient for reducing the data to the AdS hyperboloid, and which are used many times in this article. The flat spacetime is described by the metric \(\eta_{\mu\nu}\) for \(\mu,\nu=0,1,2,3\), with diagonal elements \(\eta_{00}=-1\), \(\eta_{11}=\eta_{22}=\eta_{33}=1\), written as
\[ds^{2}=\eta_{\mu\nu}dX^{\mu}dX^{\nu}=-(dX^{0})^{2}+(dX^{1})^{2}+(dX^{2})^{2}+( dX^{3})^{2},\] (A.1)
in which \((X^{0},X^{1},X^{2},X^{3})\) are the chosen coordinates. Here we focus on four-dimensional spacetime, and the codimension-one AdS\({}_{3}\) hypersurface characterised by the radius \(\tau\) can be treated as the embedding
\[-(X^{0})^{2}+(X^{1})^{2}+(X^{2})^{2}+(X^{3})^{2}=-\tau^{2},\] (A.2)
where we should note that here flat Minkowski space is the physical space and the AdS\({}_{3}\) surfaces are introduced for the decomposition of data. The timelike wedge in Minkowski which can be foliated by the AdS surfaces is the so-called Milne wedge.
Moreover, given such foliation, one can introduce global coordinates \((\tau,\eta,\theta,\phi)\) to cover the Milne wedge. The transformation is given by
\[X^{0}=\tau\,\cosh\eta,\] (A.3) \[X^{1}=\tau\,\sin\theta\,\sin\phi\,\sinh\eta,\] (A.4) \[X^{2}=\tau\,\sin\theta\,\cos\phi\,\sinh\eta,\] (A.5) \[X^{3}=\tau\,\cos\theta\sinh\eta,\] (A.6)
in which one can see the relation (A.2) is automatically satisfied. \((\theta,\phi)\) are coordinates on the sphere and \(\tau\) is the radius of the AdS surface. The spatial distance from the origin on the hyperboloid is described by \(\sinh\eta\). In the global coordinate, the metric now becomes
\[ds^{2}=-d\tau^{2}+\tau^{2}\left(d\eta^{2}+\sinh^{2}\eta\;d\Omega_{2}^{2}\right),\] (A.7)
in which the metric on the standard sphere \(S^{2}\) is given by
\[d\Omega_{2}^{2} = d\theta^{2}+\sin^{2}\theta d\phi^{2}\] (A.8) \[= \frac{4}{(1+z\bar{z})^{2}}\;dzd\bar{z}=2\gamma_{z\bar{z}}\;dzd \bar{z}.\] (A.9)
The complex coordinates \((z,\bar{z})\) on the plane are obtained by the stereographic projection from the sphere
\[z=e^{i\phi}\tan\frac{\theta}{2}\qquad\bar{z}=e^{-i\phi}\tan\frac{\theta}{2}.\] (A.10)
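As a quick check of (A.8)-(A.9): from (A.10) one has \(dz=e^{i\phi}\left(\tfrac{1}{2}\sec^{2}\tfrac{\theta}{2}\,d\theta+i\tan\tfrac{\theta}{2}\,d\phi\right)\) and \(1+z\bar{z}=\sec^{2}\tfrac{\theta}{2}\), so that

\[\frac{4\,dzd\bar{z}}{(1+z\bar{z})^{2}}=\cos^{4}\tfrac{\theta}{2}\left(\sec^{4}\tfrac{\theta}{2}\,d\theta^{2}+4\tan^{2}\tfrac{\theta}{2}\,d\phi^{2}\right)=d\theta^{2}+\sin^{2}\theta\,d\phi^{2},\]

using \(4\sin^{2}\tfrac{\theta}{2}\cos^{2}\tfrac{\theta}{2}=\sin^{2}\theta\).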
As we have mentioned, the value \(\sinh\eta\) makes more sense as a physical quantity; thus one can define \(\rho=\sinh\eta\), and the metric becomes
\[ds^{2}=-d\tau^{2}+\tau^{2}\left(\frac{d\rho^{2}}{1+\rho^{2}}+2\rho^{2}\gamma_ {z\bar{z}}dzd\bar{z}\right),\] (A.11)
which is the standard form of Milne coordinates in the literature.
To study a single AdS surface, sometimes it is more convenient to introduce Poincare coordinates \((t,x,y)\) defined as
\[t=\frac{1}{X^{0}+X^{3}},\qquad x=\frac{X^{1}}{X^{0}+X^{3}},\qquad y=\frac{X^{2 }}{X^{0}+X^{3}},\] (A.12)
and after setting \(\tau=1\), one can pull back the metric to the AdS surface then obtain
\[ds^{2}_{\text{AdS}_{3}}=\frac{dt^{2}+dx^{2}+dy^{2}}{t^{2}}=\frac{dt^{2}+d\omega d\bar{\omega}}{t^{2}}, \tag{A.13}\]
in which \(\omega=x+iy\). In terms of global coordinates, the Poincare coordinates can be written as
\[t = \frac{1}{\cosh\eta+\cos\theta\sinh\eta}, \tag{120}\] \[x = \frac{\sin\phi\sin\theta\sinh\eta}{\cosh\eta+\cos\theta\sinh\eta},\] (121) \[y = \frac{\cos\phi\sin\theta\sinh\eta}{\cosh\eta+\cos\theta\sinh\eta}, \tag{122}\]
and \((\omega,\bar{\omega})\) takes the form of
\[\omega=\frac{e^{i\phi}\sin\theta\sinh\eta}{\cosh\eta+\cos\theta\sinh\eta},\qquad\bar{\omega}=\frac{e^{-i\phi}\sin\theta\sinh\eta}{\cosh\eta+\cos\theta\sinh\eta}. \tag{A.17}\]
One should note that, in the large \(\eta\) limit, \((\omega,\bar{\omega})\) tends to \((z,\bar{z})\); thus they become the complex coordinates of the celestial sphere on the boundary.
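Explicitly, using \(\cosh\eta\simeq\sinh\eta\simeq e^{\eta}/2\) at large \(\eta\) together with the half-angle identity,

\[\lim_{\eta\to\infty}\omega=\frac{e^{i\phi}\sin\theta}{1+\cos\theta}=e^{i\phi}\tan\frac{\theta}{2}=z.\]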
In terms of Poincare coordinates, the boundary-bulk propagator \(K_{\Delta}(t,x,y;x^{\prime},y^{\prime})\) for massless fields is given by
\[K(t,x,y;x^{\prime},y^{\prime})=\frac{\Gamma(\Delta)}{\pi\Gamma(\Delta-1)}\left(\frac{t}{t^{2}+(x-x^{\prime})^{2}+(y-y^{\prime})^{2}}\right)^{\Delta}=\frac{N_{\Delta}}{\pi}\left(\frac{t}{t^{2}+(\omega-z^{\prime})(\bar{\omega}-\bar{z}^{\prime})}\right)^{\Delta}, \tag{A.18}\]
in which \((x^{\prime},y^{\prime})\) are the points on the boundary and \(z^{\prime}=x^{\prime}+iy^{\prime}\). We set \(N_{\Delta}=\Gamma(\Delta)/\Gamma(\Delta-1)\) for simplicity. Following the dictionary for the particles of mass \(M\), we have \(\Delta=1+\sqrt{1+M^{2}}\). The propagator could also be written in terms of global coordinates and at large \(\rho\), one can check it takes the form
\[K^{\rho=\infty}(\rho,z;z^{\prime})=\frac{(1+z\bar{z})^{\Delta}}{\pi\rho^{\Delta}}\;\frac{N_{\Delta}}{|z-z^{\prime}|^{2\Delta}}=\frac{2^{\Delta}}{\pi\Omega_{2}(z)^{\frac{\Delta}{2}}\;\rho^{\Delta}}\;\frac{N_{\Delta}}{|z-z^{\prime}|^{2\Delta}}, \tag{A.19}\]
in which \(\Omega_{2}(z)dzd\bar{z}=d\Omega_{2}\) is the volume form of the standard sphere in terms of complex coordinates. From the distribution point of view, the boundary-bulk propagators are in fact equivalent to the delta function between boundary points [13], i.e. we have
\[\delta(z-z^{\prime})=t^{\Delta-2}K(\rho,z;z^{\prime})=\frac{2^{\Delta-1}}{\pi\rho^{2\Delta-2}\Omega_{2}(z)^{\Delta-1}}\frac{N_{\Delta}}{|z-z^{\prime}|^{2\Delta}}+\cdots, \tag{A.20}\]
in which we have expanded \(K(\rho,z;z^{\prime})\) at large radius \(\rho\). In this article, we are interested in the correlation functions on the plane, which are related to the sphere correlation functions by a conformal transformation, and the bulk-boundary propagator is then given by
\[K(\rho,z;z^{\prime})=\frac{1}{\pi\rho^{\Delta}}\frac{N_{\Delta}}{|z-z^{\prime}|^{2\Delta}}. \tag{A.21}\]
Therefore, a generic field in the bulk with boundary behaviour \(\varphi(\rho,z)\sim\rho^{\Delta-2}\varphi(z)\) can be expressed as
\[\varphi(\rho,z)=\int_{M_{2}}dz^{\prime}d\bar{z}^{\prime}\;K(\rho,z;z^{\prime})\varphi(z^{\prime}), \tag{A.22}\]
in which the integral is over the two-dimensional plane and we have used the relation \(\delta(z-z^{\prime})\sim\frac{1}{\rho^{2\Delta-2}}\frac{1}{|z-z^{\prime}|^{2\Delta}}\). Here we should note that, by considering the property of the Green function
\[\int_{M_{2}}d^{2}z^{\prime}\delta(z-z^{\prime})\delta(z^{\prime}-z^{\prime\prime})=\delta(z-z^{\prime\prime}) \tag{A.23}\]
we have the contracting relation for the propagator
\[\int_{M_{2}}dz^{\prime}d\bar{z}^{\prime}\frac{N_{\Delta}}{\rho^{2\Delta-2}}\frac{1}{|z-z^{\prime}|^{2\Delta}}\frac{1}{|z^{\prime}-z^{\prime\prime}|^{2\Delta}}=\frac{\pi}{|z-z^{\prime\prime}|^{2\Delta}},\] (A.24)
which turns out to be useful in simplifying the calculation.
## Appendix B Solutions
In this section, we present the solutions of the equation on the AdS hyperboloid (2.8), written as
\[\left(-\partial_{\eta}^{2}-2(\coth\eta)\partial_{\eta}+l(l+1)\text{csch}^{2}\eta+ k^{2}\right)\phi_{l}(\eta;k)=0,\] (B.1)
for \(\rho=\sinh\eta\). Solutions can be found at the boundary and at the origin respectively, written as expansions in a proper basis. We should note that the bases at the origin and at the boundary are not independent: they are related via a transformation, which we will see at the end of this section. Mode solutions for Lorentzian AdS have been studied in [99], while solutions for dS modes have been studied in [60; 72].
#### Behaviour at the boundary
For the solution at the boundary, we first choose to write it in terms of hypergeometric functions and then transform it into associated Legendre functions. In order to transform the equation into the standard form for hypergeometric functions, we write the solution in the form
\[\phi_{lk}(\eta)=\frac{f_{\beta l}(\frac{1}{\sinh^{2}\eta})}{\sinh^{\beta+1} \eta},\] (B.2)
in which \(\beta^{2}=1+k^{2}\) and \(f\) depends on \(\eta\) for \(\eta\geq 0\). Now the equation (B.1) becomes
\[4x(x+1)f^{\prime\prime}_{\beta l}(x)+2(2(1+\beta)+(3+2\beta)x)f^{\prime}_{ \beta l}(x)-(l(l+1)-\beta(\beta+1))f_{\beta l}(x)=0,\] (B.3)
in which \(x\) is defined as
\[x=\frac{1}{\sinh^{2}\eta}.\] (B.4)
Here we should note that the above equation is still not in the form of a hypergeometric equation, because of the \(x(x+1)\) term in front of \(f^{\prime\prime}_{\beta l}(x)\). Thus we further perform the transformation \(x\to x-1\) and obtain the equation
\[x(1-x)p^{\prime\prime}_{\beta l}(x)-\frac{1}{2}((3+2\beta)x-1)p^{\prime}_{ \beta l}(x)+\frac{1}{4}(l(l+1)-\beta(1+\beta))p_{\beta l}(x)=0,\] (B.5)
in which \(p_{\beta l}(x)\) is defined as
\[p_{\beta l}(x)=f_{\beta l}(x-1).\] (B.6)
Given the equation (B.5), one can write down the solution at \(x=1\) as
\[p_{\beta l}(x)={}_{2}F_{1}(\frac{1}{2}+\frac{l}{2}+\frac{\beta}{2},-\frac{l}{ 2}+\frac{\beta}{2}\;;\,1+\beta\;;\,1-x)\] (B.7)
therefore the \(f_{\beta l}(\eta)\) is then deduced to be
\[f_{\beta l}\left(\frac{1}{\sinh^{2}\eta}\right)={}_{2}F_{1}\left(\frac{1}{2}+ \frac{l}{2}+\frac{\beta}{2},-\frac{l}{2}+\frac{\beta}{2}\;;\,1+\beta\;;\, -\frac{1}{\sinh^{2}(\eta)}\right).\] (B.8)
Furthermore, after applying the transformation for hypergeometric functions
\[{}_{2}F_{1}(\frac{a+c-1}{2},\frac{c-a}{2};\,\,c\;;4z(1-z))=(1-z)^{1-c}{}_{2}F _{1}(1-a,a\;;\,\,c\;;z)\] (B.9)
for
\[c=1+\beta,\qquad a=l+1,\qquad z=\frac{1}{2}(1-\coth\eta)\] (B.10)
to the solution (B.8), we get
\[\phi_{l\beta}(\eta)=2^{\beta}\frac{e^{-\beta\eta}}{\sinh(\eta)}\;{}_{2}F_{1}(-l,l+1 \;;\;1+\beta\;;\;\frac{1}{2}(1-\coth\eta)).\] (B.11)
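The rewriting into associated Legendre functions below uses the standard hypergeometric representation, valid for argument \(x=\coth\eta>1\) (see e.g. DLMF 14.3.6),

\[P_{l}^{\,\mu}(x)=\frac{1}{\Gamma(1-\mu)}\left(\frac{x+1}{x-1}\right)^{\mu/2}{}_{2}F_{1}\left(-l,\,l+1\,;\,1-\mu\,;\,\frac{1-x}{2}\right),\qquad\left(\frac{\coth\eta+1}{\coth\eta-1}\right)^{1/2}=e^{\eta},\]

so that setting \(\mu=\mp\beta\) converts (B.11) into the two solutions quoted below, up to the branch factor \((-1)^{\pm\beta}\) absorbed in the stated normalisation.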
Noting that \(\beta\) can take both of the values \(\beta_{\pm}=\pm\sqrt{1+k^{2}}\), one finally concludes that the two independent solutions are
\[\phi_{l}(\eta;\beta_{+})=\frac{\Gamma(1-\beta_{+})}{(-2)^{\beta_{+}}}\frac{P_{ l}^{\sqrt{1+k^{2}}}(\coth\eta)}{\sinh\eta},\qquad\phi_{l}(\eta;\beta_{-})=\frac{ \Gamma(1-\beta_{-})}{(-2)^{\beta_{-}}}\frac{P_{l}^{-\sqrt{1+k^{2}}}(\coth\eta )}{\sinh\eta},\] (B.12)
in which we have taken the factor \(\Gamma(1\pm\beta)\) into consideration.
#### Behaviour at the origin
To study the behaviour of the solution at the origin, denoted as \(\chi_{l}(\eta;k)\), we choose to write the function into the form of
\[\chi_{l}(\eta;k)=\sinh^{a}\eta\;f_{\beta l}(\sinh^{2}\eta)\] (B.13)
in which \(a\) should satisfy the relation
\[a(a+1)=l(l+1)\] (B.14)
so that the equation can be recast into the hypergeometric form
\[x(1-x)q_{\beta l}^{\prime\prime}(x)-\frac{1}{2}(-1+2(2+a)x)q_{\beta l}^{ \prime}(x)+\frac{1}{4}(\beta^{2}-a^{2}-2a)q_{\beta l}(x)=0,\] (B.15)
in which again \(\beta^{2}\) takes the value \(1+k^{2}\) and the function \(q_{\beta l}(x)\) is defined as
\[q_{\beta l}(x)=f_{\beta l}(x-1).\] (B.16)
Given the hypergeometric equation, solutions are then deduced to be
\[f_{\beta l}(\sinh^{2}\eta)={}_{2}F_{1}(\frac{1}{2}+\frac{a}{2}+\frac{\beta}{2},\frac{1}{2}+\frac{a}{2}-\frac{\beta}{2}\;;\;\frac{1}{2}\;;\;\cosh^{2}\eta),\] (B.17)
in which we have set \(\beta=\sqrt{1+k^{2}}\). More precisely, for \(a=l\) we have
\[\chi_{l}^{1}(\eta;k)=\sinh^{l}(\eta)\;{}_{2}F_{1}(\frac{1}{2}+\frac{l}{2}+ \frac{\beta}{2},\frac{1}{2}+\frac{l}{2}-\frac{\beta}{2}\;;\;\frac{1}{2}\;;\; \cosh^{2}\eta)\] (B.18)
while for \(a=-1-l\) the solution becomes
\[\chi_{l}^{2}(\eta;k)=\sinh^{-l-1}(\eta)\;{}_{2}F_{1}(-\frac{l}{2}-\frac{\beta} {2},-\frac{l}{2}+\frac{\beta}{2}\;;\;\frac{1}{2}\;;\;\cosh^{2}\eta).\] (B.19)
Here, we are only interested in the solution \(\chi_{l}^{1}(\eta;k)\), since it is the regular solution around the origin for \(l\geq 0\), and one can verify, by using the transformation rule for hypergeometric functions
\[{}_{2}F_{1}(a,b;c;z) = \frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)}(1-z)^{-a}\,{}_{2}F_{1}\left(a,c-b;a-b+1;\frac{1}{1-z}\right)\] (B.20) \[+\ \frac{\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)}(1-z)^{-b}\,{}_{2}F_{1}\left(b,c-a;b-a+1;\frac{1}{1-z}\right),\] (B.21)
it can be written in terms of the solution at the boundary as
\[\chi_{l}^{1}(\eta;k)=C_{l}^{+}(k)\phi_{l}(\eta;\beta_{+})+C_{l}^{-}(k)\phi_{l}( \eta;\beta_{-}),\] (B.22)
in which \(C_{l}^{\pm}(k)\) are the coefficients, given by
\[C_{l}^{+}(k)=(-i)^{1+l+\beta}\frac{\Gamma(\frac{1}{2})\Gamma(-\beta)}{\Gamma( \frac{1}{2}+\frac{l}{2}-\frac{\beta}{2})\Gamma(-\frac{l}{2}-\frac{\beta}{2})}\] (B.23)
and
\[C_{l}^{-}(k)=(-i)^{1+l-\beta}\frac{\Gamma(\frac{1}{2})\Gamma(\beta)}{\Gamma( \frac{1}{2}+\frac{l}{2}+\frac{\beta}{2})\Gamma(-\frac{l}{2}+\frac{\beta}{2})}.\] (B.24)
#### Ratio
In order to obtain the CFT two-point function on the celestial sphere, one should calculate the functional derivative of the one-point function with respect to the source. Moreover, with the help of the AdS/CFT dictionary, the functional derivative is given by the ratio of the coefficients, written as
\[\frac{C_{l}^{-}(k)}{C_{l}^{+}(k)}=(-1)^{\beta}\frac{\Gamma(\beta)\Gamma(\frac{1}{2}+\frac{l}{2}-\frac{\beta}{2})\Gamma(-\frac{l}{2}-\frac{\beta}{2})}{\Gamma(-\beta)\Gamma(\frac{1}{2}+\frac{l}{2}+\frac{\beta}{2})\Gamma(-\frac{l}{2}+\frac{\beta}{2})}. \tag{B.25}\]
To simplify the above expression, we first use the recurrence formula for the Gamma function, given by
\[\Gamma(-\frac{l}{2}-\frac{\beta}{2})(-\frac{l}{2}-\frac{\beta}{2})(-\frac{l}{2}+1-\frac{\beta}{2})\cdots(\frac{l}{2}-1-\frac{\beta}{2})=\Gamma(\frac{l}{2}-\frac{\beta}{2}) \tag{B.26}\]
and
\[\Gamma(-\frac{l}{2}+\frac{\beta}{2})(-\frac{l}{2}+\frac{\beta}{2})(-\frac{l}{ 2}+1+\frac{\beta}{2})\cdots(\frac{l}{2}-1+\frac{\beta}{2})=\Gamma(\frac{l}{2} +\frac{\beta}{2}) \tag{112}\]
to transform \(\Gamma(-\frac{l}{2}\pm\frac{\beta}{2})\) into \(\Gamma(\frac{l}{2}\pm\frac{\beta}{2})\). Therefore, one can write the ratio (110) in the form
\[(-1)^{\beta}\ \frac{\Gamma(\beta)\Gamma(\frac{1}{2}+\frac{l}{2}-\frac{\beta }{2})\Gamma(\frac{l}{2}-\frac{\beta}{2})}{\Gamma(-\beta)\Gamma(\frac{1}{2}+ \frac{l}{2}+\frac{\beta}{2})\Gamma(\frac{l}{2}+\frac{\beta}{2})}\times\frac{( -\frac{l}{2}+\frac{\beta}{2})(-\frac{l}{2}+1+\frac{\beta}{2})\cdots(\frac{l}{ 2}-1+\frac{\beta}{2})}{(-\frac{l}{2}-\frac{\beta}{2})(-\frac{l}{2}+1-\frac{ \beta}{2})\cdots(\frac{l}{2}-1-\frac{\beta}{2})}. \tag{113}\]
After applying the Legendre duplication formula
\[\Gamma(z)\Gamma(z+\frac{1}{2})=2^{1-2z}\sqrt{\pi}\Gamma(2z) \tag{114}\]
for \(z=\frac{l}{2}\pm\frac{\beta}{2}\), we have
\[\Gamma(\frac{1}{2}+\frac{l}{2}\pm\frac{\beta}{2})\Gamma(\frac{l}{2}\pm\frac{ \beta}{2})=\sqrt{\pi}2^{1-(l\pm\beta)}\Gamma(l\pm\beta). \tag{115}\]
For the part on the right of (113), one should notice that
\[\frac{(-\frac{l}{2}+\frac{\beta}{2})(-\frac{l}{2}+1+\frac{\beta}{2})\cdots(\frac{l}{2}-1+\frac{\beta}{2})}{(-\frac{l}{2}-\frac{\beta}{2})(-\frac{l}{2}+1-\frac{\beta}{2})\cdots(\frac{l}{2}-1-\frac{\beta}{2})}=(-1)^{l}\frac{(-\frac{l}{2}+1-\frac{\beta}{2})(-\frac{l}{2}+2-\frac{\beta}{2})\cdots(\frac{l}{2}-\frac{\beta}{2})}{(-\frac{l}{2}-\frac{\beta}{2})(-\frac{l}{2}+1-\frac{\beta}{2})\cdots(\frac{l}{2}-1-\frac{\beta}{2})}=(-1)^{l}\frac{\beta-l}{\beta+l}. \tag{116}\]
After substituting (115) and (116) into (113), one has
\[\frac{C_{l}^{-}(k)}{C_{l}^{+}(k)} = (-1)^{l+\beta}\ 2^{2\beta}\ \frac{\Gamma(\beta)\Gamma(l-\beta)(\beta-l)}{ \Gamma(-\beta)\Gamma(l+\beta)(\beta+l)} \tag{117}\] \[= (-1)^{l+\beta+1}\ 4^{\beta}\ \frac{\Gamma(\beta)\Gamma(l-\beta+1)}{ \Gamma(-\beta)\Gamma(l+\beta+1)}\] (118) \[= (-1)^{l+\beta+1}\ 4^{\beta}\ \frac{B(\beta,l-\beta+1)}{B(-\beta,l+\beta+1)}, \tag{119}\]
where we have written the result in terms of the Beta function \(B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}\) in the third line.
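As a quick cross-check of the simplification chain from (110) to (119), the following numerical sketch (ours, not from the paper) compares the Gamma-function form with the Beta-function form; the common \((-1)^{\beta}\) phase is dropped from both sides, while the real \((-1)^{l}\) factor from (116) is kept.

```python
# Spot-check that the raw Gamma ratio and the simplified Beta form agree.
from mpmath import mp, mpf, sqrt, gamma, beta

mp.dps = 30

def raw(l, k):
    # Gamma-function form of C_l^-/C_l^+ with the common (-1)^beta stripped
    b = sqrt(1 + k**2)
    num = gamma(b) * gamma(mpf(1)/2 + mpf(l)/2 - b/2) * gamma(-mpf(l)/2 - b/2)
    den = gamma(-b) * gamma(mpf(1)/2 + mpf(l)/2 + b/2) * gamma(-mpf(l)/2 + b/2)
    return num / den

def simplified(l, k):
    # Beta-function form: (-1)^{l+1} 4^beta B(beta, l-beta+1) / B(-beta, l+beta+1)
    b = sqrt(1 + k**2)
    return (-1)**(l + 1) * 4**b * beta(b, l - b + 1) / beta(-b, l + b + 1)

for l in range(4):
    assert abs(raw(l, mpf('1.3')) - simplified(l, mpf('1.3'))) < mpf('1e-18')
```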
Harmonic Modes
In this section we will present an alternative way to calculate the two-point correlation functions, different from the bulk-boundary approach used in section 3. The strategy is to study the function on the sphere in terms of the discrete harmonic variables rather than the continuous complex coordinates, so that the functional variation in complex coordinates becomes a ratio between functions in the discrete variables. First we will review the calculation in the context of AdS/CFT, then generalise this to flat/CFT, and at the end argue the equivalence between the propagator approach and the ratio approach. Such a method allows us to calculate the two-point function for asymptotically flat space when finding the solution of the propagator becomes hard.
Like the Fourier transform between spacetime and momentum space, the transformation between the discrete mode variables \((l,m)\) and the complex coordinates \((z,\bar{z})\) on the sphere
\[(l,m)\longleftrightarrow(z,\bar{z}) \tag{108}\]
is related by the spherical harmonics \(Y_{m}^{l}(z,\bar{z})\). More precisely, as shown in (13), the transformation is realised via the expansion
\[F_{k}(\rho,z,\bar{z}):=\sum_{lm}F_{k,l,m}(\rho,z,\bar{z})=\sum_{l,m}\phi_{l}( \rho;k)Y_{m}^{l}(z,\bar{z}) \tag{109}\]
in which \(F_{k}(\rho,z,\bar{z})\) is the spatial \(k\) mode that depends on \((\rho,z,\bar{z})\) and \(\phi_{l}(\rho;k)\) is the associated expression in the mode variables \((\rho,l,m)\). The \(m\) dependence is suppressed since the equation of motion on the AdS hyperboloid does not depend on \(m\)11. Given the solutions \(\phi_{l}(\rho;\beta_{\pm})\) and their asymptotic expansion at infinity
Footnote 11: In fact, it is more appropriate to use the notation \(\phi_{lm}(\rho;k)\) here even though the solution does not depend on \(m\) explicitly.
\[\phi_{l}(\rho;\beta_{\pm})=\rho^{\beta_{\pm}-1}(\phi_{l}^{\pm}(k)+\mathcal{O}(\frac{1}{\rho^{2}})), \tag{110}\]
one can immediately obtain the dictionary for AdS/CFT in the form of mode variables \((m,l)\), written as
\[\hat{\mathcal{J}}_{lm}(k)=\phi_{l}^{+}(k)\qquad\langle\hat{\mathcal{O}}_{lm}(k)\rangle=-2i\beta_{+}\;\phi_{l}^{-}(k)\qquad\text{for}\qquad-l\leq m\leq l, \tag{111}\]
in which \(\hat{\mathcal{J}}_{lm}(k)\) and \(\langle\hat{\mathcal{O}}_{lm}(k)\rangle\) are the corresponding source and one-point function that live on the boundary celestial sphere. Here they are not required to be physical operators and sources, thus we can treat them as virtual particles by construction. In terms of \((z,\bar{z})\) coordinates, they should have the form of
\[\hat{\mathcal{J}}(z,\bar{z};k)=\sum_{l,m}\phi_{l}^{+}(k)Y_{m}^{l}(z,\bar{z}),\qquad\langle\hat{\mathcal{O}}(z,\bar{z};k)\rangle=-\sum_{lm}2i\beta_{+}\;\phi_{l}^{-}(k)Y_{m}^{l}(z,\bar{z}). \tag{112}\]
Here we should note that \(\phi_{l}(\rho;\beta_{\pm})\) are two independent solutions at the boundary and they are singular at the origin. The regular solution can be obtained by directly solving the equation at the origin, the so-called \(\chi_{l}^{1}(\eta;k)\). They are solutions of the same equation at different singular points, so \(\phi_{l}(\rho;\beta_{\pm})\) and \(\chi_{l}^{1}(\eta;k)\) are not independent. The transformation between them is given by
\[\chi_{l}^{1}(\eta;k)=C_{l}^{+}(k)\phi_{l}(\eta;\beta_{+})+C_{l}^{-}(k)\phi_{l}(\eta;\beta_{-}), \tag{113}\]
in which we have chosen the solutions at the boundary as the basis and \(C_{l}^{\pm}(k)\) are the coefficients determined in (107) and (108). Given the above relation, one can get the functional derivative
between the source and one-point function thus higher point functions can be determined. For the two-point function, we have
\[\langle\hat{\mathcal{O}}_{lm}(k)\;\hat{\mathcal{O}}_{l^{\prime},m^{\prime}}(k) \rangle=\frac{\delta(\hat{\mathcal{O}}_{lm}(k))_{J}}{\delta\hat{\mathcal{J}}_{ l^{\prime}m^{\prime}}(k)}\bigg{|}_{J=0}=-\delta^{l}_{l^{\prime}}\delta^{m}_{m^{ \prime}}\;2i\beta_{+}\frac{C^{-}_{l}(k)}{C^{+}_{l}(k)}, \tag{100}\]
in which the two-point function is written in terms of the mode variables \((l,m)\). Here, we should note that the value of \(\langle\hat{\mathcal{O}}\rangle_{J}\) is scheme dependent, and we assume that a proper regularization procedure in the mode space \((l,m)\) exists so that (100) holds for the two-point function, as has been done in momentum space [54, 73, 100]. To go back to the complex coordinates on the celestial sphere, one can do the sum over spherical harmonics \(Y^{l}_{m}(z,\bar{z})\) and obtain
\[\langle\hat{\mathcal{O}}(z,\bar{z};k)\;\hat{\mathcal{O}}(z^{\prime},\bar{z}^{\prime};k)\rangle=-2i\beta_{+}\sum_{lm}\frac{C^{-}_{l}(k)}{C^{+}_{l}(k)}Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}). \tag{101}\]
Here we should note that the two-point function on the sphere is obtained by summing over the two discrete variables \((l,m)\), while one can also just do the sum over the variable \(m\) and obtain the \(l\)-mode source and one-point function
\[\hat{\mathcal{J}}_{l}(z,\bar{z};k)=\sum_{m}\phi^{+}_{l}(k)Y^{l}_{m}(z,\bar{z} )\qquad\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\rangle=-2i\beta_{+}\sum_{m} \phi^{-}_{l}(k)Y^{l}_{m}(z,\bar{z}), \tag{102}\]
and the corresponding two-point function is given by
\[\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\;\hat{\mathcal{O}}_{l}(z^{\prime}, \bar{z}^{\prime};k)\rangle=-2i\beta_{+}\sum_{m}\frac{C^{-}_{l}(k)}{C^{+}_{l}( k)}Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}). \tag{103}\]
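For completeness, the \(m\) sum in (103) can be carried out with the spherical-harmonic addition theorem, \(\sum_{m}Y^{l}_{m}(n)Y^{l*}_{m}(n^{\prime})=\frac{2l+1}{4\pi}P_{l}(\cos\gamma)\); the short sketch below (ours) assumes the harmonic product in (103) is to be read with this conjugation convention.

```python
# l-mode two-point function as a function of the angular separation gamma.
import numpy as np
from scipy.special import eval_legendre

def two_point_l(cos_gamma, ratio_l, beta_plus, l):
    # ratio_l: the value of C_l^-(k)/C_l^+(k) for this l
    return -2j * beta_plus * ratio_l * (2*l + 1) / (4*np.pi) * eval_legendre(l, cos_gamma)
```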
#### Two-point Function
To study the dictionary for flat space in a more precise way, we consider a generic \(k\) mode \(f(\tau,\rho,z,\bar{z};k)\) for the on-shell field \(\Phi(\tau,\rho,z,\bar{z})\) defined in (3.23), or equivalently
\[f(\tau,\rho,z,\bar{z};k)=\sum_{lm}\int d\omega\,f_{\omega,k,l,m}(\tau,\rho,z,\bar{z})\tilde{\Phi}(\omega,k,l,m). \tag{104}\]
Following the notation in (2.14), we choose to decompose the \(k\) mode into the spatial modes, therefore \(f(\tau,\rho,z,\bar{z};k)\) now takes the form
\[f(\tau,\rho,z,\bar{z};k) = \sum_{lm}\tilde{\Phi}(\tau,k,l,m)\phi_{l}(\rho;k)Y^{l}_{m}(z,\bar{z}) \tag{105}\] \[= \sum_{lm}(a^{+}_{lm}(k)f_{+}(\tau,k)+a^{-}_{lm}(k)f_{-}(\tau,k))\;(\phi_{l}(\rho;\beta_{+})+\phi_{l}(\rho;\beta_{-}))Y^{l}_{m}(z,\bar{z}) \tag{106}\] \[= f_{+}(\tau,k)\phi(\rho,z,\bar{z};\beta_{+})+f_{-}(\tau,k)\tilde{\phi}(\rho,z,\bar{z};\beta_{+}) \tag{107}\] \[\quad+f_{+}(\tau,k)\phi(\rho,z,\bar{z};\beta_{-})+f_{-}(\tau,k)\tilde{\phi}(\rho,z,\bar{z};\beta_{-}), \tag{108}\]
in which \(\tilde{\Phi}(\tau,k,l,m)\) and \(\phi_{l}(\rho;k)\) are modes that depend on \(\tau\) and \(\rho\). In the first line, we have summed over the two discrete variables \(l,m\) and also made the \(\tau\)-mode \(l,m\) dependent by introducing the coefficients \(a^{\pm}_{lm}(k)\)12. They are determined by the initial data. In the third line, we rearrange them into the \(\tau\)-mode functions and highlight their asymptotic behaviour according to \(\beta_{\pm}\). The functions \(\phi(\rho,z,\bar{z};\beta_{\pm})\) and \(\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm})\) are given by
Footnote 12: Here, we should note that coefficients \(a^{\pm}_{lm}(k)\) play the same role as \(\psi_{\pm}(p)\) in (2.21) or \(\psi(p)\) in (2.27) for fixed \(l,m\).
\[\phi(\rho,z,\bar{z};\beta_{\pm}) = \sum_{lm}a^{+}_{lm}(k)\;\phi_{l}(\rho;\beta_{\pm})Y^{l}_{m}(z,\bar {z}) \tag{109}\] \[\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm}) = \sum_{lm}a^{-}_{lm}(k)\;\phi_{l}(\rho;\beta_{\pm})Y^{l}_{m}(z,\bar {z}). \tag{110}\]
Moreover, using the asymptotic expansion (C.3) for \(\phi_{l}(\rho;\beta_{\pm})\) we obtain the leading contribution for \(\phi(\rho,z,\bar{z};\beta_{\pm})\) and \(\tilde{\phi}(\rho,z,\bar{z};\beta_{\pm})\) written as
\[\phi^{\pm}(z,\bar{z};k) =\sum_{lm}a^{+}_{lm}(k)\;\phi^{\pm}_{l}(k)Y^{l}_{m}(z,\bar{z})\] (C.18) \[\tilde{\phi}^{\pm}(z,\bar{z};k) =\sum_{lm}a^{-}_{lm}(k)\;\phi^{\pm}_{l}(k)Y^{l}_{m}(z,\bar{z}).\] (C.19)
Now, given the above asymptotic expansion, we rewrite the flat/CFT dictionary (3.40) into
\[\mathcal{J}(z,\bar{z};k) =\sum_{lm}a^{-}_{lm}(k)\hat{\mathcal{J}}_{lm}(k)Y^{l}_{m}(z,\bar{z}) \langle\mathcal{O}(z,\bar{z};k)\rangle =\sum_{lm}a^{+}_{lm}(k)\langle\hat{\mathcal{O}}_{lm}(k)\rangle Y^{l}_{m}(z,\bar{z}),\] (C.20) \[\tilde{\mathcal{J}}(z,\bar{z};k) =\sum_{lm}a^{+}_{lm}(k)\hat{\mathcal{J}}_{lm}(k)Y^{l}_{m}(z,\bar{z}) \langle\tilde{\mathcal{O}}(z,\bar{z};k)\rangle =\sum_{lm}a^{-}_{lm}(k)\langle\hat{\mathcal{O}}_{lm}(k)\rangle Y^{l}_{m}(z,\bar{z}),\] (C.21)
from which we can see there is a pair of sources and one-point functions \(\{\mathcal{J},\mathcal{O}\}\), \(\{\tilde{\mathcal{J}},\tilde{\mathcal{O}}\}\), and they are combinations of the source and one-point functions introduced in the AdS/CFT dictionary. Here we should note that the sources and one-point functions \(\{\mathcal{J},\mathcal{O}\}\), \(\{\tilde{\mathcal{J}},\tilde{\mathcal{O}}\}\) now become physical and their existence does not rely on the AdS/CFT dictionary, i.e., one could study them without writing them in terms of the AdS modes \(\{\hat{\mathcal{J}}_{lm},\hat{\mathcal{O}}_{lm}\}\). Given the above dictionary, one can deduce the two-point function
\[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\tilde{\mathcal{O}}(z^{ \prime},\bar{z}^{\prime};k)\rangle =\frac{1}{N_{k}}\sum_{lm}\frac{a^{-}_{lm}(k)}{a^{+}_{lm}(k)} \langle\hat{\mathcal{O}}_{lm}(k)\hat{\mathcal{O}}_{l,m}(k)\rangle Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}),\] (C.22) \[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{ \prime};k)\rangle =\frac{1}{N_{k}}\sum_{lm}\frac{a^{+}_{lm}(k)}{a^{-}_{lm}(k)} \langle\hat{\mathcal{O}}_{lm}(k)\hat{\mathcal{O}}_{l,m}(k)\rangle Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}),\] (C.23)
in which the \((l,m)\)-mode two-point functions are given in (C.7). Here we should note that \(a^{-}_{lm}/a^{+}_{lm}=a^{+}_{lm}/a^{-}_{lm}=0\) if \(a^{-}_{lm}=0\) or \(a^{+}_{lm}=0\).
During the calculation, we assume the coefficients \(a_{lm}(k)\) determined by the initial data are \((l,m)\) dependent. In fact, we can simplify the coefficients if there is a rotational symmetry for the solution on the sphere; the coefficients will then be \(m\) independent and we label them as \(a_{l}(k)\). In this case, the \(k\) mode will be written as
\[f(\tau,\rho,z,\bar{z};k) =\sum_{lm}(a^{+}_{l}(k)f_{+}(\tau,k)+a^{-}_{l}(k)f_{-}(\tau,k)) \;(\phi_{l}(\rho;\beta_{+})+\phi_{l}(\rho;\beta_{-}))Y^{l}_{m}(z,\bar{z}).\] (C.24)
The flat/CFT dictionary remains the same, while the source and one-point function will be written in the shorter form
\[\mathcal{J}(z,\bar{z};k) =\sum_{l}a^{-}_{l}(k)\hat{\mathcal{J}}_{l}(z,\bar{z};k) \langle\mathcal{O}(z,\bar{z};k)\rangle =\sum_{l}a^{+}_{l}(k)\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\rangle,\] (C.25) \[\tilde{\mathcal{J}}(z,\bar{z};k) =\sum_{l}a^{+}_{l}(k)\hat{\mathcal{J}}_{l}(z,\bar{z};k) \langle\tilde{\mathcal{O}}(z,\bar{z};k)\rangle =\sum_{l}a^{-}_{l}(k)\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\rangle,\] (C.26)
in which \(\{\hat{\mathcal{J}}_{l},\hat{\mathcal{O}}_{l}\}\) are the \(l\)-mode source and one-point function defined in (C.9). As for the two-point function, following the standard functional derivative procedure, we have
\[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\tilde{\mathcal{O}}(z^{\prime},\bar{z}^{\prime};k)\rangle =\frac{\delta\langle\tilde{\mathcal{O}}(z,\bar{z};k)\rangle_{J}}{\delta\tilde{\mathcal{J}}(z^{\prime},\bar{z}^{\prime};k)}\bigg{|}_{J=0}=\frac{1}{N_{k}}\sum_{l}\frac{a^{-}_{l}(k)}{a^{+}_{l}(k)}\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\hat{\mathcal{O}}_{l}(z^{\prime},\bar{z}^{\prime};k)\rangle,\] (C.27) \[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{\prime};k)\rangle =\frac{\delta\langle\mathcal{O}(z,\bar{z};k)\rangle_{J}}{\delta\mathcal{J}(z^{\prime},\bar{z}^{\prime};k)}\bigg{|}_{J=0}=\frac{1}{N_{k}}\sum_{l}\frac{a^{+}_{l}(k)}{a^{-}_{l}(k)}\langle\hat{\mathcal{O}}_{l}(z,\bar{z};k)\hat{\mathcal{O}}_{l}(z^{\prime},\bar{z}^{\prime};k)\rangle,\] (C.28)
in which the \(l\)-mode two-point functions on the right-hand side are given by (C.10). From the above discussion, one can see that it is not possible to simplify the coefficients \(a_{l}(k)\) further and make them \(l\) independent, since otherwise the pair of sources and one-point functions would become linearly dependent and reduce to one copy.
Boundary-Bulk Propagator
Following the study of the mode expansion of the fields, we know that the source of the fields can be expanded by the spherical harmonics on the sphere with coefficients \(a^{\pm}_{lm}(k)\), written as
\[\mathcal{J}(z,\bar{z};k) =\tilde{\phi}^{+}(z,\bar{z};k)=\sum_{lm}a^{-}_{lm}(k)Y^{l}_{m}(z, \bar{z}),\] (C.29) \[\tilde{\mathcal{J}}(z,\bar{z};k) =\phi^{+}(z,\bar{z};k)=\sum_{lm}a^{+}_{lm}(k)Y^{l}_{m}(z,\bar{z}).\] (C.30)
Therefore, with the help of the bulk-boundary propagator \(K(\rho,z;z^{\prime})\), one can then write the spatial modes \(\phi(\rho,z,\bar{z};k)\) and \(\tilde{\phi}(\rho,z,\bar{z};k)\) in the form
\[\phi(\rho,z,\bar{z};k) =\int dz^{\prime}d\bar{z}^{\prime}K(\rho,z;z^{\prime})\phi^{+}(z^{ \prime},\bar{z}^{\prime};k)=\sum_{lm}a^{+}_{lm}(k)\int dz^{\prime}d\bar{z}^{ \prime}K(\rho,z;z^{\prime})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}),\] \[\tilde{\phi}(\rho,z,\bar{z};k) =\int dz^{\prime}d\bar{z}^{\prime}K(\rho,z;z^{\prime})\tilde{\phi }^{+}(z^{\prime},\bar{z}^{\prime};k)=\sum_{lm}a^{-}_{lm}(k)\int dz^{\prime}d \bar{z}^{\prime}K(\rho,z;z^{\prime})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}).\]
Given such an expression, together with the flat/CFT dictionary, the one-point functions are now deduced to be
\[\langle\mathcal{O}(z,\bar{z};k)\rangle =-2i\beta_{+}\phi^{-}(z,\bar{z};k)=-\frac{2i\beta_{+}}{\pi}\sum_{lm}a^{+}_{lm}(k)\int dz^{\prime}d\bar{z}^{\prime}\frac{1}{|z-z^{\prime}|^{2\Delta_{k}}}Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}),\] (C.31) \[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\rangle =-2i\beta_{+}\tilde{\phi}^{-}(z,\bar{z};k)=-\frac{2i\beta_{+}}{\pi}\sum_{lm}a^{-}_{lm}(k)\int dz^{\prime}d\bar{z}^{\prime}\frac{1}{|z-z^{\prime}|^{2\Delta_{k}}}Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}).\] (C.32)
Moreover, by taking the functional variation with respect to the sources \(\mathcal{J}(z,\bar{z};k)\) and \(\tilde{\mathcal{J}}(z,\bar{z};k)\), one should be able to obtain the two-point functions. The functional variation between the one-point function and the source can be transformed into a variation between spherical harmonics, since both the operator \(\mathcal{O}\) and the source \(\mathcal{J}\) are now written in terms of the harmonic functions \(Y^{l}_{m}\). First, as an approximation, we assume that the boundary-bulk propagator is a function that does not depend on the spherical harmonics; then the two-point functions become
\[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{\prime};k)\rangle =\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{+}_{lm}(k)}{a^{-}_{lm}(k)}\frac{c_{k}}{|z-z^{\prime}|^{2\Delta_{k}}},\] \[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\tilde{\mathcal{O}}(z^{\prime},\bar{z}^{\prime};k)\rangle =\frac{1}{N_{k}}\sum_{l\neq 0,m}\frac{a^{-}_{lm}(k)}{a^{+}_{lm}(k)}\frac{c_{k}}{|z-z^{\prime}|^{2\Delta_{k}}},\] (C.33)
in which \(c_{k}=-2i\beta_{+}/\pi\) is the renormalised factor. Now, we will determine the functional variation in a more precise way by decomposing the bulk-boundary propagator into harmonic modes
\[K(\rho,z;z^{\prime})=\sum_{lm}K_{lm}(\rho)Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{ \prime},\bar{z}^{\prime})\] (C.34)
in which the functions \(K_{lm}(\rho)\) can be treated as the coefficients. Given such a decomposition, the one-point function can be written in the form
\[\mathcal{O}(z,\bar{z};k) =-2i\beta_{+}\sum_{lm}a^{+}_{lm}(k)\int dz^{\prime}d\bar{z}^{ \prime}K_{lm}Y^{l}_{m}(z,\bar{z})\left(Y^{l}_{m}(z^{\prime},\bar{z}^{\prime} )\right)^{2},\] (C.35) \[\tilde{\mathcal{O}}(z,\bar{z};k) =-2i\beta_{+}\sum_{lm}a^{-}_{lm}(k)\int dz^{\prime}d\bar{z}^{ \prime}K_{lm}Y^{l}_{m}(z,\bar{z})\left(Y^{l}_{m}(z^{\prime},\bar{z}^{\prime} )\right)^{2},\] (C.36)
in which \(K_{lm}=\rho^{\Delta}K_{lm}(\rho)\). Therefore, by calculating the functional variation with respect to the spherical harmonics \(Y^{l}_{m}\), one then obtains the two-point functions
\[\langle\mathcal{O}(z,\bar{z};k)\mathcal{O}(z^{\prime},\bar{z}^{\prime};k)\rangle =-\frac{4i\beta_{+}}{N_{k}}\sum_{lm}\frac{a^{+}_{lm}(k)}{a^{-}_{lm}(k)}K_{lm}Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime})\] (C.37) \[\langle\tilde{\mathcal{O}}(z,\bar{z};k)\tilde{\mathcal{O}}(z^{\prime},\bar{z}^{\prime};k)\rangle =-\frac{4i\beta_{+}}{N_{k}}\sum_{lm}\frac{a^{-}_{lm}(k)}{a^{+}_{lm}(k)}K_{lm}Y^{l}_{m}(z,\bar{z})Y^{l}_{m}(z^{\prime},\bar{z}^{\prime}).\] (C.38)
One can check that this expression is equivalent to the result obtained from the mode-analysis calculation by setting
\[4i\beta_{+}K_{lm}\equiv\delta^{l}_{l^{\prime}}\delta^{m}_{m^{\prime}}\langle\hat{\cal O}_{lm}(k)\hat{\cal O}_{l^{\prime}m^{\prime}}(k)\rangle,\] (C.39)
or equivalently we have
\[2K_{lm}=\frac{C_{l}^{-}(k)}{C_{l}^{+}(k)}.\] (C.40) |
2309.14114 | Hybrid Strangeon Stars | It was conjectured that the basic units of the ground state of bulk strong
matter may be strange-clusters called strangeons, and they can form self-bound
strangeon stars that are highly compact. Strangeon stars can develop a strange
quark matter (SQM) core at high densities, particularly in the
color-flavor-locking phase, yielding a branch of hybrid strangeon stars. We
explore the stellar structure and astrophysical implications of hybrid
strangeon stars. We find that hybrid strangeon stars can meet various
astrophysical constraints on pulsar masses, radii, and tidal deformabilities.
Finally, we show that the strangeon-SQM mixed phase is not preferred if the
charge-neutrality condition is imposed at the strangeon-SQM transition region. | Chen Zhang, Yong Gao, Cheng-Jun Xia, Renxin Xu | 2023-09-25T13:13:03Z | http://arxiv.org/abs/2309.14114v2 | # Hybrid Strangeon Stars
###### Abstract
It was conjectured that the basic units of the ground state of bulk strong matter may be strange-clusters called strangeons, and they can form self-bound strangeon stars that are highly compact. Strangeon stars can develop a strange quark matter (SQM) core at high densities, particularly in the color-flavor-locking phase, yielding a branch of hybrid strangeon stars. We explore the stellar structure and astrophysical implications of hybrid strangeon stars. We find that hybrid strangeon stars can meet various astrophysical constraints on pulsar masses, radii, and tidal deformabilities. Finally, we show that the strangeon-SQM mixed phase is not preferred if the charge-neutrality condition is imposed at the strangeon-SQM transition region.
## I Introduction
The detection of gravitational waves (GWs) from the coalescence of compact binaries by LIGO/Virgo collaborations [1; 2; 3; 4; 5; 6; 7] has greatly improved our knowledge of black holes and compact stars. They offer unique opportunities to probe unconventional QCD matter phases, such as quark matter and strangeon matter.
Quark matter (QM), a state comprised of deconfined free-flowing quarks, can possibly exist inside the neutron star core (i.e. conventional hybrid stars [8; 9]). If it is stable at zero pressure, either in the form of strange quark matter (SQM) [10; 11; 12; 13] or up-down quark matter (\(ud\)QM) [14], it can constitute an entire quark star [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] or the crust (i.e. inverted hybrid stars [28]), both with potentially distinct astrophysical implications [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. Effects from QCD interactions such as color superconductivity and perturbative QCD (pQCD) corrections can help quark stars meet various astrophysical constraints [44; 45; 46]. Generally, it is expected that at very high densities quark matter should be in the color-flavor-locking phase (CFL), where \(u,d,s\) quarks form Cooper pairs antisymmetrically in color-flavor space with equal fractions via the attractive one-gluon exchange channel, providing a lowered energy state.
Strangeon matter (SM) is similar to strange quark matter in that both are composed of a nearly equal number of \(u,d,s\) quarks [47; 48; 49; 50]. However, strangeon matter has quarks localized as clusters in a globally solid state, due to the large masses of and the strong coupling between strangeons. Strangeon stars [47; 48; 49; 50; 51; 52; 53; 54; 55; 56] composed of strangeon matter have an intrinsically stiff equation of state (EOS) and large compactness, and they had already been proposed to support massive pulsars (\(\gtrsim 2M_{\odot}\)[50]) before the announcement of the first massive pulsar PSR J1614-2230 [57]. Recently, we have shown that all strangeon stars are compact enough to feature a photonsphere that is essential to the generation of GW echoes [58].
The transition from strangeon matter to strange quark matter is likely to occur, considering such "deconfinement" originates from a shrinking of strangeon lattice spacing as density or pressure increases so that the lattice constant becomes smaller than the radius of individual quark bags, as described by the linked bag model in Ref. [49]. This gives rise to a new type of stellar objects, the _Hybrid Strangeon Stars_, consisting of a strangeon crust and a strange quark matter core. Pure strangeon stars can form from neutron stars absorbing strangeon nuggets, or quantum nucleation in the interior. If SQM is more stable than SM at some density, then the same process can take place and form hybrid strangeon stars directly or through the SQM quantum nucleation inside strangeon stars. Such
a first-order phase transition requires the center pressure to be larger than some critical value at the corresponding central chemical potential. Such a lift of the center pressure beyond the critical point can happen through spin-down, accretion, or mergers of strangeon stars.
As for the organization of this paper, we first introduce the EOSs of SM and SQM, and constrain the EOS parameters from the stability considerations. Then, with Maxwell constructions where a sharp interface is assumed, we solve the hybrid stellar structures and study their compatibility with astrophysical constraints. Finally, we explore the possibility of mixed phase (Gibbs construction) where the transition region is with mixed SM and SQM rather than a sharp interface.
## II Equations of State
For the quark matter sector, we adopt the unified treatment of interacting quark matter that was recently developed in [46] and later applied in several studies [59; 60; 61; 62; 63; 64].
Referring to [46], we first rewrite the thermodynamic potential \(\Omega\) of the superconducting quark matter [65; 66; 67; 68] in a general form with the pQCD correction [69] included:
\[\begin{split}\Omega=&-\frac{\xi_{4}}{4\pi^{2}}\mu^{4 }+\frac{\xi_{4}(1-a_{4})}{4\pi^{2}}\mu^{4}-\frac{\xi_{2a}\Delta^{2}-\xi_{2b}m_ {s}^{2}}{\pi^{2}}\mu^{2}\\ &-\frac{\mu_{e}^{4}}{12\pi^{2}}+B,\end{split} \tag{1}\]
where \(\mu\) and \(\mu_{e}\) are the respective average quark and electron chemical potentials. The first term represents the unpaired free quark gas contribution. The second term with \((1-a_{4})\) represents the pQCD contribution from one-gluon exchange for gluon interaction to \(O(\alpha_{s}^{2})\) order. To phenomenologically account for higher-order contributions, we can vary \(a_{4}\) from \(a_{4}=1\), corresponding to a vanishing pQCD correction, to very small values where these corrections become large [69; 8; 45]. The term with \(m_{s}\) accounts for the correction from the finite strange quark mass if applicable, where \(m_{s}=95\pm 5\,\mathrm{MeV}\)[70], and we choose \(m_{s}=95\,\mathrm{MeV}\) as its benchmark value. The term with the gap parameter \(\Delta\) represents the contribution from color superconductivity. \((\xi_{4},\xi_{2a},\xi_{2b})\) represents different state of color-superconducting phases. \(B\) is the effective bag constant that accounts for the non-perturbative contribution from the QCD vacuum.
The corresponding equation of state was derived in Ref. [46]:
\[P=\frac{1}{3}(\rho-4B)+\frac{4\lambda^{2}}{9\pi^{2}}\left(-1+\mathrm{sgn}( \lambda)\sqrt{1+3\pi^{2}\frac{(\rho-B)}{\lambda^{2}}}\right), \tag{2}\]
where
\[\lambda=\frac{\xi_{2a}\Delta^{2}-\xi_{2b}m_{s}^{2}}{\sqrt{\xi_{4}a_{4}}}. \tag{3}\]
Note that \(\mathrm{sgn}(\lambda)\) represents the sign of \(\lambda\). The chemical potential (per baryon number) has the following form:
\[\mu_{\mathrm{QM}}=\frac{3\sqrt{2}}{(a_{4}\xi_{4})^{1/4}}\sqrt{[(P+B)\pi^{2}+ \lambda^{2}]^{1/2}-\lambda}\,. \tag{4}\]
Taking the zero pressure limit of \(\mu_{\mathrm{QM}}\), we obtain the energy per baryon number, which can be converted into the following form:
\[\left(\frac{E}{A}\right)_{\mathrm{QM}}=\frac{3\sqrt{2}\pi}{(\xi_{4}a_{4})^{1 /4}}\frac{B^{1/4}}{\sqrt{(\lambda^{2}/B+\pi^{2})^{1/2}+\lambda/\sqrt{B}}}, \tag{5}\]
where we see a larger \(\lambda\) lowers the energy as expected.
We have verified that a hybrid strangeon star with a core of unpaired strange quark matter (\(\Delta=0\)) cannot support \(2M_{\odot}\) while retaining radial stability, which requires \(\partial M/\partial P_{c}>0\). This is not a surprise considering that the strangeon EOS is much stiffer than that of unpaired SQM, and a transition to a much softer EOS is likely to induce radial instabilities due to insufficient degeneracy pressure to resist the gravitational pulling. We can thus stabilize the hybrid strangeon stars by introducing color-superconductivity effects to stiffen the SQM EOS 1. Therefore, in the following discussions, we specify the SQM phase to be CFL (\(\xi_{4}=3,\xi_{2a}=3,\xi_{2b}=3/4\)), considering the shared flavor composition and the fact that color superconductivity stiffens the EOSs. Besides, we set \(a_{4}=1\) (no extra QCD corrections) for simplicity.
Footnote 1: Such instabilities can also be cured by considering the scenario of slow SM-SQM conversions (with respect to the radial-oscillation timescale) [71; 72]. Considering that both SM and CFL have the three-flavor symmetry, we expect that the surface tension of the SM-SQM interface is not large; thus a fast conversion is preferred, and correspondingly the stability criterion remains \(\partial M/\partial P_{c}>0\), i.e. the star mass increases with center pressure.
Following previous studies [47; 48; 49; 50; 51; 52; 53; 54; 55; 56], we assume the interaction potential between two strangeons is described
by the Lennard-Jones potential [73]:
\[U(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r} \right)^{6}\right], \tag{6}\]
where \(r\) is the distance between two strangeons, and \(\sigma\) is the distance when \(U(r)=0\). The parameter \(\epsilon\) describes the depth of the interaction potential between strangeons. A larger \(\epsilon\) will then indicate a larger repulsive force at short range and thus maps to a stiffer EOS.
The mass density \(\rho\) and pressure \(P\) of zero-temperature dense matter composed of strangeons, derived from the Lennard-Jones potential [50], read
\[\rho = 2\epsilon\left(A_{12}\sigma^{12}n^{5}-A_{6}\sigma^{6}n^{3} \right)+nN_{\rm q}m_{\rm q}\,, \tag{7}\] \[P = n^{2}\frac{{\rm d}(\rho/n)}{{\rm d}n}=4\epsilon\left(2A_{12} \sigma^{12}n^{5}-A_{6}\sigma^{6}n^{3}\right)\,, \tag{8}\]
where \(A_{12}=6.2\), \(A_{6}=8.4\), and \(n\) is the number density of strangeons. \(N_{\rm q}m_{\rm q}\) is the mass of a strangeon with \(N_{\rm q}\) being the number of quarks in a strangeon and \(m_{q}\) being the average constituent quark mass. The contributions from degenerate electrons and vibrations of the lattice are neglected due to their expected smallness.
At the surface of strangeon stars, the pressure becomes zero, and we obtain the surface number density of strangeons as \(\left[A_{6}/(2A_{12}\sigma^{6})\right]^{1/2}\). For convenience, it is transformed into baryon number density, i.e.,
\[n_{\rm s}=\left(\frac{A_{6}}{2A_{12}}\right)^{1/2}\frac{N_{\rm q}}{3\sigma^{3}}\,, \tag{9}\]
so that the EOS can be rewritten into the following simpler form
\[\begin{split}\frac{\rho}{n_{s}}&=\frac{a}{9}\tilde {\epsilon}\left(\frac{1}{18}\bar{n}^{5}-\bar{n}^{3}\right)+m_{q}\bar{n},\\ \frac{P}{n_{s}}&=\frac{2}{9}\tilde{\epsilon}\left( \frac{1}{9}\bar{n}^{5}-\bar{n}^{3}\right),\end{split} \tag{10}\]
where \(a=A_{6}^{2}/A_{12}=8.4^{2}/6.2\approx 11.38\), \(\tilde{\epsilon}=\epsilon/N_{q}\) and \(\bar{n}=N_{q}\,n/n_{s}\). Note that \(\bar{n}=3\) at star surface where \(P=0\).
The chemical potential of strangeon matter can be derived via the thermodynamic relation \(\mu=(\rho+P)/n\). Note that to study its crossings with \(\mu_{\rm QM}\), one needs to further convert it to the chemical potential per baryon number
\[\mu_{\rm strangeon}=\frac{3\mu}{N_{q}}=3\frac{\rho/n_{s}+P/n_{s}}{\bar{n}}=3 m_{q}+a\tilde{\epsilon}(\frac{5}{54}\bar{n}^{4}-\bar{n}^{2}). \tag{11}\]
Referring to Eq. (10), we see that both the EOS \(P(\rho)\) and \(\mu_{B}(P)\) for strangeons depend only on the parameters \(n_{s}\) and \(\tilde{\epsilon}\), with the dependence on \(N_{q}\) absorbed. Taking the zero pressure limit of \(\mu_{\rm strangeon}\), we obtain the energy per baryon number:
\[\left(\frac{E}{A}\right)_{\rm strangeon}=3m_{q}-\frac{3a}{2}\tilde{\epsilon}, \tag{12}\]
where we see that \(E/A\) has no dependence on \(n_{s}\), decreases as \(\tilde{\epsilon}\) increases, and is always smaller than that of normal nucleons. Requiring a positive \(E/A\) (or a non-negative \(\rho\) at zero pressure) sets a theoretical bound: \(\epsilon/N_{q}\leq 2m_{q}/a\approx 54.5\,\)MeV.
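As a worked check of this bound (ours; it assumes the typical constituent quark mass \(m_{q}\simeq 310\) MeV, which the excerpt uses implicitly but does not state):

```latex
\left(\frac{E}{A}\right)_{\rm strangeon}=3m_{q}-\frac{3a}{2}\tilde{\epsilon}\geq 0
\quad\Longrightarrow\quad
\tilde{\epsilon}\leq\frac{2m_{q}}{a}\simeq\frac{2\times 310~{\rm MeV}}{11.38}\approx 54.5~{\rm MeV}.
```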
The transition pressure or density can be determined by the crossings of their chemical potentials. A necessary condition for such a chemical potential crossing is that the zero-pressure chemical potential (i.e., energy per baryon number \(E/A\)) of strangeon matter (Eq. (12)) is smaller than that of the CFL phase (Eq. (5)). We show the related parameter space as the blue-shaded bands of Fig. 1. We see that, overall, the existence of such a hybrid configuration prefers a relatively stiff strangeon EOS (large \(\tilde{\epsilon}\)) but a relatively soft CFL phase (large \(B\) or small \(\Delta\)).
On the other hand, the hybrid configuration would become radially unstable (\(\partial M/\partial P_{\rm c}<0\)) if the transition pressure \(P_{\rm trans}\) is too large [9], as we have also examined explicitly. Referring to Eq. (4) and Eq. (11), we see that the strangeon matter to SQM transition is more likely to occur at smaller \(P_{\rm trans}\) in the case of a smaller \(B\), a larger \(\Delta\) (stiffer SQM EOS), a smaller \(\tilde{\epsilon}\) (softer SM EOS) or smaller \(n_{s}\). These conditions compete with those from the stability condition mentioned in the previous paragraph, constraining the allowed parameter space.
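To make the crossing criterion concrete, here is a minimal numerical sketch (ours, not the authors' code) that locates the transition pressure from \(\mu_{\rm strangeon}=\mu_{\rm QM}\), using Eq. (10), Eq. (11) and Eq. (4); the constituent quark mass \(m_{q}=310\) MeV and the bracketing intervals are our assumptions, while the default parameter values follow the benchmarks in the text.

```python
import numpy as np
from scipy.optimize import brentq

HBARC = 197.327             # MeV fm, converts MeV/fm^3 to MeV^4
A6, A12 = 8.4, 6.2
a = A6**2 / A12             # ~ 11.38
m_q = 310.0                 # assumed constituent quark mass (MeV)

def P_strangeon(nbar, eps_t, ns):
    # Eq. (10): pressure in MeV/fm^3; eps_t = epsilon/N_q (MeV), ns in fm^-3
    return ns * (2.0/9.0) * eps_t * (nbar**5/9.0 - nbar**3)

def mu_strangeon(nbar, eps_t):
    # Eq. (11): strangeon chemical potential per baryon (MeV)
    return 3.0*m_q + a*eps_t*(5.0*nbar**4/54.0 - nbar**2)

def mu_QM(P, B, Delta, ms=95.0, a4=1.0, xi4=3.0, xi2a=3.0, xi2b=0.75):
    # Eq. (4) for the CFL phase; P and B given in MeV/fm^3
    lam = (xi2a*Delta**2 - xi2b*ms**2) / np.sqrt(xi4*a4)
    x = np.sqrt((P + B)*HBARC**3*np.pi**2 + lam**2) - lam
    return 3.0*np.sqrt(2.0)/(a4*xi4)**0.25 * np.sqrt(x)

def P_transition(eps_t=80.0/9.0, ns=0.22, B=60.0, Delta=80.0):
    nbar = lambda P: brentq(lambda x: P_strangeon(x, eps_t, ns) - P, 3.0, 30.0)
    f = lambda P: mu_strangeon(nbar(P), eps_t) - mu_QM(P, B, Delta)
    return brentq(f, 1e-3, 5e3)   # transition pressure in MeV/fm^3
```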
## III Astrophysical Implications
The stellar structure can be solved via the Tolman-Oppenheimer-Volkoff (TOV) equation [74; 75],
\[\begin{split}\frac{dm}{dr}&=4\pi\rho r^{2}\,,\\ \frac{dP}{dr}&=(\rho+P)\frac{m+4\pi Pr^{3}}{2mr-r^{ 2}},\end{split} \tag{13}\]
where the profiles \(P(r)\) and \(m(r)\) are solved as functions of the center pressure \(P_{\rm c}\). The radius \(R\) and physical mass \(M\) of the compact stars are determined by \(P(R)=0\) and
\(M=m(R)\), respectively. One then obtains the mass-radius relation \(M(R)\) of hybrid strangeon stars by solving the TOV equations together with the EOSs of the two matter phases, where the transition point is determined by the crossing of their chemical potentials, as introduced in the last section.
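A minimal sketch (ours, not the authors' code) of such a TOV integration, in geometrized units \(G=c=1\) and with the \(\lambda=0\) limit of Eq. (2), \(\rho=3P+4B\), as a stand-in EOS; a realistic run would splice the strangeon and CFL EOSs at the transition pressure.

```python
import numpy as np

def eos_rho(P, B=1.0):
    # lambda = 0 limit of Eq. (2): P = (rho - 4B)/3  =>  rho = 3P + 4B
    return 3.0 * P + 4.0 * B

def tov_rhs(r, y, eos):
    m, P = y
    rho = eos(P)
    dm = 4.0 * np.pi * rho * r**2                                 # Eq. (13)
    dP = (rho + P) * (m + 4.0*np.pi*P*r**3) / (2.0*m*r - r**2)    # Eq. (13)
    return np.array([dm, dP])

def solve_tov(Pc, eos=eos_rho, dr=1e-4, r0=1e-6):
    """RK4 integration outward from the centre; returns (R, M) at P(R) ~ 0."""
    r = r0
    y = np.array([4.0/3.0 * np.pi * eos(Pc) * r0**3, Pc])
    while y[1] > 1e-12 * Pc:
        k1 = tov_rhs(r, y, eos)
        k2 = tov_rhs(r + dr/2, y + dr/2*k1, eos)
        k3 = tov_rhs(r + dr/2, y + dr/2*k2, eos)
        k4 = tov_rhs(r + dr, y + dr*k3, eos)
        y = y + dr/6*(k1 + 2*k2 + 2*k3 + k4)
        r += dr
    return r, y[0]
```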
To compare with gravitational wave observations, we can further compute the dimensionless tidal deformability \(\Lambda=2k_{2}/(3C^{5})\), where \(C=M/R\) is the compactness and \(k_{2}\) is the Love number that characterizes the stars' response to external disturbances [76; 77; 78]. The Love number \(k_{2}\) can be determined by solving a function \(y(r)\) from a specific differential equation [79] and the TOV equation Eq. (13), with the boundary condition \(y(0)=2\). For hybrid configurations, the matching condition [80; 81]\(y(r_{d}^{+})-y(r_{d}^{-})=-4\pi r_{d}^{3}\Delta\rho_{d}/(m(r_{d})+4\pi r_{d}^{3 }P(r_{d}))\) should be imposed at \(r_{d}\) (i.e., the core radius and the star radius), where an energy density jump \(\Delta\rho_{d}\) occurs.
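The matching condition can be applied directly while integrating \(y(r)\) across a density jump; a one-function sketch (ours), with \(m_{rd}=m(r_{d})\) and \(P_{rd}=P(r_{d})\) taken from the TOV profile:

```python
import numpy as np

def match_y(y_minus, r_d, delta_rho, m_rd, P_rd):
    # y(r_d^+) = y(r_d^-) - 4 pi r_d^3 delta_rho / (m(r_d) + 4 pi r_d^3 P(r_d))
    return y_minus - 4.0*np.pi*r_d**3*delta_rho / (m_rd + 4.0*np.pi*r_d**3*P_rd)
```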
For illustration, we show various benchmark TOV solutions and corresponding tidal deformabilities in Fig. 2 for \(B=60,80\) MeV/fm\({}^{3}\) with \(\tilde{\epsilon}\) and \(\Delta\) choices satisfying \((E/A)_{\rm Strangeon}<(E/A)_{\rm CFL}\) (shaded bands in Fig. 1).
We see that all the benchmark examples shown in Fig. 2 satisfy NICER constraints 2, while the GW170817 constraints (\(\Lambda_{1.4M_{\odot}}\leq 800\)) can be met by hybrid strangeon stars with \((\tilde{\epsilon},n_{s}/{\rm fm}^{-3},\Delta/{\rm MeV})=(80/9,0.22,80)\) for \(B=60\,\)MeV/fm\({}^{3}\) (upper panels), and \((\tilde{\epsilon},n_{s}/{\rm fm}^{-3},\Delta/{\rm MeV})=(80/9,0.22,120)\), \((80/9,0.3,120)\) for \(B=80\,\)MeV/fm\({}^{3}\) (lower panels).
Footnote 2: Note that we show the NICER X-ray constraints in the graph but neglect them in the table considering the X-ray analyses of hybrid strangeon stars may be different from those of neutron stars.
The general features of correlations between constraints and parameters are summarized in Table 1. For example, as the second row of Table 1 summarizes, hybrid strangeon stars with small \(\tilde{\epsilon}\) (black lines) and \(n_{s}\) (thin lines), or large \(\Delta\) (darker colored lines) and small B (such as upper panels) tend to be radially unstable (\(\partial M/\partial P_{\rm c}<0\)), which means radial stabilities require CFL to be not too soft compared to the stiffness of strangeon EOS, considering \(\Delta\) and \(\epsilon/N_{q}\) signal the stiffness of each of the two matter phases. However, we also see that the \(M_{\rm TOV}\gtrsim 2M_{\odot}\) constraint [57] prefers overall stiff EOSs for both two matter phases (a large \(\tilde{\epsilon}\) or \(\Delta\)), while GW170817 tidal deformability constraint (\(\Lambda_{1.4M_{\odot}}\leq 800\)[4]) prefers the opposite at low center densities. These together set bounds on the allowed parameter space.
Figure 1: Allowed parameter space (blue-shaded) for the existence of hybrid strangeon stars from stability consideration \((E/A)_{\rm Strangeon}<(E/A)_{\rm CFL}\). (top) CFL bag constant \(B\) and (bottom) CFL superconductivity gap \(\Delta\) versus parameter \(\epsilon/N_{q}\) of strangeon matter. For the bottom sub-figure, the shaded region with lighter-colored contour lines represents larger bag constant, sampling \(B=60,80,100,120\,\)MeV/fm\({}^{3}\) (bottom to top).
To elaborate on the explicit layer structure, we dissect hybrid strangeon stars by showing the masses and radii of their CFL cores as functions of the centre pressure in Fig. 3. Here, we choose two benchmark examples of bag constant \(B=60,80\,\mathrm{MeV}/\mathrm{fm}^{3}\) with different CFL gaps and a fixed strangeon phase (\(n_{s}=0.22/\mathrm{fm}^{3},\tilde{\epsilon}=80/9\,\mathrm{MeV}\)). We find that, as a general feature, the compact stars are pure strangeon stars at low \(P_{c}\), and then develop a CFL core as \(P_{c}\) increases. At the maximum mass points, the strangeon crusts have widths of \(1\sim 5\) km and masses of \(0.1\sim 1\,M_{\odot}\), where a smaller bag constant or a smaller \(\Delta\) maps to a thicker crust. At the \(M=2\,M_{\odot}\) point, all cases map to hybrid strangeon stars, with a core of mass \(1.13\,(1.86)\,M_{\odot}\) for \(\Delta=100\,(120)\) MeV case when \(B=80\) MeV/fm\({}^{3}\), and a core of mass \(0.27\,(1.50)\,M_{\odot}\) for \(\Delta=60\,(80)\) MeV case when \(B=60\,\mathrm{MeV}/\mathrm{fm}^{3}\). At the \(M=1.4\,M_{\odot}\) point, for \(B=60\,(80)\,\mathrm{MeV}/\mathrm{fm}^{3}\), the \(\Delta=60\,(100)\) MeV case is a pure strangeon star, while the \(\Delta=80\,(120)\) MeV case is a hybrid strangeon star with a core of mass \(0.64\,(1.16)M_{\odot}\).
## IV Mixed phase
A strangeon-quark mixed phase is possible around the interface of the strangeon crust and the quark matter core, in analogy to the hadron-quark mixed phase in conventional hybrid neutron stars.

Figure 2: \(M\)-\(R\) (left) and \(\Lambda\)-\(M\) (right) of hybrid strangeon stars (solid lines) with \(\epsilon/N_{q}=80/9\approx 8.9\) (black), \(120/9\approx 13.3\) (blue) MeV, \(n_{s}=0.22\) (thin), \(0.30\) (thick) for the strangeon composition, and \(B=60\) (top), \(80\) (bottom) MeV/fm\({}^{3}\) for the CFL composition. Lines with darker colors denote larger \(\Delta\), sampling \(60\), \(80\) MeV for top panels and \(60\), \(100\), \(120\) MeV for bottom panels, respectively (no large-\(\Delta\) lines in top panels due to stability constraints; see Fig. 1). Dashed lines are pure strangeon star configurations. Shaded regions are constraints with \(90\%\) credibility from the NICER mission: PSR J0030+0451 (green) [82; 83] and PSR J0740+6620 (cyan) [84; 85]. The cyan-dotted vertical line in the right panels denotes the GW170817 \(\Lambda(1.4M_{\odot})\leq 800\) constraint [4].

Table 1: Correlations of constraints and the EOS parameters. A plus (minus) sign means positive (negative) correlation, while a slash means no correlation.
To construct the mixed phase of hybrid strangeon stars, we need \(\mu_{e}\neq 0\); thus the strange quark matter sector should be in either the normal unpaired phase (\(\Delta=0\))3 or the charged CFL phase [8], where \(s\) quarks no longer have an equal fraction as \(u,d\) quarks. We keep the flavor symmetry in the strangeon sector, considering its solid state with charge neutrality enforced, since the Compton wavelength of the dilute electrons is much larger than the scale of a strangeon.
Footnote 3: As aforementioned for the sharp transition, a normal strange quark matter core with a strangeon crust is not likely to be radially stable. We expect the situation may be alleviated in the mixed-phase scenario.
We adopt here the Gibbs construction for the mixed phase, as outlined in Ref. [89]. In this case, one may achieve global charge neutrality, with the pressures of both strangeon and quark matter being functions of the baryon and electron chemical potentials \(\mu_{B}\) and \(\mu_{e}\). The Gibbs condition for the equilibrium between the two phases (at zero temperature) is
\[P_{\rm SnP}(\mu_{B},\mu_{e})=P_{\rm QkP}(\mu_{B},\mu_{e})=P_{\rm MxP}(\mu_{B}, \mu_{e}), \tag{14}\]
where the pressure function for the strangeon phase \(P_{\rm SnP}(\mu_{B})\) can be inferred from Eq. (10) and Eq. (11), with the addition of background electrons, \(P_{\rm SnP}(\mu_{B},\mu_{e})=P_{\rm SnP}(\mu_{B})+\mu_{e}^{4}/(12\pi^{2})\). Besides, for the quark matter phase, \(P_{\rm QkP}\) can be inferred from Eq. (1) with the identities \(p=-\Omega\), \(\mu=\mu_{B}/3\). Their intersection yields the mixed phase \(P_{\rm MxP}(\mu_{B},\mu_{e})\), as shown in Fig. 4.
The global charge neutrality condition reads
\[(1-\chi)\ n_{c}^{\rm SnP}+\chi\ n_{c}^{\rm QkP}=0, \tag{15}\]
where \(n_{c}^{\rm SnP}\) and \(n_{c}^{\rm QkP}\) denote the total charge densities in
Figure 3: Mass (left) and radius (right) versus center pressure, \(P_{c}\), for hybrid strangeon stars with strangeon crusts of \(\{n_{s}=0.22/{\rm fm}^{3},\tilde{\epsilon}=80/9\,{\rm MeV}\}\), and CFL cores of \(\{B=60\,{\rm MeV}/{\rm fm}^{3},\)\(\Delta=60,80\) MeV\(\}\) (top) and \(\{B=80\,{\rm MeV}/{\rm fm}^{3},\)\(\Delta=100,120\) MeV\(\}\) (bottom). Darker color denotes larger \(\Delta\) values. Dashed lines denote pure strangeon stars. Solid lines denote hybrid strangeon stars. The dot-dashed lines denote the CFL cores. The right ends of the solid and dot-dashed lines are truncated at the corresponding maximum mass points.
strangeon phase and quark matter phase (either unpaired SQM or CFL) respectively, with
\[n_{c}^{\rm SnP} = -n_{c}^{\rm e}, \tag{16}\] \[n_{c}^{\rm QkP} = \frac{2}{3}n_{u}-\frac{1}{3}n_{d}-\frac{1}{3}n_{s}-n_{c}^{\rm e}, \tag{17}\]
where \(n_{c}^{\rm e}=\mu_{e}^{3}/(3\pi^{2})\), \(n_{u,d}=\mu_{u,d}^{3}/\pi^{2}\), and \(n_{s}=(\mu_{s}^{2}-m_{s}^{2})^{3/2}/\pi^{2}\), with \(\mu_{i}\) the quark chemical potential of flavor \(i\). \(\chi\) is the volume fraction of quark matter in the mixed phase, defined as \(\chi=V_{\rm QkP}/(V_{\rm QkP}+V_{\rm SnP})\).
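A short sketch (ours) of how Eqs. (15)-(17) can be evaluated; the beta-equilibrium relations \(\mu_{u}=(\mu_{B}-2\mu_{e})/3\) and \(\mu_{d}=\mu_{s}=(\mu_{B}+\mu_{e})/3\) are a standard assumption not written out in this excerpt, and units are MeV with \(\hbar=c=1\).

```python
import numpy as np

MS = 95.0   # strange quark mass (MeV)

def n_c_snp(mu_e):
    # Eq. (16): the strangeon phase carries only the electron charge
    return -mu_e**3 / (3.0 * np.pi**2)

def n_c_qkp(mu_B, mu_e, ms=MS):
    # Eq. (17) with the free-gas number densities quoted in the text
    mu_u = (mu_B - 2.0*mu_e) / 3.0
    mu_d = mu_s = (mu_B + mu_e) / 3.0
    n_u, n_d = mu_u**3/np.pi**2, mu_d**3/np.pi**2
    n_s = max(mu_s**2 - ms**2, 0.0)**1.5 / np.pi**2
    return (2.0*n_u - n_d - n_s)/3.0 - mu_e**3/(3.0*np.pi**2)

def chi(mu_B, mu_e):
    # volume fraction solving (1 - chi) n_SnP + chi n_QkP = 0, Eq. (15)
    a, b = n_c_snp(mu_e), n_c_qkp(mu_B, mu_e)
    return a / (a - b)
```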
We have examined various combinations of parameter sets for SQM in either the normal unpaired phase or the CFL phase, finding that the mixed phase \(P_{\rm MxP}(\mu_{B},\mu_{e})\) that satisfies the global charge neutrality condition only resides in a very tiny segment of the intersection line, with variations of \(\mu_{B}\) smaller than 1 MeV near the zero-\(\mu_{e}\) point, where \(\mu_{e}\) lifts to 6 \(\sim\) 8 MeV. Thus the mixed-phase region for hybrid strangeon stars is negligible, and all results should be approximately the same as those obtained from the Maxwell construction studied in the last section, i.e., the system is effectively reduced to one conserved charge due to the negligible contribution of electrons. As we have examined, introducing QCD corrections (\(a_{4}<1\)) will lift the intersecting \(\mu_{B}\) but does not help enlarge the charge-neutral region of the mixed phase. This matches the expectation that the flavor-symmetry-breaking effects are small in both the strangeon and SQM sectors, resulting in a very small \(\mu_{e}\) and a limited variation range when considering charge neutrality.
## V Summary and Discussion
We have explicitly shown the new possibility of a hybrid configuration of strangeon stars with a strange quark matter core and a thick strangeon crust. We also demonstrated their compatibility with astrophysical constraints using selected benchmark examples. It is shown that a mixed phase is not preferred for hybrid strangeon stars with a CFL core.
Hybrid strangeon stars can naturally accommodate the pulsar glitch phenomena as a result of starquakes in the thick strangeon crust [96; 97; 98; 99]. Besides, the large shear modulus change and density discontinuities at the crust-core interface are likely to result in large and distinct crust-core interfacial modes that can be probed by gravitational-wave observations [100]. We leave these for future studies.
**Acknowledgments.** C. Zhang is supported by the Institute for Advanced Study at The Hong Kong University of Science and Technology. C.J Xia is supported by the National Natural Science Foundation of China (Grant No. 12275234 and No. 12342027) and the National SKA Program of China (No. 2020SKA0120300). R.-X Xu
Figure 4: Pressure is plotted as a function of \(\mu_{B}\) and \(\mu_{e}\) for strangeon phase (green) and strange quark matter (red) of normal unpaired (top panel) and charged CFL phase (bottom panel) of \(\Delta=100\) MeV. The mixed phase sits in the intersection of the two surfaces. For illustration, here \(B=80\,{\rm MeV/fm^{3}}\), \(m_{s}=95\,{\rm MeV}\) for the SQM phase and \(\epsilon/N_{q}=80/9,n_{s}=0.3\,/{\rm fm^{3}}\) for strangeon phase.
is supported by the National SKA Program of China (2020SKA0120100).
|
2302.14266 | Precise measurements of $D$ meson lifetimes | We report the result of $D^0$ and $D^+$ lifetime measurement using $D^0\to
K^-\pi^+$ and $D^+\to K^-\pi^+\pi^+$ decays reconstructed using $72~{\rm
fb^{-1}}$ of data collected by the Belle II experiment at SuperKEKB
asymmetric-energy $e^{+}e^{-}$ collider. The results,
$\tau(D^0)=410.5\pm1.1({\rm stat})\pm0.8({\rm syst})~{\rm fs}$ and
$\tau(D^+)=1030.4\pm4.7({\rm stat})\pm 3.1({\rm syst})~{\rm fs}$, are the most
precise to date and are consistent with previous measurements. | N. K. Nisar | 2023-02-28T02:59:03Z | http://arxiv.org/abs/2302.14266v1 | # Precise measurements of \(D\) meson lifetimes
###### Abstract
We report the result of \(D^{0}\) and \(D^{+}\) lifetime measurement using \(D^{0}\to K^{-}\pi^{+}\) and \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) decays reconstructed using 72 fb\({}^{-1}\) of data collected by the Belle II experiment at SuperKEKB asymmetric-energy \(e^{+}e^{-}\) collider. The results, \(\tau(D^{0})=410.5\pm 1.1(\rm stat)\pm 0.8(\rm syst)\) fs and \(\tau(D^{+})=1030.4\pm 4.7(\rm stat)\pm 3.1(\rm syst)\) fs, are the most precise to date and are consistent with previous measurements.
**Particles and Nuclei International Conference - PANIC2021 ***
*** 5 - 10 September, 2021 ***
*** Online ***
## 1 Introduction
Accurate predictions of charm meson lifetimes are challenging due to strong-interaction contributions to the decay amplitudes, and the lifetimes are an important ingredient in many theoretical calculations as well as experimental measurements. The predictions must resort to effective models, such as the heavy-quark expansion [1, 2, 3, 4, 5, 6], and precise lifetime measurements provide excellent tests of such models. The lifetime measurement with early Belle II data will demonstrate the excellent vertexing capability of the Belle II detector, which is essential for future analyses of decay-time-dependent effects.
In this paper, we report the measurement of the \(D^{0}\) and \(D^{+}\) lifetimes by reconstructing \(D^{*+}\to(D^{0}\to K^{-}\pi^{+})\pi^{+}\) and \(D^{*+}\to(D^{+}\to K^{-}\pi^{+}\pi^{+})\pi^{0}\) decays using 72 fb\({}^{-1}\) of data collected by the Belle II detector [7] (charge-conjugate decays are implied throughout). The \(D^{*+}\) tag is required to suppress the combinatorial background. At SuperKEKB [8], the \(D^{*+}\) mesons are produced with a boost that displaces the \(D^{0}\) and \(D^{+}\) mesons. The decay time is estimated from the projection of this displacement onto the direction of momentum, \(\vec{p}\), as \(t=m_{D}\vec{L}\cdot\vec{p}/|\vec{p}|^{2}\), where \(m_{D}\) is the known mass of the relevant \(D\) meson [9]. The uncertainty in the decay time, \(\sigma_{t}\), is estimated by propagating the uncertainties in \(\vec{L}\) and \(\vec{p}\), including their correlations.
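A minimal sketch (ours) of this decay-time estimator and its first-order uncertainty propagation; the \(6\times 6\) covariance matrix \(V\) of \((\vec{L},\vec{p})\) is assumed to be supplied by the vertex fit.

```python
import numpy as np

def decay_time(L, p, m_D, V):
    """t = m_D (L . p) / |p|^2 and its propagated uncertainty sigma_t."""
    p2 = p @ p
    t = m_D * (L @ p) / p2
    dt_dL = m_D * p / p2                                   # dt/dL_i
    dt_dp = m_D * (L * p2 - 2.0 * (L @ p) * p) / p2**2     # dt/dp_i
    J = np.concatenate([dt_dL, dt_dp])
    return t, np.sqrt(J @ V @ J)
```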
## 2 Belle II detector
The Belle II detector is built around the interaction region of the SuperKEKB \(e^{+}e^{-}\) collider. The innermost part is a two-layer silicon-pixel detector (PXD); together with a four-layer double-sided silicon-strip detector (SVD) and a central drift chamber (CDC), it forms the tracking system. A time-of-propagation counter and an aerogel ring-imaging Cherenkov counter, covering the barrel and forward end-cap regions of the detector, respectively, are used for charged-particle identification. An electromagnetic calorimeter is used to reconstruct photons and electrons. All these components are kept inside a 1.5 T magnetic field. A dedicated system to identify \(K^{0}_{L}\) mesons and muons is installed in the outermost part of the detector.
## 3 Reconstruction
\(D^{0}\to K^{-}\pi^{+}\) and \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) candidates are reconstructed using charged tracks identified as kaons and pions. Each track is required to have at least one hit in the first layer of the PXD and one hit in the SVD. Tracks from \(D^{0}\,(D^{+})\) candidates need to have at least 20 (30) hits in the CDC. The low-momentum \(\pi^{+}\) from the \(D^{*+}\) decay must be a track consistent with originating from the interaction region, with at least one hit in the SVD and one hit in the CDC. The low-momentum \(\pi^{0}\) is reconstructed from two photons via \(\pi^{0}\to\gamma\gamma\). The \(D^{*+}\) momentum in the \(e^{+}e^{-}\) centre-of-mass frame is required to be greater than 2.5 (2.6) \(\mathrm{Ge\kern-1.0ptV}/c\) to suppress \(D^{0}\,(D^{+})\) mesons coming from bottom mesons. A global decay-chain vertex fit [10], constraining the tracks according to the decay topology, is applied, and only candidates with fit \(\chi^{2}\) probabilities larger than 0.01 are retained for further analysis. The mass of the \(D^{0}\) and \(D^{+}\) candidates is required to be \(1.75<m(K^{-}\pi^{+})<2.00\)\(\mathrm{Ge\kern-1.0ptV}/c^{2}\). The difference between the \(D^{*+}\) and \(D\) candidate masses, \(\Delta M\), must satisfy \(144.94<\Delta M<145.90\)\(\mathrm{Me\kern-1.0ptV}/c^{2}\) and \(138<\Delta M<143\)\(\mathrm{Me\kern-1.0ptV}/c^{2}\) for \(D^{0}\) and \(D^{+}\) candidates, respectively. By applying these selections,
approximately \(171\times 10^{3}\) signal \(D^{0}\) candidates with a signal purity of 99.8% are observed in the signal region, defined as \(1.815<m(K^{-}\pi^{+})<1.878\)\(\mbox{GeV}/c^{2}\). The signal region in \(m(K^{-}\pi^{+}\pi^{+})\) is defined as \(1.855<m(K^{-}\pi^{+}\pi^{+})<1.883\)\(\mbox{GeV}/c^{2}\) and contains approximately \(59\times 10^{3}\) signal candidates with a background contamination of 9%. Mass distributions of \(D^{0}\to K^{-}\pi^{+}\) and \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) candidates are shown in Fig. 1.
## 4 Lifetime extraction
Lifetimes are extracted using unbinned maximum-likelihood fits to the \((t,\sigma_{t})\) distributions of candidates populating the signal regions. The signal probability-density function (PDF) is the convolution of an exponential function in \(t\) with a resolution function that depends on \(\sigma_{t}\), multiplied by the PDF of \(\sigma_{t}\). The time constant of the exponential function is the lifetime. The PDF of \(\sigma_{t}\) is a histogram template derived directly from the signal region of the data. In the \(D^{0}\) case, the PDF of \(\sigma_{t}\) is obtained assuming that all candidates in the signal region are signal decays. In the \(D^{+}\) case, instead, the template is obtained from the candidates in the signal region after having subtracted the distribution of the sideband data. Simulation shows that a double (single) Gaussian with a common mean describes the resolution function for the \(D^{0}\,(D^{+})\). The mean of the resolution function is allowed to float in the fit to account for a possible bias in the determination of the decay time; the width is the per-candidate \(\sigma_{t}\) scaled by a free parameter \(s\) to account for a possible misestimation of the decay-time uncertainty.
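The convolution described above has a closed form, the exponentially modified Gaussian; a sketch of the per-candidate signal PDF (ours; the parameter names are our own):

```python
import numpy as np
from scipy.special import erfc

def signal_pdf(t, tau, mu, s, sigma_t):
    # exponential decay with lifetime tau, convolved with a Gaussian of
    # mean mu and width s * sigma_t (per-candidate resolution scaled by s)
    w = s * sigma_t
    z = (mu + w**2/tau - t) / (np.sqrt(2.0) * w)
    return 0.5/tau * np.exp((mu - t)/tau + w**2/(2.0*tau**2)) * erfc(z)
```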
In the \(D^{0}\) case, the per-mille-level fraction of background candidates in the signal region is neglected, and a systematic uncertainty is assigned for this. A sizable background contamination is accounted for in the \(D^{+}\) case using the data sidebands \(1.758<m(K^{-}\pi^{+}\pi^{+})<1.814\)\(\mbox{GeV}/c^{2}\) and \(1.936<m(K^{-}\pi^{+}\pi^{+})<1.992\)\(\mbox{GeV}/c^{2}\). The background PDF consists of a zero-lifetime component and two
Figure 1: Mass distributions of (top) \(D^{0}\to K^{-}\pi^{+}\) and (bottom) \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) candidates with fit projections overlaid. The vertical dashed and (for the bottom plot) dotted lines indicate the signal regions and the sideband, respectively.
exponential components, all convoluted with a Gaussian resolution function having a free mean and a width corresponding to \(s\sigma_{t}\). To better constrain the background parameters, a simultaneous fit to the candidates in the signal region and sideband is performed by constraining the background fraction obtained from a fit to \(m(K^{-}\pi^{+}\pi^{+})\).
The lifetime fits are tested on simulated samples and the returned lifetimes are consistent with the true values. The decay-time distributions of the data, with fit projections overlaid, are shown in Fig. 2. The \(D^{0}\) and \(D^{+}\) lifetimes are measured to be \(410.5\pm 1.1(\mathrm{stat})\pm 0.8(\mathrm{syst})\) fs and \(1030.4\pm 4.7(\mathrm{stat})\pm 3.1(\mathrm{syst})\) fs, respectively [11]. The results are consistent with their respective world average values [9]. The systematic uncertainties are summarized in Table 1 and the total systematic uncertainty is the sum in quadrature of the individual components.
## 5 Systematic Uncertainty
A small correlation between \(t\) and \(\sigma_{t}\) is neglected in our nominal fitting model. To quantify the effect, 1000 signal-only samples of simulated events with the same statistics as the data are fitted with the nominal PDF. Upper bounds of 0.16 fs and 0.39 fs on the average absolute deviation
\begin{table}
\begin{tabular}{l c c} \hline Source & \(\tau(D^{0}\to K^{-}\pi^{+})\) [fs] & \(\tau(D^{+}\to K^{-}\pi^{+}\pi^{+})\) [fs] \\ \hline Resolution model & 0.16 & 0.39 \\ Backgrounds & 0.24 & 2.52 \\ Detector alignment & 0.72 & 1.70 \\ Momentum scale & 0.19 & 0.48 \\ \hline Total & 0.80 & 3.10 \\ \hline \end{tabular}
\end{table}
Table 1: Systematic uncertainties.
Figure 2: Decay-time distributions of (top) \(D^{0}\to K^{-}\pi^{+}\) and (bottom) \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) candidates in their respective signal regions with fit projections overlaid.
of measured lifetimes from their true value is assigned as a systematic uncertainty due to imperfect resolution for \(D^{0}\to K^{-}\pi^{+}\) and \(D^{+}\to K^{-}\pi^{+}\pi^{+}\), respectively.
A background contamination of 0.2% is neglected in the signal region of \(D^{0}\to K^{-}\pi^{+}\). To estimate the effect on our result, 500 simulated samples of \(e^{+}e^{-}\) events with the same size and signal-to-background ratio as the data are fitted with the nominal model. The average absolute deviation of the fitted lifetime from the true value, after subtracting the uncertainty due to resolution modeling, is 0.24 fs and is assigned as the systematic uncertainty due to background contamination.
The background in the \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) signal region is modeled using the data sideband. A mismatch between data and simulation in the sideband may indicate an imperfect description of the background components in the signal region by the sideband. A set of 1000 samples, prepared using pseudoexperiments in the signal region and simulated data in the sideband that reproduce the same level of disagreement, is fitted, and the average absolute difference between the measured and simulated lifetimes, 2.52 fs, is assigned as the systematic uncertainty due to background modeling.
Misalignment of the tracking detectors may cause a bias in the decay-length determination and hence the lifetime. Two sources of uncertainty associated with the alignment are considered: the statistical precision and a possible systematic bias. The day-to-day difference between alignments in real data is used for the statistical contribution. Samples with the same statistics as the data are simulated by introducing realistic misalignment effects, and the difference between the lifetime residual for a given misalignment configuration and that from a perfectly aligned sample is assigned as a systematic uncertainty.
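The totals in Table 1 are the quadrature sums of the individual components; a one-line check (ours):

```python
import math

d0 = [0.16, 0.24, 0.72, 0.19]   # D0 components from Table 1 (fs)
dp = [0.39, 2.52, 1.70, 0.48]   # D+ components (fs)
print(round(math.hypot(*d0), 2), round(math.hypot(*dp), 2))   # 0.8 3.1
```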
## 6 Conclusions
In conclusion, the \(D^{0}\) and \(D^{+}\) lifetimes are measured using the data collected by the Belle II experiment corresponding to an integrated luminosity of 72 fb\({}^{-1}\). The results are the most precise to date and are consistent with previous measurements.
|
2309.16065 | Irradiated Disks May Settle into Staircases | Much of a protoplanetary disk is thermally controlled by irradiation from the
central star. Such a disk, long thought to have a smoothly flaring shape, is
unstable to the so-called 'irradiation instability'. But what's the outcome of
such an instability? In particular, is it possible that such a disk settles
into a shape that is immune to the instability? We combine Athena++ with a
simplified thermal treatment to show that passively heated disks settle into a
'staircase' shape. Here, the disk is punctuated by bright rings and dark gaps,
with the bright rings intercepting the lion's share of stellar illumination,
and the dark gaps hidden in their shadows. The optical surface of such a disk
(height at which starlight is absorbed) resembles a staircase. Although our
simulations do not have realistic radiative transfer, we use the RADMC3d code
to show that this steady state is in good thermal equilibrium. It is possible
that realistic disks reach such a state via ways not captured by our
simulations. In contrast to our results here, two previous studies have claimed
that irradiated disks stay smooth. We show here that they err on different
issues. The staircase state, if confirmed by more sophisticated radiative
hydrodynamic simulations, has a range of implications for disk evolution and
planet formation. | Taylor Kutra, Yanqin Wu, Yoram Lithwick | 2023-09-27T23:12:04Z | http://arxiv.org/abs/2309.16065v2 | # Irradiated Disks May Settle into Staircases
###### Abstract
Much of a protoplanetary disk is thermally controlled by irradiation from the central star. Such a disk, long thought to have a smoothly flaring shape, is unstable to the so-called 'irradiation instability'. But what's the outcome of such an instability? In particular, is it possible that such a disk settles into a shape that is immune to the instability? We combine Athena++ with a simplified thermal treatment to show that passively heated disks settle into a 'staircase' shape. Here, the disk is punctuated by bright rings and dark gaps, with the bright rings intercepting the lion's share of stellar illumination, and the dark gaps hidden in their shadows. The optical surface of such a disk (height at which starlight is absorbed) resembles a staircase. Although our simulations do not have realistic radiative transfer, we use the RADMC3d code to show that this steady state is in good thermal equilibrium. It is possible that realistic disks reach such a state via ways not captured by our simulations. In contrast to our results here, two previous studies have claimed that irradiated disks stay smooth. We show here that they err on different issues. The staircase state, if confirmed by more sophisticated radiative hydrodynamic simulations, has a range of implications for disk evolution and planet formation.
Footnote †: journal: ApJ

Taylor Kutra, Yanqin Wu, Yoram Lithwick
## 1 Introduction
Stellar irradiation is the dominant heat source in most parts of a protoplanetary disk, except perhaps for the inner sub-AU zone (see, e.g. D'Alessio et al., 1998). In such a 'passively heated' disk, small grains in the upper layers absorb stellar photons and re-radiate them as heat. Half of this is lost to space and the other half heats the disk and provides thermal pressure against vertical gravity.
Chiang & Goldreich (1997, hereafter CG97) proposed a simple power-law solution ('flared disk') for the equilibrium state of these passive disks. This is widely used, both to interpret the spectral energy distribution observed from real systems, and to construct physical models to study planet formation.
However, the stability of such solutions to thermal perturbations has recently been called into question. Two studies, Watanabe & Lin (2008); Wu & Lithwick (2021), argued that passive disks suffer from a linear instability, called the 'thermal wave instability' in Watanabe & Lin (2008), and termed more precisely as the 'irradiation instability' in Wu & Lithwick (2021, hereafter WL21). It arises because, when an annulus of a disk is thermally perturbed and acquires a larger scale-height, its optical surface (the altitude at which stellar irradiation is intercepted) can flare more strongly, allowing it to intercept even more starlight. The perturbation then grows, and propagates inwardly. At the core of the irradiation instability is the amount of stellar flux intercepted by a disk which depends on its vertical structure.
If this instability indeed operates and can grow to order-unity amplitudes, the disk may appear (to the central star) as a 'staircase', with steep star-facing edges ('risers') that receive almost all of the stellar flux, interspersed with 'treads' that are cast in their shadows. If so, this may explain the formation of gaps and rings commonly observed in resolved protoplanetary disks (e.g. Andrews et al., 2018; Huang et al., 2018). It would also affect dust migration and wafting, as well as angular momentum re-distribution in the disk. In
short, the irradiation instability may strongly impact the appearances and evolution of protoplanetary disks.
At the moment, we do not yet have a full understanding of the instability. By necessity, the 1-D simulation of Watanabe and Lin (2008) and the semi-analytical study by WL21 adopted two critical yet problematic assumptions. First, it is assumed that the disk remains vertically isothermal when it is thermally perturbed, while in reality any changes in stellar heating are communicated towards the disk midplane by radiative transfer and can be slow. Second, the disk is assumed to remain in hydrostatic equilibrium at all times. This ignores any possible feedback from hydrodynamics. Such issues are also present in the studies by Siebenmorgen and Heymann (2012); Ueda et al. (2021); Okuzumi et al. (2022). As a result, it remains unclear if the irradiation instability operates under realistic conditions (Wu and Lithwick, 2021; Pavlyuchenkov et al., 2022).
In fact, two recent studies by Melon Fuksman and Klahr (2022, hereafter MFK22) and Pavlyuchenkov et al. (2022, hereafter PMA22) have cast shadows on the irradiation instability.
MFK22 simulated the hydrodynamical response of irradiated disks using simplified radiative transfer (the so-called 'moments method'). They reported that a disk with an initial staircase-like profile smooths out into a featureless form after a few thermal times. This led them to conclude that the irradiation instability is suppressed by the combined effects of vertical thermal diffusion and fluid advection, exactly the two processes ignored by Watanabe and Lin (2008) and WL21. Disconcertingly, based on a hydro code with a different radiative treatment, PMA22 also reported that the irradiation instability is suppressed in their simulations.
To proceed, it is clear that more realistic treatments of radiative transfer are needed. Unfortunately, sophisticated radiation hydro codes currently under development (see, e.g. Jiang et al., 2014; Roth and Kasen, 2015) are resource intensive and remain proprietary. So in this work, we take a different approach to the problem. Instead of investigating the evolution of a disk under the instability, we limit ourselves to the following question: is there a steady state of the disk that is immune to the instability?
In this work, we will employ the Athena++ hydro code (Stone et al., 2008, 2019) to study irradiated disks, thereby relaxing the assumption of hydrostatic equilibrium. We will calculate the amount of starlight intercepted by a disk annulus self-consistently. However, instead of solving for the vertical transfer of this radiation, we will simply assume that it heats up different heights of the disk at a uniform rate. This is equivalent to assuming vertical isothermality when the disk is in hydrostatic equilibrium, so we will simply call it the 'vertical isothermal' assumption.
This assumption is only appropriate when the disk is in (vertical) thermal equilibrium. But since we are looking for the steady state, a state that satisfies both dynamical and thermal equilibria, we may be forgiven for adopting such an assumption. In other words, such an approach, while not conventional, is justified by its ends.
We will first describe our physical and numerical setup (SS2). We then present and analyze in detail the steady states that we obtain (SS3). We ponder the origin of such a steady state (SS4), before re-examining the claims by MFK22 and PMA22 in SS5. We briefly discuss observable signatures of irradiated disks in SS6, before concluding in SS7.
## 2 Simulation Setup
### Irradiation and Thermal Physics
Here, we discuss the key thermal physics inputs for our Athena++ simulations.
First, some terminology. Small dust grains at the optical surface intercept stellar photons - we call these 'visual/optical light' - and re-radiate the heat isotropically. We call the latter radiation 'infrared light'. Roughly half of it is lost to the space above and the other half travels downwards towards the disk midplane. The latter half is then absorbed by grains (mostly larger ones) and re-processed into an even colder blackbody radiation - we call this 'mm light', though the wavelength is typically shorter than a millimetre. Unless the opacity law is very steep (see Appendix A), much of the disk below the optical surface is largely isothermal when in thermal equilibrium.
We compute the optical light interception as follows. Let \(\tau_{r}\) be the radial optical depth to stellar photons,
\[\tau_{r}(r,\theta,\phi)\equiv\int_{0}^{r}\rho(r^{\prime},\theta,\phi)\,\kappa _{V}dr^{\prime}\,, \tag{1}\]
and \(\kappa_{V}\) is the Planck opacity for starlight (\(V\) for visual), and \(\rho\) is the gas density.
In the analytical work of WL21, stellar heating is determined based on a quantity called the 'optical surface', \(H(r)\), the disk height at which \(\tau_{r}=1\).1 Numerically, it is instead much easier to compute the heating as follows. The stellar flux at every point is
Footnote 1: By definition, \(H(r)/r\) cannot decrease with radius. The region of the disk that satisfies \(d(H/r)/dr=0\) is in shadow.
\[F_{\rm irr}(r,\theta,\phi)=\left(\frac{R_{*}}{r}\right)^{2}\,\sigma_{\rm SB}T _{*}^{4}\,e^{-\tau_{r}}\,, \tag{2}\]
where \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant, and \(R_{*}\) and \(T_{*}\) the stellar radius and surface temperature, respectively. One can then determine the amount of light intercepted by taking the divergence of the flux. In addition to numerical expediency, such a procedure naturally incorporates the case of a 'translucent optical surface' whereby starlight is deposited over a finite horizontal distance (this distance is called the 'smearing length' in WL21).
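A minimal sketch of how eqs. (1)-(2) can be discretized along a single radial ray is given below. It assumes a plain NumPy grid; the function and variable names are our own illustration, not the actual Athena++ implementation.

```python
import numpy as np

sigma_SB = 5.670e-5   # erg cm^-2 s^-1 K^-4
kappa_V = 1.0         # Planck opacity for starlight, cm^2 per g of gas

def irradiation_heating(r, rho, R_star, T_star, tau0=0.0):
    """Stellar flux along one radial ray (eqs. 1-2) and the absorbed
    power per unit volume, -div(F_irr).

    r, rho : 1-D arrays along the ray at fixed (theta, phi) [cm, g/cm^3]
    tau0   : optical depth of an unmodelled inner screen (eq. 12)
    """
    dr = np.gradient(r)                                         # local cell widths
    tau = tau0 + np.cumsum(rho * kappa_V * dr)                  # eq. (1)
    F = (R_star / r)**2 * sigma_SB * T_star**4 * np.exp(-tau)   # eq. (2)
    # For a radial, geometrically diluted flux, -div(F_irr) reduces to
    # the local absorption rate rho * kappa_V * F.
    return F, rho * kappa_V * F
```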
We now turn to thermal processing. Any change in heating is communicated to the disk below by radiative transfer, and this proceeds diffusively in the optically thick region. So a perturbed disk is not expected to remain vertically isothermal. However, as we are interested in the final steady state, we will continue making the simplifying assumption that different vertical layers of the disk are heated up in unison. Let us define a quantity called the 'forcing temperature', based on the height-integrated heating at a given radius,
\[\sigma_{\rm SB}T_{\rm force}^{4}(r)=\frac{1}{2}\int_{0}^{z_{\rm max}}(-\nabla \cdot F_{\rm irr})\,dz\,. \tag{3}\]
We can accurately evaluate this integral even with a small number of vertical grids. Let the blackbody cooling be \(\sigma_{\rm SB}T^{4}\). We stipulate that, under radiative forcing, the local temperature, \(T(r,\theta)\), relaxes towards the height-independent \(T_{\rm force}(r)\) as
\[4\sigma_{\rm SB}T^{3}\frac{\partial T}{\partial t}\bigg{|}_{\rm rad}=\frac{1} {\tau_{\rm th}}\,\left[\sigma_{\rm SB}T_{\rm force}^{4}(r)-\sigma_{\rm SB}T^{ 4}\right]\,, \tag{4}\]
where \(\tau_{\rm th}\) is the thermal relaxation time (see below). We call this the 'vertical isothermal' assumption. The actual temperature may depend on height slightly, due to the presence of compressional heating in the energy equation (eq. 7). In practice, we further set the cooling term (the last term on the right hand side) to be \(\sigma_{\rm SB}T_{\rm mid}^{4}\), where \(T_{\rm mid}\) is the midplane temperature. This makes little difference to the results.
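The two thermal ingredients, eqs. (3) and (4), amount to a column integral followed by a relaxation step. A minimal sketch (hypothetical function names; a simple explicit Euler update, valid only for time steps much shorter than \(\tau_{\rm th}\)):

```python
import numpy as np

sigma_SB = 5.670e-5  # erg cm^-2 s^-1 K^-4

def forcing_temperature(heating, z):
    """Eq. (3): height-integrated heating at one radius -> T_force.
    heating = -div(F_irr) per unit volume; half is radiated downward."""
    flux = 0.5 * np.sum(heating * np.gradient(z))
    return (flux / sigma_SB)**0.25

def relax_temperature(T, T_force, tau_th, dt):
    """Eq. (4), rearranged: dT/dt = (T_force^4 - T^4) / (4 T^3 tau_th).
    The same height-independent T_force drives every vertical layer."""
    return T + dt * (T_force**4 - T**4) / (4.0 * T**3 * tau_th)
```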
### Numerical Setup
We use Athena++ (Stone et al., 2008, 2019), a grid-based, high-order Godunov code, to conduct 2D (axis-symmetric) hydrodynamic simulations. Athena++ integrates the following equations of gas dynamics,
\[\frac{\partial\rho}{\partial t}+\nabla\cdot\left[\rho{\bf v}\right]=0\,, \tag{5}\]
\[\frac{\partial(\rho{\bf v})}{\partial t}+\nabla\cdot\left[\rho{\bf v}{\bf v}+ P\right]=-\rho\nabla\Phi\,, \tag{6}\]
\[\frac{\partial E}{\partial t}+\nabla\cdot(E+P){\bf v}=-\rho({\bf v}\cdot \nabla\Phi)+\frac{\rho}{\Gamma-1}\frac{k_{b}}{\mu m_{H}}\frac{\partial T}{ \partial t}\bigg{|}_{\rm rad}. \tag{7}\]
Here \(P\) is gas pressure, \(v\) the 3-D velocity, and \(\Gamma\) the adiabatic index (we take \(\Gamma=7/5\)). The gravitational potential of the central star with mass \(M_{*}\) is given by \(\Phi=-GM_{*}/r\). The energy density, \(E\), is the sum of internal (\(P/(\Gamma-1)\)) and kinetic (\(\rho v^{2}/2\)) energies. The last term on the right hand side of eq. (7) accounts for radiative heating and cooling, and is evaluated using eq. (4). The integration timestep is determined by a minimum Courant number of 0.3.
#### 2.2.1 Disk Model
We choose the following set of parameters for our fiducial disk.
The central star is Sun-like, with solar mass, solar radius and solar temperature. The initial surface density of the gas disk runs as
\[\Sigma(r)=\Sigma_{0}\Big{(}\frac{r}{r_{0}}\Big{)}^{-1}\,, \tag{8}\]
with a two-sided surface density \(\Sigma_{0}=1700\,\rm g/cm^{2}\) at \(r_{0}=1\rm AU\). The initial radial temperature profile is taken from CG97:
\[T(r)=T_{\rm CG97}=T_{0}\Big{(}\frac{r}{r_{0}}\Big{)}^{-3/7}\,, \tag{9}\]
with \(T_{0}=150\,\rm K\). The isothermal sound speed is
\[c_{s}(r)=\sqrt{\frac{k_{b}}{\mu m_{H}}T(r)}\,, \tag{10}\]
while the disk scale height \(h(r)=c_{s}(r)/\Omega(r)\), with \(\Omega(r)=\sqrt{GM/r^{3}}\) being the Keplerian orbital frequency. We initialize the gas density \(\rho(r,z)\) assuming vertical hydrostatic equilibrium for a locally isothermal disk. The initial meridional velocities (\(v_{r}\), \(v_{\theta}\)) are set to zero, and the azimuthal velocity (\(v_{\phi}(r,z)\)) is determined by force balance between gas pressure, gravity and centrifugal force.
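For concreteness, the initial condition of this subsection can be written as the following sketch (CGS units; the variable names are ours, not Athena++ input syntax):

```python
import numpy as np

G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13      # CGS constants
kB, mH, mu = 1.381e-16, 1.673e-24, 2.3
Sigma0, T0, r0 = 1700.0, 150.0, 1.0 * AU        # fiducial values

def disk_ic(r, z):
    """Initial disk of Sec. 2.2.1: eqs. (8)-(10), with a Gaussian
    vertical profile from hydrostatic equilibrium at constant T(r)."""
    Sigma = Sigma0 * (r / r0)**(-1.0)            # eq. (8)
    T = T0 * (r / r0)**(-3.0 / 7.0)              # eq. (9), CG97
    cs = np.sqrt(kB * T / (mu * mH))             # eq. (10)
    h = cs / np.sqrt(G * Msun / r**3)            # h = c_s / Omega
    rho = Sigma / (np.sqrt(2.0 * np.pi) * h) * np.exp(-0.5 * (z / h)**2)
    return rho, T, cs, h
```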
For computing the optical depth to the star (eq. 1), we adopt a gas opacity of \(\kappa_{V}=1\,\rm cm^{2}/g\). This is what one expects if about \(10^{-4}\) of the gas mass (and therefore \(\sim 10^{-2}\) of the dust mass) is in small, micron-sized grains. We ignore scattering of photons.
We emulate WL21 and adopt a relaxation time
\[\tau_{\rm th}=\frac{3}{32}\frac{c_{v}\Sigma(r)}{\sigma_{\rm SB}T^{3}}\,, \tag{11}\]
where \(\Sigma(r)\) is the disk surface density, and the specific heat \(c_{v}=\frac{5}{2}\frac{k_{b}}{\mu m_{H}}\) with a mean molecular weight of \(\mu=2.3\). The resultant relaxation time is plotted in Fig. 1 and it is roughly flat with radius. This timescale controls the speed towards equilibrium. Our choice (eq. 11) is shorter than that in WL21 and is adopted for computational expediency. We confirm below that its value does not impact the final steady state.
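Evaluated for the fiducial parameters, eq. (11) gives a relaxation time of order a few years at 1 AU, consistent with the roughly flat curve in Fig. 1. A one-function sketch:

```python
kB, mH, mu, sigma_SB = 1.381e-16, 1.673e-24, 2.3, 5.670e-5  # CGS

def tau_th(Sigma, T):
    """Eq. (11): thermal relaxation time that paces eq. (4)."""
    c_v = 2.5 * kB / (mu * mH)                  # specific heat, mu = 2.3
    return (3.0 / 32.0) * c_v * Sigma / (sigma_SB * T**3)

print(tau_th(1700.0, 150.0) / 3.15e7)           # ~2.4 yr at 1 AU
```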
#### 2.2.2 Domain and Boundary Conditions
Our 2-D hydro simulations are conducted in spherical polar coordinates \((r,\theta)\). In the meantime, the relevant thermal physics is best described in a cylindrical grid. As our disks are thin (\(z=r\sin\theta\ll r\)), we have simply equated \(r\) with the cylindrical radius, e.g., the same heating term (eq. 4) is applied to cells of the same \(r\).
Our integration domain is \(r\in[1,20]\text{AU},\ \theta\in[\pi/2,\pi/2-0.3]\) (above the disk mid-plane, where \(\pi/2\) is midplane). We adopt grid numbers \(N_{R}\times N_{\theta}=256\times 128\).
The radial grids are logarithmic, with the spacing expanding outward as \((r_{i+1}-r_{i})/(r_{i}-r_{i-1})=1.01\). The boundary conditions for the inner and outer radial boundaries are that all variables (gas bulk properties and velocities) are held to their initial values.
For our fiducial run, we follow MFK22 in assuming the presence of an un-modelled disk inward of our radial domain. This modifies the radial optical depth as
\[\tau_{r}(r,\theta)\equiv\tau_{0}(\theta)+\int_{r_{\rm in}}^{r}\!\rho(r^{\prime },\theta)\kappa_{V}dr^{\prime}\,, \tag{12}\]
where we take \(\tau_{0}(\theta)=\kappa_{V}\times\rho(r_{0},\theta)\times(r_{0}-10R_{*})\) (see MFK22). Such a procedure, together with our radial boundary condition that \(\rho(r_{0},\theta)\) is constant, erects a static opaque 'screen' at the inner boundary. For our fiducial parameters, this screen blocks starlight up to a height \(z/r\approx 0.11\). It prevents overheating of the simulated disk due to direct exposure. In real disks, the inner material will also adjust to the stellar heating. We investigate one such case in SS3.3.
We now turn to the \(\theta\) direction. Our grids only cover a part of the meridional plane above the equator. We employ a reflecting boundary condition for fluid velocities at \(\theta=\pi/2\). This ensures symmetry about the midplane. Our upper boundary at \(\theta=\pi/2-0.3\) is chosen such that we cover at least 6 pressure scale heights across all radii (the scale height is largest at the outer boundary, \(h/r\sim 0.05\)). This guarantees that the optical surface (\(H\)) always lies within our computational domain.
However, such a cautious choice for the upper boundary introduces a unique computational challenge. As gas density drops super-exponentially with height, we inevitably encounter the gas density floor (set at \(10^{-18}\,\mathrm{g/\,cm^{3}}\)), the minimum gas density introduced in the code to deal with vacuum. Above this height, gas is assigned a constant density and so cannot stay in hydrostatic equilibrium. Waves are continuously excited. Moreover, if densities in the upper ghost-cells are held to their initial values, steep pressure gradients may develop when the gas below evolves. This drives shocks. To minimize these two effects, we choose to set the gas density in the upper boundary to be a linear extrapolation of \(\log\rho\) below it. We hold velocities to their initial values. Some waves still persist and we examine their impacts on the simulation in Appendix B.
#### 2.2.3 Horizontal Radiative Transfer
WL21 pointed out that there is also heat transfer in the radial (horizontal) direction. There are two separate physical effects.
The first effect is due to finite disk thickness. Consider midplane gas at radius \(r\). It experiences heating from grains at altitude \(H\) directly above it (eq. 3), but it is also sensitive to those within a radial distance \(\delta r\leq H\). In other words, the isotropic re-radiation from the small grains at \(H\) is felt over a finite radial distance \(\delta r\leq H\). Accounting for this effect suppresses thermal perturbations with very short wavelengths. This is a more significant issue in the outer disk where \(H/r\) is larger.
To accommodate this effect, we introduce a horizontal smoothing to the heating function,
\[\sigma_{\rm SB}T_{\rm force}^{4}(r_{j})\Big{|}_{\rm smooth}=\frac{\sum_{i=1} ^{N_{r}}K(r_{j},r_{i})\sigma_{\rm SB}T_{\rm force}^{4}(r_{i})}{\sum_{i=1}^{N_{ r}}K(r_{j},r_{i})}\,, \tag{13}\]
with a Gaussian kernel
\[K(r_{j},r_{i})=\exp\left\{-\frac{1}{2}\left[\frac{r_{j}-r_{i}}{\beta H(r_{j}) }\right]^{2}\right\}\,. \tag{14}\]
Here, \(\beta\) is a free parameter and it is expected to be of order unity based on the above physical discussion. We take \(\beta=0.4\) for our fiducial case (so that the FWHM is \(\sim H(r_{j})\)) and experiment with different values in SS3.3.
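A direct transcription of eqs. (13)-(14) is straightforward; the sketch below (our own notation) normalizes the Gaussian kernel row by row, so the total heating is conserved only approximately:

```python
import numpy as np

def smooth_heating(r, T_force4, H, beta=0.4):
    """Eqs. (13)-(14): smooth sigma_SB*T_force^4 over a radial scale
    beta*H(r), emulating the finite width over which re-radiation
    from the optical surface is felt at the midplane."""
    out = np.empty_like(T_force4)
    for j in range(r.size):
        K = np.exp(-0.5 * ((r[j] - r) / (beta * H[j]))**2)  # eq. (14)
        out[j] = np.sum(K * T_force4) / np.sum(K)           # eq. (13)
    return out
```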
Another related effect takes place when the perturbation amplitudes are large. Consider an annulus that is not exposed to direct starlight (i.e., in shadow). It cools
Figure 1: Two timescales, the thermal time (eq. 11) and the dynamical time, for our fiducial disk at its initial state. The thermal time is relatively constant across all radii. This plot indicates that, if a steady state exists, we should be able to reach it within a few hundred years.
and contracts vertically. But it will not reach zero temperature. Instead, because its neighbouring rings (which can be at a distance \(\delta r\gg H\)) are hot and puffed up, IR light from these rings can reach the shadowed zone and keep it illuminated (e.g., see Fig. 1 of Okuzumi et al., 2022). This effect is called "back-warming" (e.g., Dullemond and Monnier, 2010). To emulate this qualitatively, we impose a minimum temperature in the disk that is 45% of \(T_{\rm CG97}\) (eq. 9). This value is motivated by results of our RADMC simulations (SS3.1).
Our treatments of these two horizontal transfer effects are crude, and the reader is referred to Okuzumi et al. (2022) for more sophisticated approaches. Fortunately, as we show in SS3.3, the irradiation instability is not suppressed by horizontal transfer, and our steady states are not qualitatively affected by the details of these treatments.
### Evolution Towards a Steady State
The time evolution of our fiducial disk is shown in Fig. 2. Starting from a smooth power-law temperature profile, the disk quickly develops temperature kinks that grow in amplitude as they propagate inward, on a timescale of the local thermal time. This behaviour is as predicted by the linear theory of WL21.
In a bit more detail, we see that the region just beyond 1AU cools with time from an early start. This is because it finds itself partially blocked from starlight by the (numerically erected) inner screen that extends to a height of \(z/r\approx 0.11\). The IR light that the midplane receives now is not sufficient to keep it poking above the screen. It has no choice but to cool and contract vertically.2 Interestingly, its loss is someone else's gain. The region near 3AU now has a relatively unhindered view to the central star, compared to the case in a power-law disk. This region then heats up and extends vertically. A hot ring is formed. Such a ring is analogous to the so-called 'puffed-up inner rims' found in the inner edges of protoplanetary disks (Natta et al., 2001; Dullemond et al., 2001), except here it is in the middle of a disk.
Footnote 2: In this shadowed region, \(H/r\) remains constant (\(\approx 0.11\)) as is set by the inner screen.
According to the analysis in WL21, such a hot ring should continue to propagate inward. To our surprise, it stalls after travelling only a small distance. This is a key result from our study. We will discuss more below (SS4).
With this ring now in place and taking over the role of the numerical inner screen, the story repeats itself. One after another, hot rings form successively outward. Such an inside-out pattern formation is not surprising, since the inner part of the disk controls the irradiation condition for the outer part.3 After \(\sim 100\) yrs, a total of three hot rings are established, fringed by cold belts that lie in their shadows. These features are broad and widely spaced, with widths and spacings all of order the local radius.
Footnote 3: The thermal relaxation time in our fiducial case increases outward. However, we confirm later that even for a case where it decreases outward, the behaviour is similar.
Further integration to beyond 500 yrs returns no appreciable changes. In such a steady state, the total amount of stellar light intercepted by the disk is comparable to that of a conventional power-law disk, but the distribution of this light is highly inequitable. The hot rings, with their steep star-facing edges, capture almost all of the starlight (top panels, Fig. 3). The final disk shape, in terms of the optical surface, resembles a staircase.
Figure 2: How our fiducial disk reaches its staircase steady state. The top panel shows the height of the optical surface as a function of radius (\(H/r\); snapshots taken every 16 yrs, with initial being the lightest in color; values are slightly smoothed from machine output for aesthetics). At the final equilibrium, the optical surface resembles a series of staircase steps. The bottom panel shows the midplane temperature as a function of time. Starting from an initial power-law state, thermal perturbations develop, move inward and stall, segmenting the disk into hot and cold rings. The inner disk reaches equilibrium earlier. There is a shielding screen that extends up to \(z/r\approx 0.11\) at the inner boundary.
## 3 The Steady State
This new steady state is our most important result. We now analyze this state in detail, in order to establish its credibility. This is important, especially because, to arrive at it, we have adopted a questionable assumption ("vertical isothermal"). In particular, we ask whether the staircase disk is truly in thermal and dynamical equilibria. After this, we return to discuss why such a steady state may be immune to further irradiation instability.
### Inspection using RADMC
We study the question of thermal equilibrium using the code RADMC-3d (Dullemond et al., 2012). This is a Monte-Carlo photon code that calculates the equilibrium temperature field, for a given density field. We will compare RADMC results against the temperature field we reach via Athena++. A good agreement indicates that our steady state is in thermal equilibrium.
To conduct RADMC experiments, we have to adopt an opacity law. We choose the following grey opacity,
\[\kappa_{\nu}=1\,\mathrm{cm}^{2}/\mathrm{g}-\mathrm{gas}\,. \tag{15}\]
Such a form crudely represents that arising from a mixture of micron-sized and mm-sized grains (see, e.g. Woitke et al., 2016). We choose this opacity law for two reasons. First, it is the same as our choice of visual opacity (\(\kappa_{V}\)) in the simulations. Second, a grey opacity disk should be vertically isothermal at equilibrium (see Appendix A), therefore compatible with our key assumption.
As in our hydro simulations, we adopt solar parameters for the central star and suppress dust scattering. The gas density field is taken from the last snapshot (\(t=477\) yrs) in our fiducial run. We extend this density field both inward and outward slightly. The inner extension produces the inner screen described in SS2.2.2; and the outer extension helps to avoid excessive cooling at the outer edge due to a truncated disk. We use \(5\times 10^{8}\) photon packets.
Figure 3: Similar to Fig. 2 but with more details. Time runs from left to right and each panel shows the 2-D structure of a physical quantity: the top panels the divergence of the stellar flux (in logarithmic scale), or where irradiation is absorbed by disk; the middle ones the gas temperature, and the lower panels the gas density (in logarithmic scale). The white curve in each panel indicates the optical surface (\(H\)). The disk, starting from a power-law temperature profile, is transformed into a ‘staircase’ after a few thermal times, with most of the stellar heating concentrated near the stair-risers, while regions in-between are cast in shadow (”stair-treads”). The evolution is inside-out. By design, most of the disk is vertically isothermal, except for the low density gas at high altitudes (mostly above the optical surface) that is also affected by compressional heating of boundary-driven waves. This is not consequential, because temperature in this rarefied gas does not control the amount of light the disk intercepts. An animated version of this simulation can be found here.
The RADMC results are presented in Fig. 4, alongside our Athena++ results. The midplane temperatures of the two simulations agree well, and the vertical temperature distributions are also largely similar, especially for regions below the optical surface. These agreements validate a number of procedures in the hydro simulations, including the deposition of stellar flux, the vertical sharing of this flux, the horizontal smoothing, and the minimum temperature floor that accounts for 'backwarming'.
There exist, however, two discrepancies.
The first one concerns temperatures at high altitudes. They are very discrepant between the two codes. This results from two different effects. First, in hydro simulations, the high altitude region is continuously plagued by waves that emanate from the upper boundary (SS2.2.2). But since this is well above the optical surface, we believe it is irrelevant to the dynamics. Second, in RADMC, gas at or above the optical surface can see the star directly and is thus heated to a higher temperature than that in the midplane. This is not captured by our 'vertical isothermal' assumption. Again, we believe this does not strongly impact the dynamics.
The second discrepancy lies in the midplane temperature (the left panel of Fig. 4) and contains two separate sets of features. The first set is the region inside 2AU, where the RADMC result is hotter at 1AU and colder at 1-2AU, compared to the Athena++ ones. The former arises because the 1AU zone in RADMC is in radiation contact with hotter material inward of it that is not properly modeled in Athena++. The latter is because our floor temperature treatment (45% of \(T_{\rm CG97}\)) over-estimates the effect of back-warming in the 1-2AU neighbourhood. Neither of these is important for the overall dynamics, because this region is in shadow and does not affect the stellar irradiation the outer disk receives.
The second set of features concerns the annuli immediately inward of each of the three hot rings. While our adoption of a floor temperature seems to work largely, it is not perfect. The biggest offense shows up in these annuli, where RADMC predicts a higher temperature than Athena++. This discrepancy is worrisome because the physical state of these regions may impact the propagation or stalling of thermal waves. This should be investigated in the future.
Overall, we conclude that our steady state is in good thermal equilibrium.
### Dynamical Equilibrium and Meridional Circulation
To quantify dynamical equilibrium, we define a kinetic energy density for the meridional motion,
\[\epsilon_{k}=\frac{1}{2}\rho(v_{r}^{2}+v_{\theta}^{2})\,, \tag{16}\]
where \(v_{r}\) and \(v_{\theta}\) are the radial and vertical velocities, respectively. Figure 5 shows \(\epsilon_{k}\), in ratio to the local midplane pressure. This ratio informs the degree of departure from hydrostatic equilibrium. For comparison, we also integrate a purely hydro disk, one without radiative forcing (the last term in eq. 7).
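As a diagnostic, this ratio is trivial to evaluate on a snapshot; a minimal sketch (array shapes and names are illustrative):

```python
import numpy as np

def meridional_ke_ratio(rho, v_r, v_theta, P_mid):
    """Eq. (16): meridional kinetic energy density, normalized by the
    local midplane pressure as in Fig. 5. Inputs are 2-D (r, theta)
    arrays except P_mid, which is broadcast along theta."""
    eps_k = 0.5 * rho * (v_r**2 + v_theta**2)
    return eps_k / P_mid
```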
Comparing the pure hydro disk and our fiducial disk (Fig. 5), we observe a common trend and some differences. We see that as the two disks adjust hydrodynamically, their kinetic energy densities decay over time by a few orders of magnitude. This process is faster in
Figure 4: Examining the steady state (labelled as ‘Athena’) using RADMC3d. The left panel compares the midplane temperature outputs from Athena and RADMC3d, with the dotted light line (‘CG97’) indicating the initial temperature profile (eq. 9). The right two panels show the 2-D temperature fields from Athena (middle) and from RADMC (right). For regions below the optical surface, our Athena steady state shares similar temperatures with that obtained from RADMC3d, confirming that our steady state is in good thermal equilibrium. The various labelled curves represent: \(H\), the optical surface; \(\tau_{z}=1\), the disk vertical photosphere; \(h\), vertical scale height.
the inner region where the dynamical time is shorter. By the end of our simulations (477 yrs), both disks have largely reached dynamical equilibrium, with the ratio of \(\epsilon_{k}/P_{\rm mid}\) falling to \(10^{-4}\) or lower, for most of the disk.
Compared to the pure hydro disk, the irradiated disk takes longer to settle down. This is because the latter is susceptible to unstable thermal waves. Moreover, even after the disk temperature has reached a steady state, there still remains a small, yet discernible, meridional circulation near the hot rings.
In Fig. 6, we zoom in to study one such circulation. The streamline plot shows that gas within a couple of scale heights of the midplane is largely static, but there remains fast, fluctuating fluid motion at high altitudes. The latter results, as explained previously, from the imperfect numerical treatment near the upper boundary. Fortunately, as the pure hydro run in Fig. 5 testifies, this issue is not significant for the overall dynamics. We further elaborate on this point in Appendix B.
A more interesting feature in the streamline plot is the circulation at moderate latitudes. We observe a weak circulation, with velocity magnitudes of order \(10^{-3}c_{s}\). At this speed, the fluid flow is not competitive against radiative heating/cooling and can at best advect only a small amount of the stellar heating. But it may be important for stirring up the dust grains.
In summary, in our 2D simulations, after a few hundred inner orbits, the disk has largely settled down to a dynamical equilibrium, modulated by weak circulations near the hot rings.
### The Steady State and Parameter Choices
To gain more insight on the steady state, we study how it depends on our choice of parameters. In each of the following simulations, a staircase-like steady state is reached (Fig. 7). The locations and widths of the hot rings depend on the parameters, but not always in ways that are easily interpretable, possibly because these are results of nonlinear evolution.
The first parameter to consider is the optical opacity, \(\kappa_{V}\). Holding all other parameters fixed, we increase the opacity from 1 (the fiducial case), to 10 and to 100 cm\({}^{2}\)/g, corresponding to increasing dustiness. It appears that more opaque disks tend to harbour wider hot rings. This may be due to the higher optical surface (and therefore larger horizontal smoothing, see eq. 14) in more opaque disks.
We then explore varying the initial condition, in particular, the power-law index in eq. (9) for the initial temperature profile. This appears to alter the final state. This suggests that there is a continuum of final states the disk can settle down to - a hot ring can form at
Figure 5: The path towards dynamical equilibrium. Here, we display the ratio of the kinetic energy density (eq. 16) and the local midplane pressure, in the 2-D plane, and for both a pure hydro disk (no stellar heating, left) and our fiducial disk (right). Over time (from top to bottom), both disks settle down to a dynamical equilibrium, and the kinetic energy density largely vanishes. The irradiated disk maintains a small meridional circulation near the hot rings. See Fig. 6 for more details.
Figure 6: Details around a hot ring for our fiducial disk at steady state. The top panel displays the midplane temperature, and the lower one the meridional streamlines (scaled by local sound speed), with a black curve indicating the optical surface (\(H\)). The velocities are in general small (\(\leq 10^{-3}c_{s}\)), except near the upper boundary. Waves are continuously excited there by the imperfect boundary condition.
any location, as long as it is able to intercept enough sunshine to maintain its vertical height. The initial condition affects the evolutionary path and therefore the nonlinear outcome.
We also study the impact of horizontal smoothing, by varying the value of \(\beta\) in eq. (14). While cases with larger smoothing lengths tend to harbor broader (and fewer) hot rings, the case with \(\beta=0\) (no smoothing) reaches an unexpected final state. Almost the entire stellar flux is now intercepted by one very hot ring close to the inner boundary. This likely indicates that, in the absence of horizontal smoothing, our usual steady state (multiple hot rings at moderate heights) is not stable. See SS4 for more discussion.
We also test how the steady state depends on our assumed relaxation time (eq. 11). We lengthen the relaxation time by a factor of 10 overall and integrate for about 6 times longer. There is no appreciable difference in the steady state. This suggests that the thermal time only sets the overall timescale for dynamics, but not the final outcome. We also experiment with a thermal time that increases inward (by multiplying the fiducial one by a factor of \((r/20\text{au})^{-1/2}\)). We observe no difference in the final steady state.
Lastly, we experiment with a live inner rim (Fig. 8). Instead of erecting a static screen inward of 1AU, we simulate the response of a full disk that includes an inner cut-off. The disk surface-density profile is
\[\Sigma(r)=\Sigma_{0}\Big{(}\frac{r}{r_{0}}\Big{)}^{-1}\,\times\Big{[}1-e^{-\big{(}\frac{r}{r_{\rm rim}}\big{)}^{p}}\Big{]}\,, \tag{17}\]
Figure 8: Similar to Fig. 3 but for a disk that includes a ”live” inner rim (eq. 17). The inner rim is heated and puffed up under direct exposure of starlight. This casts a shadow to \(\sim 7\) AU. Beyond this, the disk again develops hot rings and dark gaps.
Figure 7: How the final steady state depends on various model parameters. The top panels show the heights of the optical surfaces, and the lower ones the mid-plane temperatures, with the initial states as dotted lines. By 477 yrs (duration of all runs, except for one case in the right panel), a steady state is reached in every simulation. Compared to the fiducial run (shown in all panels as a black curve), changes to the opacity \(\kappa_{V}\) (leftmost column), the initial temperature profile (second to the left column), the smoothing length \(\beta\) (third column), and the thermal relaxation time (rightmost column), modify the widths and the spacing of the hot rings at steady state, but do not affect its general character.
where we adopt \(r_{\rm rim}=3\)AU and \(p=8\) to emulate simulations in PMA22.4 As the inner rim is now exposed to direct starlight, it is heated to a temperature much above the CG97 values. This casts a long shadow out to \(\sim 7\)AU. But beyond the shadow, the disk is again able to form structures. The overall dynamics is similar to the fiducial case.
Footnote 4: Special care must be taken near the inner edge where the thermal time is short and the integration time-step needs to be small. We artificially lengthen the thermal time there by a factor of \(5/(\kappa_{V}\Sigma)\) where-ever \(\kappa_{V}\,\Sigma\leq 5\).
In summary, the staircase steady state remains robust to changes in model parameters.
## 4 Why do the thermal waves stall?
One hallmark feature of the irradiation instability is that the perturbation propagates towards the star. This is found in the linear analysis of WL21, and confirmed by numerical integrations by Watanabe and Lin (2008); Ueda et al. (2021); Okuzumi et al. (2022). It is also seen in the early stages of our simulations (see Fig. 2). However, after a while, the waves in our simulations stall to form a staircase steady state. This is surprising.
This stalling could be an artefact of our numerical implementations. Alternatively, it could be genuine, either as a result of the hydrodynamics (above cited studies all assume hydrostatic equilibrium), or that the staircase steady state is genuinely immune to the irradiation instability. Here, we discuss each in turn.
We adopt a number of short-cuts in the simulations to deal with the complicated thermal physics. They are not the cause of the stalling: the vertical isothermal assumption is the same as that adopted by the study of WL21, which finds travelling waves; the amount of horizontal smoothing impacts the positions of the hot rings but not the overall results (Fig. 7); adopting either a static inner screen or a live inner rim leads to similar results; and the adoption of a temperature floor is motivated by physics (the 'back-warming' effect), is confirmed by RADMC, and waves stall even when we remove this floor. Lastly, our simulations do produce travelling thermal waves in the initial stage, and the waves stall only after a period of growth. This suggests some nonlinear effects are at play.
We consider one nonlinear effect: fluid advection. According to the perturbation analysis of WL21 (see their eq. B25), the thermal perturbation should propagate inward with a phase speed of \(v_{\rm phase}\sim-r/\tau_{\rm th}\), which is of order \(0.1v_{\rm kep}-10v_{\rm kep}\) for our simulations. A fluid flow with a similar velocity can in principle suppress the inward propagation by advecting the thermal energy away from the hot front. However, the typical meridional speed we see at our steady state (Fig. 6) is only of order \(10^{-3}c_{s}\) and falls short of the task.
Another nonlinear effect is changes to the gas pressure at equilibrium. In the initial power-law state, gas pressure decreases outward monotonically. But as the thermal waves grow in amplitude, some gas is expelled from the hot ring and piles up ahead of and behind it (Fig. 13). This, coupled with the evolving temperature, flattens the radial pressure profile ahead of a hot ring. In particular, we observe that wave stalling appears to occur when the pressure in this region develops an inflection point (but not a pressure maximum). However, we do not understand the causal connection between these two events and leave this for a future study.
Lastly, if the staircase state (the nonlinear state) is stable to further irradiation instability, it may explain why the wave stalls. We borrow results directly from WL21. They considered the irradiation instability of a power-law disk with a flaring index
\[\gamma\equiv\frac{d\ln(h/r)}{d\ln r}\,, \tag{18}\]
where \(h\) is the local scale height and \(h/r=c_{s}/v_{\rm kep}\). This index equals \(2/7\) for the conventional CG97 disk (eq. 9), but we measure a much steeper flaring, \(\gamma\sim 2\), at the front edges of our hot rings. The irradiation instability prefers waves with short wavelengths. This is because, when a disk is locally heated up, a shorter wavelength mode can produce a steeper optical surface (larger \(dH/dr\)), allowing the disk to intercept more starlight and overcome the higher black-body loss. Eq. (31) of WL21 states that, for a thermal perturbation of the form \(X\propto e^{st+ik\ln r}\), the instability sets in when the wave-number
\[k\geq k_{\rm min}=\sqrt{\frac{7}{\chi^{2}-8}}\,\chi^{2}\gamma\sim 30\left(\frac{ \gamma}{2}\right)\,, \tag{19}\]
where \(\chi=H/h\) and we have evaluated using \(\chi=4\). So for our case at hand (\(\gamma\approx 2\)), unstable waves have very short radial wavelengths, \(\sim r/k\sim 0.8h\times\left(\frac{h/r}{0.04}\right)\). This is small compared to our typical smoothing length of \(H\sim 4h\). So at face value, the staircase state seems to be less, or even not, susceptible to the irradiation instability.
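Plugging in numbers makes the comparison explicit; a minimal sketch evaluating eq. (19):

```python
import numpy as np

def k_min(chi, gamma):
    """Eq. (19): minimum unstable wavenumber of the irradiation
    instability (WL21), with chi = H/h and flaring index gamma."""
    return np.sqrt(7.0 / (chi**2 - 8.0)) * chi**2 * gamma

print(k_min(4.0, 2.0))        # ~30 at the steep front edge of a hot ring
print(k_min(4.0, 2.0 / 7.0))  # ~4.3 for the conventional CG97 flaring
```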
Some support for this conjecture is provided by a simulation with zero smoothing (Fig. 7). In this case, it appears that the configuration of multiple hot rings is not stable and the disk continues to evolve until it builds up one very hot ring close to the inner boundary (at which point the boundary condition may come into play).
To summarize, we provide arguments that the wave stalling is likely physical and is a result of nonlinear
evolution. However, we are not certain of the actual mechanism for stalling. More detailed analysis is warranted.
## 5 Comments on previous works
Our result, that smooth disks spontaneously develop into a staircase steady state, runs in direct contradiction to two recent claims by MFK22 and PMA22. Both studies have simulated irradiated disks but found that their simulations produce smooth disks at steady state.
Here, we will examine their respective steady states in detail. In doing so, we bypass the issue of time-dependent radiative transfer, which is treated differently in our work and theirs. Moreover, for systems in steady state, the thermal equilibrium should be well characterized by Monte-Carlo codes like RADMC3d. So we have the 'ground truth' against which to benchmark the results of radiation hydro codes.
### Comments on Melon Fuksman & Klahr (2022)
In MFK22, radiative transfer is treated by following only the first two moments of the specific intensity (mean intensity and flux), with the higher moments truncated by way of the so-called Eddington tensor under the M1 closure (Levermore, 1984). Such a technique allows them to capture radiative transfer in the optically thick limit and, to a lesser accuracy, in the optically thin domain. They report that an initial staircase disk relaxes to a smooth, power-law steady state.
To examine the steady state that MFK22 obtained, we would need their 2D density field. Since their data are not publicly available, we digitized plots from their publication.
This is how we proceed. The stellar parameters and the opacity law5 are as provided by MFK22, as is the surface density profile (which includes an inert disk inward of 0.4au). We obtain their final values for the optical surface and the midplane temperatures by digitizing their Figs. (12) and (15), respectively. We then construct a 2D density field assuming that the disk is vertically isothermal and is in hydrostatic equilibrium. We pay special attention to ensure that our disk shares the same optical surface as theirs,6 sometimes by cropping all gas that lies above \(H\).7 With such a procedure, we can be assured that our disk shares the same amount of stellar heating as theirs. Lastly, we use RADMC3d to obtain the equilibrium temperature field for such a density field. A large number of photon packets (\(5\times 10^{8}\)) are used such that the temperature results converge even in very optically thick regions.
Footnote 5: MFK22 assumed that the opacity is contributed entirely by small grains. So it falls off steeply with photon wavelengths as \(\kappa_{\nu}\propto\lambda^{-2}\). This will become relevant below.
Footnote 6: We confirm that the result is only sensitive to the value of \(H\) and the surface density (via the vertical optical depth), but not so much on the vertical density distribution.
Footnote 7: This involves only a small amount of gas. So the surface density is largely unchanged.
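The reconstruction step is simple enough to sketch (our own notation; the digitized profiles are assumed given as 1-D arrays):

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33
kB, mH, mu = 1.381e-16, 1.673e-24, 2.3

def rho_from_profiles(r, z, Sigma, T_mid, H):
    """Rebuild a 2-D density field from digitized Sigma(r), T_mid(r)
    and H(r): vertically isothermal, hydrostatic (Gaussian) layers,
    with gas above the digitized optical surface cropped so the
    reconstructed disk intercepts the same starlight."""
    cs = np.sqrt(kB * T_mid / (mu * mH))
    h = cs / np.sqrt(G * Msun / r**3)
    rho = (Sigma / (np.sqrt(2.0 * np.pi) * h))[None, :] \
        * np.exp(-0.5 * (z[:, None] / h[None, :])**2)
    rho[z[:, None] > H[None, :]] = 0.0   # crop above the optical surface
    return rho                            # shape (len(z), len(r))
```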
Fig. 9 shows the comparison for the three cases investigated by MFK22. In every case, the RADMC3d midplane temperatures fall significantly below those from MFK22, by up to 40%. The discrepancy is worse in the more optically thick inner regions and in disks that are dustier (and therefore more opaque). Satisfactory agreement exists only in one case: the outer part of the disk that has the lowest opacity. In this part, the disk is optically thin to its own radiation. Interestingly, this is also the limit for which the two-moments method has been well calibrated (Melon Fuksman & Mignone, 2019).
The overall discrepancy in midplane temperature is significant. It means that, for whatever reason, the MFK22 disks receive up to 4 times too much heating flux, compared to what RADMC determines.
To convince ourselves of the RADMC3d results, we analyze in detail (Appendix A) the disk thermal structure for different opacity laws and optical depths. In a nutshell, for a disk that is optically thick and has an
Figure 9: Comparison of the steady state reached in MFK22 (thick lines, digitized from their Fig. 12 and 15) against those obtained from RADMC3d (thin lines). The different colors stand for the different dust to gas ratios, as considered in MFK22. The RADMC3d input disks are adjusted to share similar density structures and identical optical surfaces (top panel) as those from MFK22. However, the bottom panel shows that the midplane temperatures in the MFK22 disks are up to 40% hotter than what RADMC3d predicts. This translates to a heating flux that is up to a factor of 4 too large. Discrepancy between the two codes is worse in the inner regions where the disks are optically thick to their own thermal radiation.
opacity law as steep as that adopted by MFK22, the disk's IR photosphere lies well above the mm photosphere and deflects half of the illuminating flux back to space. As a result, the midplane becomes cooler than that with a grey-opacity. RADMC3d correctly captures this process and we have confidence in its results.
We do not understand why the two-moments code transports too much flux to the midplane. But we can speculate on why it may matter for our problem at hand. In a disk with a cooler midplane, for instance, the disk immediately beyond the inner screen should become thinner by up to 20%. It will then be blocked from the star and suffer a cooling collapse. At the same time, the disk further out can now receive more sunshine. These changes may then lead to the formation of a staircase. In other words, the hotter disks in MFK22, which are in contradiction with RADMC, may have precluded the irradiation instability.
In summary, while MFK22 claimed that the irradiation instability is suppressed by the combined effects of vertical thermal diffusion and fluid advection, our analysis casts doubt on their final equilibrium state, and by association, the validity of their claim. Calibrating the equilibrium state, especially for optically thick disks, may be a necessary (though not sufficient) safeguard for any radiation hydro code.
### Comments on Pavlyuchenkov et al. (2022b)
Using a different radiation hydro code (following only the first moment of specific intensity, and truncating higher moments by the diffusion approximation), PMA22 also found that their disks reach a smooth temperature profile.
Their study includes a live inner rim that is exposed to direct starlight. They also adopt a very high opacity, as well as a high stellar luminosity (\(5L_{\odot}\), as opposed to \(1L_{\odot}\) in this study).
To examine how these may impact the outcome, we modify our simulation to emulate their runs: \(T_{*}=6000\,\mathrm{K}\), \(R_{*}=2R_{\odot}\), such that \(L_{*}=5L_{\odot}\); a uniform initial disk temperature of \(10\,\mathrm{K}\); this is also now the floor temperature; a radial domain that runs from 0.5 to 20 au; a gas surface density profile as in eq. (17) but with \(\Sigma_{0}=200\,\mathrm{g}/\,\mathrm{cm}^{2}\); a Planck opacity of \(\kappa_{V}=50\,\mathrm{cm}^{2}/\mathrm{g}\)-gas (see Pavlyuchenkov et al., 2020). We do not include dust scattering, but it is unclear if it is included in their work.
Our results are presented in Fig. 10. Compared to our run shown in Fig. 8, the higher stellar luminosity (and to a lesser degree, the higher opacity) in this case warms up the inner rim to a higher temperature. It expands and can now intercept stellar fluxes up to a height of \(z/r\sim 0.3\). This shadows the entire disk to beyond the integration boundary (20AU). Our plot resembles the results in Fig. 8 of PMA22. Similarly to what we find here, their outer disk appears very cold and lies largely in shadow. This explains why they do not find substructures - much of the disk is not irradiated.
It is possible that, were they to adopt a lower stellar illumination, or a lower dust opacity, or a different shape of the inner rim, the dynamics and the steady state could change.
## 6 Observable Consequences
While our conclusions are far from being definitive, we muse about a few observational consequences of the staircase disks.
We first ask whether one can recognize a staircase disk from the spectral energy distribution (SED) alone. We post-process both the initial (power-law, flared) state and the staircase final state through RADMC3d (see details in SS3.1) to produce their respective SEDs. These are shown in Fig. 12. There is little difference. This is as expected because both disks intercept and reprocess similar amounts of stellar flux.
Figure 10: Similar to Fig. 8 but for a case that is designed to reproduce the simulation of PMA22. The disk has a uniform initial temperature of \(10\,\mathrm{K}\). The higher stellar luminosity here heats the inner rim up to such a high temperature that it casts a long shadow out to beyond 20au. This explains why PMA22 did not find sub-structures.
In resolved images, on the other hand, Fig. 11 shows that substructures like bright rings and dark gaps are clearly visible. These are reminiscent of those discovered abundantly by high resolution observations, both using adaptive optics in the optical/IR (e.g. Garufi et al., 2016; Avenhaus et al., 2018), and using interferometry in sub-mm by ALMA (e.g. ALMA Partnership et al., 2015; Huang et al., 2018). In scattered light,8 the hot rings can shine up to 100 times brighter than the dark zones. The millimeter map shows a more muted contrast between the substructures, though dust drift may amplify it further. Here, we are focused on the region inward of 20AU, where there has been little observational evidence regarding the presence of sub-structures. This may be related to instrument resolution, and it is of interest to note that recent works have discovered substructures also in compact disks (smaller than 50AU in size, Long et al., 2018; Zhang et al., 2023).
Footnote 8: Here, we turn on dust scattering in RADMC3d during post-processing.
Lastly, we are curious to know if the radial pressure gradient in these staircase disks can impact dust drift. Fig. 13 shows that the pressure gradient near the front sides of the hot rings is weakened by almost two orders of magnitude from its value in a smooth disk. This drastically reduces (though does not completely stall) the radial drift of all dust grains. Dust may become concentrated near these regions. This evolution is interesting to explore.
There are also other unexplored consequences. The unusual pressure gradient near the hot rings may provide breeding grounds for the baroclinic instability (see, e.g. Klahr, 2004), or the Rossby wave instability (Lovelace et al., 1999). Vortices and/or spirals may form. While these instabilities do not manifest in our 2D simulations, they should be examined in 3D.
## 7 Summary
A passively irradiated disk abhors a smooth profile. We have known, since the pioneering study of Dullemond & Dominik (2004), that a passive disk can change shape and alter the stellar irradiation it receives. As is well documented for Herbig Ae/Be disks, the inner disk rim can puff up and block the region beyond it. In this work, we show that every part of a passive disk can partake in the choreography.
Figure 11: A visual impression of the staircase steady state from Figure 3, in both \(1.5\mu m\) scattered light image (left, linear scale), and \(850\mu m\) thermal emission (middle, linear scale). The right panel shows the surface brightness profiles in logarithmic scale. Even though the surface density in this disk is smooth, prominent sub-structures are present in both wavelengths. Bright rings in scattered light correspond to where the disk sees the star (the ”stair-risers”), and those in sub-mm correspond to where the disk is hot.
Figure 12: Spectral energy distributions for our staircase disk (from Fig. 3, solid curve) and the same disk but with a power-law temperature profile (eq. 9, dashed curve), both measured at 150pc. These two disks intercept and reprocess similar amounts of stellar heating, so their SEDs are nearly indistinguishable. The dotted curve is the stellar SED.
As a natural consequence of the irradiation instability, an initially featureless disk spontaneously morphs into a 'staircase' form, with hot rings that intercept the lion's share of the stellar irradiation, and dark zones hidden in their shadows. Each of the hot rings is analogous to a puffed-up inner rim. These sub-structures form after a few thermal times (\(\sim 10^{5}\) yrs in the inner region, see Fig. 2 of WL21), with the inner ones congealing into a static form first. Their locations depend on a number of parameters, including the initial conditions.
The existence of such a steady state is surprising, given that previous works which assume hydrostatic equilibrium have found only inward travelling waves. We examine a few possible causes for the stalling. We argue that the stalling is likely genuine and the result of nonlinear evolution. But we have yet to understand its physical cause.
Major caveats exist for our work. Although our staircase steady states appear to be in thermal and dynamical equilibria, we fall short of showing how, in reality, such a state is reached. To do so, one needs to conduct simulations with realistic radiative transfer. In the same vein, the disagreement between our work and those of Melon Fuksman & Klahr (2022); Pavlyuchenkov et al. (2022) is unlikely to be fully resolved until better treatments of radiative transfer are introduced. Another major caveat is that our 2D simulations preclude non-axisymmetric hydro and thermal instabilities. The latter may qualitatively alter the picture.
But if the staircase phenomenon is real, it has many interesting implications. It would offer an explanation for the prevalence of gaps and rings in observed disks. It could alter the physical state of the disk and affect processes such as the condensation of volatile species,9 the radial dust drift, and the dust vertical wafting. It may introduce other hydro instabilities that are absent in smooth disks, and possibly affect disk accretion. The processes of planet formation and migration could all be affected. In summary, a passive disk can be quite an interesting place.
Footnote 9: The locations for icelines would be different from the traditional picture. As the disk temperature is not monotonic with radius, there may even be multiple icelines for the same species.
## Acknowledgements
The authors thank Eugene Chiang, Yan-fei Jiang, Francois Foucart, Zhaohuan Zhu, Xuening Bai, Shoji Mori and Ariel Amaral for helpful conversations. TK and YW acknowledge funding from NSERC; TK acknowledges further funding from the Walter C. Sumner Memorial Fellowships, and the Dunlap Institute. YL acknowledges NASA grant 80NSSC23K1262.
|
2301.02632 | A note on $LP$-Kenmotsu manifolds admitting Ricci-Yamabe solitons | In the current note, we study Lorentzian para-Kenmotsu (in brief,
$LP$-Kenmotsu) manifolds admitting Ricci-Yamabe solitons (RYS) and gradient
Ricci-Yamabe soliton (gradient RYS). At last by constructing a 5-dimensional
non-trivial example we illustrate our result. | Mobin Ahmad, Gazala, Mohd Bila | 2022-11-30T11:22:57Z | http://arxiv.org/abs/2301.02632v1 | # A note on \(Lp\)-Kenmotsu manifolds admitting Ricci-Yamabe solitons
###### Abstract.
In the current note, we study Lorentzian para-Kenmotsu (in brief, \(LP\)-Kenmotsu) manifolds admitting Ricci-Yamabe solitons (RYS) and gradient Ricci-Yamabe solitons (gradient RYS). Finally, by constructing a \(5\)-dimensional non-trivial example, we illustrate our results.
**2010 Mathematics Subject Classification.** 53C20, 53C21, 53C25, 53E20.
**Keywords.** Lorentzian para-Kenmotsu manifolds, Ricci-Yamabe solitons, Einstein manifolds.
## 1. **Introduction**
In 2019, Guler and Crasmareanu [6] proposed a scalar combination of the Ricci and Yamabe flows. This advanced class of geometric flows, called the Ricci-Yamabe (RY) flow of type \((\sigma,\rho)\), is defined by
\[\frac{\partial}{\partial t}g(t)+2\sigma S(g(t))+\rho r(t)g(t)=0,\ \ \ \ g(0)=g_{0}\]
for some scalars \(\sigma\) and \(\rho\).
A solution to the RY flow is called a RYS if it depends only on a one-parameter group of diffeomorphisms and scaling. A Riemannian (or semi-Riemannian) manifold \(M\) is said to admit a RYS if
\[\pounds_{K}g+2\sigma S+(2\Lambda-\rho r)g=0, \tag{1.1}\]
where \(\sigma,\rho,\Lambda\in\mathbb{R}\) (the set of real numbers). If \(K\) is the gradient of a smooth function \(v\) on \(M\), then (1.1) is called the gradient Ricci-Yamabe soliton (gradient RYS) and hence (1.1) turns to
\[\nabla^{2}v+\sigma S+(\Lambda-\frac{\rho r}{2})g=0, \tag{1.2}\]
where \(\nabla^{2}v\) is the Hessian of \(v\). It is to be noted that a RYS of types \((\sigma,0)\) and \((0,\rho)\) are known as \(\sigma-\)Ricci soliton and \(\rho-\)Yamabe soliton, respectively. A RYS is said to be shrinking, steady or expanding if \(\Lambda<0,=0\) or \(>0\), respectively. A RYS is said to be a
\(\bullet\) Ricci soliton [7] if \(\sigma=1,\rho=0\),
\(\bullet\) Yamabe soliton [8] if \(\sigma=0,\rho=1\),
\(\bullet\) Einstein soliton [3] if \(\sigma=1,\rho=-1\).
As a continuation of this line of work, we study RYS in the framework of \(LP\)-Kenmotsu manifolds of dimension \(n\). We recommend the papers [1, 2, 5, 9, 10, 13, 15, 16, 17, 18, 19] and the references therein for more details about related studies.
## 2. **Preliminaries**
An \(n\)-dimensional differentiable manifold \(M\) with structure \((\varphi,\zeta,\nu,g)\) is said to be a Lorentzian almost paracontact metric manifold, if it admits a \((1,1)\)-tensor field \(\varphi\), a contravariant vector field \(\zeta\), a 1-form \(\nu\) and a Lorentzian metric \(g\) satisfying
\[\nu(\zeta)+1=0, \tag{2.1}\]
\[\varphi^{2}E=E+\nu(E)\zeta, \tag{2.2}\]
\[\varphi\zeta=0,\quad\nu(\varphi E)=0, \tag{2.3}\]
\[g(\varphi E,\varphi F)=g(E,F)+\nu(E)\nu(F), \tag{2.4}\]
\[g(E,\zeta)=\nu(E), \tag{2.5}\]
\[\Phi(E,F)=\Phi(F,E)=g(E,\varphi F) \tag{2.6}\]
for any vector fields \(E,F\in\chi(M)\), where \(\chi(M)\) is the Lie algebra of vector fields on \(M\).
If \(\zeta\) is a Killing vector field, the (para)contact structure is called \(K\)-(para)contact. In such a case, we have
\[\nabla_{E}\zeta=\varphi E. \tag{2.7}\]
Recently, the authors Haseeb and Prasad defined and studied the following notion:
**Definition 2.1**.: _A Lorentzian almost paracontact manifold \(M\) is called a Lorentzian para-Kenmotsu manifold if [11]_
\[(\nabla_{E}\varphi)F=-g(\varphi E,F)\zeta-\nu(F)\varphi E \tag{2.8}\]
_for any \(E,F\) on \(M.\)_
In an \(LP\)-Kenmotsu manifold, we have
\[\nabla_{E}\zeta=-E-\nu(E)\zeta, \tag{2.9}\]
\[(\nabla_{E}\nu)F=-g(E,F)-\nu(E)\nu(F), \tag{2.10}\]
where \(\nabla\) denotes the Levi-Civita connection with respect to the Lorentzian metric \(g\). Furthermore, in an \(LP\)-Kenmotsu manifold, the following relations hold [11]:
\[g(R(E,F)G,\zeta)=\nu(R(E,F)G)=g(F,G)\nu(E)-g(E,G)\nu(F), \tag{2.11}\]
\[R(\zeta,E)F=-R(E,\zeta)F=g(E,F)\zeta-\nu(F)E, \tag{2.12}\]
\[R(E,F)\zeta=\nu(F)E-\nu(E)F, \tag{2.13}\]
\[R(\zeta,E)\zeta=E+\nu(E)\zeta, \tag{2.14}\]
\[S(E,\zeta)=(n-1)\nu(E),\ S(\zeta,\zeta)=-(n-1), \tag{2.15}\]
\[Q\zeta=(n-1)\zeta \tag{2.16}\]
for any \(E,F,G\in\chi(M)\), where \(R\), \(S\) and \(Q\) represent the curvature tensor, the Ricci tensor and the Ricci operator, respectively.
**Definition 2.2**.: _[_21_]_ _An \(LP\)-Kenmotsu manifold \(M\) is said to be a \(\nu\)-Einstein manifold if its Ricci tensor \(S(\neq 0)\) is of the form_
\[S(E,F)=ag(E,F)+b\nu(E)\nu(F), \tag{2.17}\]
_where \(a\) and \(b\) are smooth functions on \(M\). In particular, if \(b=0\), then \(M\) is termed as an Einstein manifold._
**Remark 2.3**.: _[_12_]_ _In an \(LP\)-Kenmotsu manifold of \(n\)-dimension, \(S\) is of the form_
\[S(E,F)=(\frac{r}{n-1}-1)g(E,F)+(\frac{r}{n-1}-n)\nu(E)\nu(F), \tag{2.18}\]
_where \(r\) is the scalar curvature of the manifold._
**Lemma 2.4**.: _In an \(n\)-dimensional \(LP\)-Kenmotsu manifold, we have_
\[\zeta(r)=2(r-n(n-1)), \tag{2.19}\]
\[(\nabla_{E}Q)\zeta=QE-(n-1)E, \tag{2.20}\]
\[(\nabla_{\zeta}Q)E=2QE-2(n-1)E \tag{2.21}\]
_for any \(E\) on \(M\)._
Proof.: Equation (2.18) yields
\[QE=(\frac{r}{n-1}-1)E+(\frac{r}{n-1}-n)\nu(E)\zeta. \tag{2.22}\]
Taking the covariant derivative of (2.22) with respect to \(F\) and making use of (2.9) and (2.10), we obtain
\[(\nabla_{F}Q)E=\frac{F(r)}{n-1}(E+\nu(E)\zeta)-(\frac{r}{n-1}-n)(g(E,F)\zeta+ \nu(E)F+2\nu(E)\nu(F)\zeta).\]
By contracting \(F\) in the foregoing equation and using trace \(\{F\to(\nabla_{F}Q)E\}=\frac{1}{2}E(r)\), we find
\[\frac{n-3}{2(n-1)}E(r)=\big{\{}\frac{\zeta(r)}{n-1}-(r-n(n-1))\big{\}}\nu(E),\]
which by replacing \(E\) by \(\zeta\) and using (2.1) gives (2.19). We refer the readers to see [14] for the proof of (2.20) and (2.21).
**Remark 2.5**.: _From equation (2.19), it is noticed that if an \(n\)-dimensional \(LP\)-Kenmotsu manifold possesses constant scalar curvature, then \(r=n(n-1)\) and hence (2.18) reduces to \(S(E,F)=(n-1)g(E,F)\). Thus, the manifold under consideration is an Einstein manifold._
## 3. **Ricci-Yamabe solitons on \(LP\)-Kenmotsu manifolds**
Let the metric of an \(n\)-dimensional \(LP\)-Kenmotsu manifold be a Ricci-Yamabe soliton \((g,K,\Lambda,\sigma,\rho)\); then (1.1) holds. By differentiating (1.1) covariantly with respect to \(G\), we have
\[(\nabla_{G}\pounds_{K}g)(E,F) = -2\sigma(\nabla_{G}S)(E,F)+\rho(Gr)g(E,F). \tag{3.1}\]
Since \(\nabla g=0\), the following formula [20]
\[(\pounds_{K}\nabla_{E}g-\nabla_{E}\pounds_{K}g-\nabla_{[K,E]}g)(F,G)=-g(( \pounds_{K}\nabla)(E,F),G)-g((\pounds_{K}\nabla)(E,G),F)\]
turns to
\[(\nabla_{E}\pounds_{K}g)(F,G)=g((\pounds_{K}\nabla)(E,F),G)+g((\pounds_{K}\nabla)( E,G),F).\]
Since the operator \(\pounds_{K}\nabla\) is symmetric, we have
\[2g((\pounds_{K}\nabla)(E,F),G)=(\nabla_{E}\pounds_{K}g)(F,G)+(\nabla_{F} \pounds_{K}g)(E,G)-(\nabla_{G}\pounds_{K}g)(E,F),\]
which by using (3.1) takes the form
\[2g((\pounds_{K}\nabla)(E,F),G)=-2\sigma[(\nabla_{E}S)(F,G)+(\nabla_{F}S)(G,E)+(\nabla_{G}S)(E,F)]+\rho[(Er)g(F,G)+(Fr)g(G,E)+(Gr)g(E,F)]. \tag{3.2}\]
Putting \(F=\zeta\) in (3.2) and using (2.5), we find
\[2g((\pounds_{K}\nabla)(E,\zeta),G)=-2\sigma[(\nabla_{E}S)(\zeta,G)+(\nabla_{\zeta}S)(G,E)-(\nabla_{G}S)(E,\zeta)]+\rho[(Er)\nu(G)+2(r-n(n-1))g(E,G)-(Gr)\nu(E)]. \tag{3.3}\]
By virtue of (2.20) and (2.21), (3.3) leads to
\[2g((\pounds_{K}\nabla)(E,\zeta),G)=-4\sigma[S(E,G)-(n-1)g(E,G)]+\rho[(Er)\nu(G)+2(r-n(n-1))g(E,G)-(Gr)\nu(E)].\]
By eliminating \(G\) from the foregoing equation, we have
\[2(\pounds_{K}\nabla)(F,\zeta)=\rho g(Dr,F)\zeta-\rho(Dr)\nu(F)-4\sigma QF+[4\sigma(n-1)+2\rho(r-n(n-1))]F. \tag{3.4}\]
If we take \(r\) as constant, then from (2.19) we find \(r=n(n-1)\), and hence (3.4) reduces to
\[(\pounds_{K}\nabla)(F,\zeta)=-2\sigma QF+2\sigma(n-1)F. \tag{3.5}\]
Taking covariant derivative of (3.5) with respect to \(E\), we have
\[(\nabla_{E}\pounds_{K}\nabla)(F,\zeta)=(\pounds_{K}\nabla)(F,E)-2\sigma\nu(E)[QF-(n-1)F]-2\sigma(\nabla_{E}Q)F. \tag{3.6}\]
Again from [20], we have
\[(\pounds_{K}R)(E,F)G=(\nabla_{E}\pounds_{K}\nabla)(F,G)-(\nabla_{F} \pounds_{K}\nabla)(E,G),\]
which by putting \(G=\zeta\) and using (3.6) takes the form
\[(\pounds_{K}R)(E,F)\zeta=2\sigma\nu(F)(QE-(n-1)E)-2\sigma\nu(E)(QF-(n-1)F)-2\sigma((\nabla_{E}Q)F-(\nabla_{F}Q)E). \tag{3.7}\]
Putting \(F=\zeta\) in (3.7) then using (2.1), (2.2), (2.20) and (2.21), we arrive at
\[(\pounds_{K}R)(E,\zeta)\zeta=0. \tag{3.8}\]
The Lie derivative of \(R(E,\zeta)\zeta=-E-\nu(E)\zeta\) along \(K\) leads to
\[(\pounds_{K}R)(E,\zeta)\zeta-g(E,\pounds_{K}\zeta)\zeta+2\nu(\pounds_{K}\zeta)E=-(\pounds_{K}\nu)(E)\zeta. \tag{3.9}\]
From (3.8) and (3.9), we have
\[(\pounds_{K}\nu)(E)\zeta=-2\nu(\pounds_{K}\zeta)E+g(E,\pounds_{K}\zeta)\zeta. \tag{3.10}\]
Taking the Lie derivative of \(g(E,\zeta)=\nu(E)\), we find
\[(\pounds_{K}\nu)(E)=g(E,\pounds_{K}\zeta)+(\pounds_{K}g)(E,\zeta). \tag{3.11}\]
By putting \(F=\zeta\) in (1.1) and using (2.15), we have
\[(\pounds_{K}g)(E,\zeta)=-\{2\sigma(n-1)+2\Lambda-\rho n(n-1)\}\nu(E), \tag{3.12}\]
where \(r=n(n-1)\) has been used.
Taking the Lie derivative of \(g(\zeta,\zeta)=-1\) along \(K\), we get
\[(\pounds_{K}g)(\zeta,\zeta)=-2\nu(\pounds_{K}\zeta). \tag{3.13}\]
From (3.12) and (3.13), we find
\[\nu(\pounds_{K}\zeta)=-\{\sigma(n-1)+\Lambda-\frac{\rho n(n-1)}{2}\}. \tag{3.14}\]
Now, combining the equations (3.10), (3.11), (3.12) and (3.14), we find
\[\Lambda=\frac{\rho n(n-1)}{2}-\sigma(n-1). \tag{3.15}\]
Thus, we have
**Theorem 3.1**.: _Let \((M,g)\) be an \(n\)-dimensional \(LP\)-Kenmotsu manifold admitting Ricci-Yamabe soliton \((g,K,\Lambda,\sigma,\rho)\) with constant scalar curvature tensor, then \(\Lambda=\frac{\rho n(n-1)}{2}-\sigma(n-1).\)_
For \(\sigma=1\) and \(\rho=0\), from (3.15) we have \(\Lambda=-(n-1)\). Thus, we have the following:
**Corollary 3.2**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits a Ricci soliton with constant scalar curvature, then the soliton is shrinking._
For \(\sigma=0\) and \(\rho=1\), from (3.15) we have \(\Lambda=\frac{n(n-1)}{2}\). Thus, we have the following:
**Corollary 3.3**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits a Yamabe soliton with constant scalar curvature, then the soliton is expanding._
For \(\sigma=1\) and \(\rho=-1,\) from (3.15) we have \(\Lambda=-\frac{(n-1)(n+2)}{2}\). Thus, we have the following:
**Corollary 3.4**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits an Einstein soliton with constant scalar curvature, then the soliton is shrinking._
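These sign checks are mechanical. The following sketch (our own illustration using sympy, not part of the paper's arguments) substitutes the three special cases into the formula of Theorem 3.1 and factors the result:

```python
import sympy as sp

# Lambda from Theorem 3.1: Lambda = rho*n*(n-1)/2 - sigma*(n-1)
n, sigma, rho = sp.symbols('n sigma rho')
Lam = rho*n*(n - 1)/2 - sigma*(n - 1)

cases = [("Ricci soliton, (sigma, rho) = (1, 0)", 1, 0),
         ("Yamabe soliton, (sigma, rho) = (0, 1)", 0, 1),
         ("Einstein soliton, (sigma, rho) = (1, -1)", 1, -1)]
for name, s, r in cases:
    print(name, "->", sp.factor(Lam.subs({sigma: s, rho: r})))
# -(n - 1)           : negative for n >= 2, so the soliton is shrinking
# n*(n - 1)/2        : positive for n >= 2, so the soliton is expanding
# -(n - 1)*(n + 2)/2 : negative for n >= 2, so the soliton is shrinking
```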
Now, we consider the metric of an \(n\)-dimensional \(LP\)-Kenmotsu manifold as a Ricci-Yamabe soliton \((g,\zeta,\Lambda,\sigma,\rho)\), then from (1.1) and (2.9) we have
\[S(E,F)=-\frac{1}{\sigma}(\Lambda-1-\frac{\rho r}{2})g(E,F)+\frac{1}{\sigma}\nu (E)\nu(F),\ \ where\ \ \sigma\neq 0. \tag{3.16}\]
By putting \(F=\zeta\) in (3.16) and using (2.15), we find
\[\Lambda=\frac{\rho r}{2}-\sigma(n-1). \tag{3.17}\]
Now, comparing (2.18) and (3.16), we have \(r=\frac{n-1}{\sigma}+n(n-1)\), which, when used in (3.17), gives \(\Lambda=-\sigma(n-1)+\frac{\rho(n-1)(1+n\sigma)}{2\sigma}.\) Thus, we have the following theorem:
**Theorem 3.5**.: _An \(n\)-dimensional \(LP\)-Kenmotsu manifold with constant scalar curvature admitting a Ricci-Yamabe soliton \((g,\zeta,\Lambda,\sigma,\rho)\) is a \(\nu\)-Einstein manifold. Moreover, the soliton is expanding, steady or shrinking according as \(\frac{\rho}{\sigma}>2\sigma-\rho n\), \(\frac{\rho}{\sigma}=2\sigma-\rho n\), or \(\frac{\rho}{\sigma}<2\sigma-\rho n\)._
## 4. **Gradient Ricci-Yamabe solitons on \(LP\)-Kenmotsu manifolds**
**Definition 4.1**.: _A Riemannian \((\)or semi-Riemannian\()\) metric \(g\) on \(M\) is called a gradient RYS, if_
\[Hessv+\sigma S+(\Lambda-\frac{\rho r}{2})g=0, \tag{4.1}\]
_where \(Hessv\) denotes the Hessian of a smooth function \(v\) on \(M\) and defined by \(Hessv=\nabla\nabla v\)._
Let \(M\) be an \(n\)-dimensional \(LP\)-Kenmotsu manifold with \(g\) as a gradient RYS. Then equation (4.1) can be written as
\[\nabla_{E}Dv+\sigma QE+(\Lambda-\frac{\rho r}{2})E=0, \tag{4.2}\]
for all vector fields \(E\) on \(M\), where \(D\) denotes the gradient operator of \(g\). Taking the covariant derivative of (4.2) with respect to \(F\), we have
\[\nabla_{F}\nabla_{E}Dv=-\sigma\{(\nabla_{F}Q)E+Q(\nabla_{F}E)\}+\rho\frac{F(r) }{2}E-(\Lambda-\frac{\rho r}{2})\nabla_{F}E. \tag{4.3}\]
Interchanging \(E\) and \(F\) in (4.3), we lead to
\[\nabla_{E}\nabla_{F}Dv=-\sigma\{(\nabla_{E}Q)F+Q(\nabla_{E}F)\}+\rho\frac{E(r )}{2}F-(\Lambda-\frac{\rho r}{2})\nabla_{E}F. \tag{4.4}\]
By making use of (4.2)-(4.4) together with the Ricci identity \(R(E,F)Dv=\nabla_{E}\nabla_{F}Dv-\nabla_{F}\nabla_{E}Dv-\nabla_{[E,F]}Dv\), we find
\[R(E,F)Dv=\sigma\{(\nabla_{F}Q)E-(\nabla_{E}Q)F\}+\frac{\rho}{2}\{E(r)F-F(r)E\}. \tag{4.5}\]
Now, from (2.18), we find
\[QE=(\frac{r}{n-1}-1)E+(\frac{r}{n-1}-n)\nu(E)\zeta,\]
which on taking the covariant derivative with respect to \(F\) leads to
\[(\nabla_{F}Q)E=\frac{F(r)}{n-1}(E+\nu(E)\zeta)-\Big(\frac{r}{n-1}-n\Big)\big(g(E,F)\zeta+2\nu(E)\nu(F)\zeta+\nu(E)F\big). \tag{4.6}\]
By using (4.6) in (4.5), we have
\[R(E,F)Dv=\frac{(n-1)\rho-2\sigma}{2(n-1)}\{E(r)F-F(r)E\}+\frac{\sigma}{n-1}\{F(r)\nu(E)\zeta-E(r)\nu(F)\zeta\}-\sigma\Big(\frac{r}{n-1}-n\Big)(\nu(E)F-\nu(F)E). \tag{4.7}\]
Contracting the foregoing equation along \(E\) gives
\[S(F,Dv)=\Big\{\frac{(n-1)^{2}\rho-2\sigma(n-2)}{n-1}\Big\}F(r)+\frac{\sigma(n-3)(r-n(n-1))}{n-1}\nu(F). \tag{4.8}\]
From the equation (2.18), we can write
\[S(F,Dv)=(\frac{r}{n-1}-1)F(v)+(\frac{r}{n-1}-n)\nu(F)\zeta(v). \tag{4.9}\]
Now, by equating (4.8) and (4.9), then putting \(F=\zeta\) and using (2.1), (2.19), we find
\[\zeta(v)=\frac{r-n(n-1)}{n-1}\{2(n-1)\rho-\frac{\sigma(5n-13)}{n-1}\}. \tag{4.10}\]
Taking the inner product of (4.7) with \(\zeta\), we get
\[F(v)\nu(E)-E(v)\nu(F)=\frac{\rho}{2}\{E(r)\nu(F)-F(r)\nu(E)\},\]
which by replacing \(E\) by \(\zeta\) and using (2.19), (4.10), we infer
\[F(v)=-(r-n(n-1))\{3\rho-\frac{\sigma(5n-13)}{(n-1)^{2}}\}\nu(F)-\frac{\rho}{2} F(r). \tag{4.11}\]
If we take \(r\) as constant, then from Remark 2.5, we get \(r=n(n-1)\). Thus, (4.11) leads to \(F(v)=0\). This implies that \(v\) is constant. Thus, the soliton under consideration is trivial. Hence we state:
**Theorem 4.2**.: _If the metric of an \(LP\)-Kenmotsu manifold of constant scalar curvature admitting a special type of vector field is a gradient RYS, then the soliton is trivial._
For \(v\) constant, (1.2) turns to
\[\sigma QE=-(\Lambda-\frac{\rho r}{2})E,\]
which leads to
\[S(E,F)=-\frac{1}{\sigma}(\Lambda-\frac{\rho n(n-1)}{2})g(E,F),\quad\sigma\neq 0. \tag{4.12}\]
By putting \(E=F=\zeta\) in (4.12) and using (2.15), we obtain
\[\Lambda=\frac{\rho n(n-1)}{2}-\sigma(n-1). \tag{4.13}\]
**Corollary 4.3**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits a gradient RYS with constant scalar curvature, then the manifold under consideration is an Einstein manifold and \(\Lambda=\frac{\rho n(n-1)}{2}-\sigma(n-1).\)_
For \(\sigma=1\) and \(\rho=0\), from (4.13) we find \(\Lambda=-(n-1)\). Thus, we have the following:
**Corollary 4.4**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits a gradient Ricci soliton with constant scalar curvature, then the soliton is shrinking._
For \(\sigma=1\) and \(\rho=-1\), from (4.13) we have \(\Lambda=-\frac{(n-1)(n+2)}{2}\). Thus, we have the following:
**Corollary 4.5**.: _If an \(n\)-dimensional \(LP\)-Kenmotsu manifold admits a gradient Einstein soliton with constant scalar curvature, then the soliton is shrinking._
**Example.** We consider the 5-dimensional manifold \(M^{5}=\big{\{}(x_{1},x_{2},x_{3},x_{4},x_{5})\in\mathbb{R}^{5}:x_{5}>0\big{\}}\), where \((x_{1},x_{2},x_{3},x_{4},x_{5})\) are the standard coordinates in \(\mathbb{R}^{5}\). Let \(\varrho_{1}\), \(\varrho_{2}\), \(\varrho_{3}\), \(\varrho_{4}\) and \(\varrho_{5}\) be the vector fields on \(M^{5}\) given by
\[\varrho_{1}=e^{x_{5}}\frac{\partial}{\partial x_{1}},\ \varrho_{2}=e^{x_{5}} \frac{\partial}{\partial x_{2}},\ \varrho_{3}=e^{x_{5}}\frac{\partial}{\partial x_{3}},\ \varrho_{4}=e^{x_{5}}\frac{\partial}{\partial x_{4}},\ \varrho_{5}=\frac{ \partial}{\partial x_{5}}=\zeta,\]
which are linearly independent at each point of \(M^{5}\). Let \(g\) be the Lorentzian metric defined by
\[g(\varrho_{i},\varrho_{i})=1,\quad\text{ for }\quad 1\leq i\leq 4\quad\text{ and }\quad g(\varrho_{5},\varrho_{5})=-1,\]
\[g(\varrho_{i},\varrho_{j})=0,\quad\text{ for }\quad i\neq j,\quad 1\leq i,j \leq 5.\]
Let \(\nu\) be the \(1\)-form defined by \(\nu(E)=g(E,\varrho_{5})=g(E,\zeta)\) for all \(E\in\chi(M^{5})\), and let \(\varphi\) be the \((1,1)\)-tensor field defined by
\[\varphi\varrho_{1}=-\varrho_{2},\ \varphi\varrho_{2}=-\varrho_{1},\ \varphi \varrho_{3}=-\varrho_{4},\ \varphi\varrho_{4}=-\varrho_{3},\ \varphi\varrho_{5}=0.\]
By applying linearity of \(\varphi\) and \(g\), we have
\[\nu(\zeta)=g(\zeta,\zeta)=-1,\ \varphi^{2}E=E+\nu(E)\zeta\text{ and }g(\varphi E,\varphi F)=g(E,F)+\nu(E)\nu(F)\]
for all \(E,F\in\chi(M^{5})\). Thus for \(\varrho_{5}=\zeta\), the structure \((\varphi,\zeta,\nu,g)\) defines a Lorentzian almost paracontact metric structure on \(M^{5}\). Then we have
\[[\varrho_{i},\varrho_{j}]=-\varrho_{i},\quad\text{ for }\quad 1\leq i\leq 4,j=5,\]
\[[\varrho_{i},\varrho_{j}]=0,\quad\text{ otherwise.}\]
By using Koszul's formula, we obtain
\[\nabla_{\varrho_{i}}\varrho_{j}=\begin{cases}-\varrho_{5},&1\leq i=j\leq 4, \\ -\varrho_{i},&1\leq i\leq 4,j=5,\\ 0,&otherwise.\end{cases}\]
Also one can easily verify that
\[\nabla_{E}\zeta=-E-\nu(E)\zeta\quad\text{ and }\quad(\nabla_{E}\varphi)F=-g(\varphi E,F)\zeta-\nu(F)\varphi E.\]
Therefore, the manifold is an \(LP\)-Kenmotsu manifold.
From the above results, we can easily obtain the non-vanishing components of \(R\) as follows:
\[R(\varrho_{1},\varrho_{2})\varrho_{1}=-\varrho_{2},\ R(\varrho_{1},\varrho_{2 })\varrho_{2}=\varrho_{1},\ R(\varrho_{1},\varrho_{3})\varrho_{1}=-\varrho_{3 },\ R(\varrho_{1},\varrho_{3})\varrho_{3}=\varrho_{1},\]
\[R(\varrho_{1},\varrho_{4})\varrho_{1}=-\varrho_{4},\ R(\varrho_{1},\varrho_{4})\varrho_{4}=\varrho_{1},\ R(\varrho_{1},\varrho_{5})\varrho_{1}=-\varrho_{5},\ R(\varrho_{1},\varrho_{5})\varrho_{5}=-\varrho_{1},\]
\[R(\varrho_{2},\varrho_{3})\varrho_{2}=-\varrho_{3},\ R(\varrho_{2},\varrho_{3 })\varrho_{3}=\varrho_{2},\ R(\varrho_{2},\varrho_{4})\varrho_{2}=-\varrho_{4 },\ R(\varrho_{2},\varrho_{4})\varrho_{4}=\varrho_{2},\]
\[R(\varrho_{2},\varrho_{5})\varrho_{2}=-\varrho_{5},\ R(\varrho_{2},\varrho_{5} )\varrho_{5}=-\varrho_{2},\ R(\varrho_{3},\varrho_{4})\varrho_{3}=-\varrho_{4 },\ R(\varrho_{3},\varrho_{4})\varrho_{4}=\varrho_{3},\]
\[R(\varrho_{3},\varrho_{5})\varrho_{3}=-\varrho_{5},\ R(\varrho_{3},\varrho_{5 })\varrho_{5}=-\varrho_{3},\ R(\varrho_{4},\varrho_{5})\varrho_{4}=-\varrho_{5 },\ R(\varrho_{4},\varrho_{5})\varrho_{5}=-\varrho_{4}.\]
Also, we calculate the components of the Ricci tensor as follows:
\[S(\varrho_{1},\varrho_{1})=S(\varrho_{2},\varrho_{2})=S(\varrho_{3},\varrho_{3 })=S(\varrho_{4},\varrho_{4})=4,\quad\ S(\varrho_{5},\varrho_{5})=-4.\]
Therefore, we have
\[r=S(\varrho_{1},\varrho_{1})+S(\varrho_{2},\varrho_{2})+S(\varrho_{3},\varrho_ {3})+S(\varrho_{4},\varrho_{4})-S(\varrho_{5},\varrho_{5})=20.\]
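The curvature data above can be verified symbolically. Below is a sketch (ours, using sympy; not from the paper) that computes the Ricci tensor of the coordinate metric \(g=e^{-2x_{5}}(dx_{1}^{2}+\cdots+dx_{4}^{2})-dx_{5}^{2}\), for which \(\varrho_{1},\ldots,\varrho_{5}\) is an orthonormal frame, and confirms \(S=4g\) and \(r=20\):

```python
import sympy as sp

x = sp.symbols('x1:6')
n = 5
# coordinate metric whose orthonormal frame is rho_1,...,rho_5 above
g = sp.diag(sp.exp(-2*x[4]), sp.exp(-2*x[4]), sp.exp(-2*x[4]),
            sp.exp(-2*x[4]), -1)
ginv = g.inv()

def Gamma(k, i, j):
    # Christoffel symbols of the Levi-Civita connection of g
    return sum(ginv[k, l]*(sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                           - sp.diff(g[i, j], x[l]))
               for l in range(n))/2

def Ricci(i, j):
    # R_ij = d_k G^k_ij - d_j G^k_ik + G^k_kl G^l_ij - G^k_jl G^l_ik
    expr = sum(sp.diff(Gamma(k, i, j), x[k]) - sp.diff(Gamma(k, i, k), x[j])
               for k in range(n))
    expr += sum(Gamma(k, k, l)*Gamma(l, i, j) - Gamma(k, j, l)*Gamma(l, i, k)
                for k in range(n) for l in range(n))
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda i, j: Ricci(i, j))
r = sp.simplify(sum(ginv[i, j]*Ric[i, j] for i in range(n) for j in range(n)))
assert sp.simplify(Ric - 4*g) == sp.zeros(n, n)  # S = 4g: Einstein
assert r == 20                                   # r = n(n-1) = 20, as above
```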
Now by taking \(Dv=(\varrho_{1}v)\varrho_{1}+(\varrho_{2}v)\varrho_{2}+(\varrho_{3}v)\varrho_ {3}+(\varrho_{4}v)\varrho_{4}+(\varrho_{5}v)\varrho_{5}\), we have
\[\nabla_{\varrho_{1}}Dv=(\varrho_{1}(\varrho_{1}v)-(\varrho_{5}v))\varrho_{1}+( \varrho_{1}(\varrho_{2}v))\varrho_{2}+(\varrho_{1}(\varrho_{3}v))\varrho_{3} +(\varrho_{1}(\varrho_{4}v))\varrho_{4}+(\varrho_{1}(\varrho_{5}v)-(\varrho_{1 }v))\varrho_{5},\]
\[\nabla_{\varrho_{2}}Dv=(\varrho_{2}(\varrho_{1}v))\varrho_{1}+(\varrho_{2}( \varrho_{2}v)-(\varrho_{5}v))\varrho_{2}+(\varrho_{2}(\varrho_{3}v))\varrho_{3} +(\varrho_{2}(\varrho_{4}v))\varrho_{4}+(\varrho_{2}(\varrho_{5}v)-(\varrho_{2 }v))\varrho_{5},\]
\[\nabla_{\varrho_{3}}Dv=(\varrho_{3}(\varrho_{1}v))\varrho_{1}+(\varrho_{3}( \varrho_{2}v))\varrho_{2}+(\varrho_{3}(\varrho_{3}v)-(\varrho_{5}v))\varrho_{3} +(\varrho_{3}(\varrho_{4}v))\varrho_{4}+(\varrho_{3}(\varrho_{5}v)-(\varrho_{3 }v))\varrho_{5},\]
\[\nabla_{\varrho_{4}}Dv=(\varrho_{4}(\varrho_{1}v))\varrho_{1}+(\varrho_{4}( \varrho_{2}v))\varrho_{2}+(\varrho_{4}(\varrho_{3}v))\varrho_{3}+(\varrho_{4 }(\varrho_{4}v)-(\varrho_{5}v))\varrho_{4}+(\varrho_{4}(\varrho_{5}v)-(\varrho_{4 }v))\varrho_{5},\]
\[\nabla_{\varrho_{5}}Dv=(\varrho_{5}(\varrho_{1}v))\varrho_{1}+(\varrho_{5}( \varrho_{2}v))\varrho_{2}+(\varrho_{5}(\varrho_{3}v))\varrho_{3}+(\varrho_{5}( \varrho_{4}v))\varrho_{4}+(\varrho_{5}(\varrho_{5}v))\varrho_{5}.\]
Thus, by virtue of (4.2), we obtain
\[\begin{cases}\varrho_{1}(\varrho_{1}v)-\varrho_{5}v=-(\Lambda+4\sigma-10\rho),\\ \varrho_{2}(\varrho_{2}v)-\varrho_{5}v=-(\Lambda+4\sigma-10\rho),\\ \varrho_{3}(\varrho_{3}v)-\varrho_{5}v=-(\Lambda+4\sigma-10\rho),\\ \varrho_{4}(\varrho_{4}v)-\varrho_{5}v=-(\Lambda+4\sigma-10\rho),\\ \varrho_{5}(\varrho_{5}v)=-(\Lambda+4\sigma-10\rho),\\ \varrho_{1}(\varrho_{2}v)=\varrho_{1}(\varrho_{3}v)=\varrho_{1}(\varrho_{4}v) =0,\\ \varrho_{2}(\varrho_{1}v)=\varrho_{2}(\varrho_{3}v)=\varrho_{2}(\varrho_{4}v) =0,\\ \varrho_{3}(\varrho_{1}v)=\varrho_{3}(\varrho_{2}v)=\varrho_{3}(\varrho_{4}v) =0,\\ \varrho_{4}(\varrho_{1}v)=\varrho_{4}(\varrho_{2}v)=\varrho_{4}(\varrho_{3}v) =0,\\ \varrho_{1}(\varrho_{5}v)-(\varrho_{1}v)=\varrho_{2}(\varrho_{5}v)-(\varrho_{2} v)=0,\\ \varrho_{3}(\varrho_{5}v)-(\varrho_{3}v)=\varrho_{4}(\varrho_{5}v)-(\varrho_{4} v)=0.\end{cases} \tag{4.14}\]
Thus, the equations in (4.14) amount, respectively, to
\[e^{2x_{5}}\frac{\partial^{2}v}{\partial x_{1}^{2}}-\frac{\partial v}{\partial x _{5}}=-(\Lambda+4\sigma-10\rho),\]
\[e^{2x_{5}}\frac{\partial^{2}v}{\partial x_{2}^{2}}-\frac{\partial v}{\partial x _{5}}=-(\Lambda+4\sigma-10\rho),\]
\[e^{2x_{5}}\frac{\partial^{2}v}{\partial x_{3}^{2}}-\frac{\partial v}{\partial x _{5}}=-(\Lambda+4\sigma-10\rho),\]
\[e^{2x_{5}}\frac{\partial^{2}v}{\partial x_{4}^{2}}-\frac{\partial v}{\partial x _{5}}=-(\Lambda+4\sigma-10\rho),\]
\[\frac{\partial^{2}v}{\partial x_{5}^{2}}=-(\Lambda+4\sigma-10\rho),\]
\[\frac{\partial^{2}v}{\partial x_{1}\partial x_{2}}=\frac{\partial^{2}v}{ \partial x_{1}\partial x_{3}}=\frac{\partial^{2}v}{\partial x_{1}\partial x _{4}}=\frac{\partial^{2}v}{\partial x_{2}\partial x_{3}}=\frac{\partial^{2}v }{\partial x_{2}\partial x_{4}}=\frac{\partial^{2}v}{\partial x_{3}\partial x _{4}}=0,\]
\[e^{x_{5}}\frac{\partial^{2}v}{\partial x_{5}\partial x_{1}}+\frac{\partial v }{\partial x_{1}}=e^{x_{5}}\frac{\partial^{2}v}{\partial x_{5}\partial x_{2}} +\frac{\partial v}{\partial x_{2}}=e^{x_{5}}\frac{\partial^{2}v}{\partial x _{5}\partial x_{3}}+\frac{\partial v}{\partial x_{3}}=e^{x_{5}}\frac{\partial ^{2}v}{\partial x_{5}\partial x_{4}}+\frac{\partial v}{\partial x_{4}}=0.\]
From the above equations it is observed that \(v\) is constant for \(\Lambda=-4\sigma+10\rho\). Hence, equation (4.2) is satisfied. Thus, \(g\) is a gradient RYS with the soliton vector field \(K=Dv\), where \(v\) is constant and \(\Lambda=-4\sigma+10\rho\). Hence, Theorem 4.2 is verified.
|
2309.09145 | Maker-Breaker Rado games for equations with radicals | We study two-player positional games where Maker and Breaker take turns to
select a previously unoccupied number in $\{1,2,\ldots,n\}$. Maker wins if the
numbers selected by Maker contain a solution to the equation \[
x_1^{1/\ell}+\cdots+x_k^{1/\ell}=y^{1/\ell} \] where $k$ and $\ell$ are
integers with $k\geq2$ and $\ell\neq0$, and Breaker wins if they can stop
Maker. Let $f(k,\ell)$ be the smallest positive integer $n$ such that Maker has
a winning strategy when $x_1,\ldots,x_k$ are not necessarily distinct, and let
$f^*(k,\ell)$ be the smallest positive integer $n$ such that Maker has a
winning strategy when $x_1,\ldots,x_k$ are distinct.
When $\ell\geq1$, we prove that, for all $k\geq2$, $f(k,\ell)=(k+2)^\ell$ and
$f^*(k,\ell)=(k^2+3)^\ell$; when $\ell\leq-1$, we prove that
$f(k,\ell)=[k+\Theta_k(1)]^{-\ell}$ and $f^*(k,\ell)=[\exp(O_k(k\log
k))]^{-\ell}$. Our proofs use elementary combinatorial arguments as well as
results from number theory and arithmetic Ramsey theory. | Collier Gaiser, Paul Horn | 2023-09-17T03:26:50Z | http://arxiv.org/abs/2309.09145v2 | # Maker-Breaker Rado Games for Equations with Radicals
###### Abstract.
We study two-player positional games where Maker and Breaker take turns to select a previously unoccupied number in \(\{1,2,\ldots,n\}\). Maker wins if the numbers selected by Maker contain a solution to the equation
\[x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\]
where \(k\) and \(\ell\) are integers with \(k\geq 2\) and \(\ell\neq 0\), and Breaker wins if they can stop Maker. Let \(f(k,\ell)\) be the smallest positive integer \(n\) such that Maker has a winning strategy when \(x_{1},\ldots,x_{k}\) are not necessarily distinct, and let \(f^{*}(k,\ell)\) be the smallest positive integer \(n\) such that Maker has a winning strategy when \(x_{1},\ldots,x_{k}\) are distinct.
When \(\ell\geq 1\), we prove that, for all \(k\geq 2\), \(f(k,\ell)=(k+2)^{\ell}\) and \(f^{*}(k,\ell)=(k^{2}+3)^{\ell}\); when \(\ell\leq-1\), we prove that \(f(k,\ell)=[k+\Theta(1)]^{-\ell}\) and \(f^{*}(k,\ell)=[\exp(O(k\log k))]^{-\ell}\). Our proofs use elementary combinatorial arguments as well as results from number theory and arithmetic Ramsey theory.
Key words and phrases:Rado games, extremal combinatorics, homogeneous equations, fractional powers 2020 Mathematics Subject Classification: 91A46,05D10,11D72
## 1. Introduction
Let \(\mathcal{F}\) be a family of finite subsets of \(\mathbb{N}:=\{1,2,\ldots\}\) and \(n\in\mathbb{N}\). Maker-Breaker games played on \([n]:=\{1,2,\ldots,n\}\) with winning sets \(\mathcal{F}\) are two-player positional games where Maker and Breaker take turns to select a previously unoccupied number in \([n]\). Maker goes first. Maker wins if they can occupy a set in \(\mathcal{F}\) and Breaker wins otherwise. The van der Waerden games introduced by Beck [1] are games of this type. In van der Waerden games, \(\mathcal{F}\) is the set of \(k\)-term arithmetic progressions for a fixed \(k\). These games were motivated by van der Waerden's theorem [23], which says that if \(\mathbb{N}\) is partitioned into two classes, then one of them contains arbitrarily long arithmetic progressions. By the compactness principle [10, Chapter 1] and strategy stealing [2, Section 5] (see also [14, Chapter 1]), Maker can win the van der Waerden games if \(n\) is large enough. Therefore, one would naturally want to find the smallest \(n\) such that Maker can win the van der Waerden games. Beck [1] proved that, for any given \(k\), the smallest \(n\) such that Maker has a winning strategy for the van der Waerden games is between \(2^{k-7k^{7/8}}\) and \(k^{3}2^{k-4}\).
Recently, Kusch, Rue, Spiegel, and Szabo [17] studied a generalization of van der Waerden games called Rado games. In Rado games, \(\mathcal{F}\) is the set of solutions to a system of linear equations. By Rado's theorem [21], if \(n\) is large enough, then Maker is guaranteed to win the Rado games if the system of linear equations satisfies the so-called columns condition. Kusch, Rue, Spiegel, and Szabo allowed Maker to select \(q\geq 1\) numbers each round and derived asymptotic thresholds of \(q\) for Breaker to win. Their result on \(3\)-term arithmetic progressions was later improved by Cao et al. [7]. Hancock [12] replaced \([n]\) with a random subset of \([n]\) where each number is included with probability \(p\) and proved asymptotic thresholds of \(p\) for Breaker and/or Maker to win. However, unlike the van der Waerden games, the smallest \(n\) such that Maker wins the unbiased and deterministic Rado games has been left unstudied.
In this paper, we study the smallest positive integer \(n\) such that Maker wins the Rado games on \([n]\) when \(\mathcal{F}\) is the set of solutions to the equation
\[x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell} \tag{1.1}\]
where \(k\) and \(\ell\) are integers with \(k\geq 2\) and \(\ell\neq 0\). Equation (1.1) is connected with results in arithmetic Ramsey theory. In arithmetic Ramsey theory, a system of equations \(E(x_{1},\ldots,x_{k},y)=0\) in variables \(x_{1},\ldots,x_{k},y\) is called **partition regular** if whenever \(\mathbb{N}\) is partitioned into a finite number of classes, one of them contains a solution to \(E(x_{1},\ldots,x_{k},y)=0\). In 1991, Lefmann [18] proved that, among other things, Equation (1.1) is partition regular for all \(\ell\in\mathbb{Z}\backslash\{0\}\). In the same year, Brown and Rodl [6] proved that if a system \(E(x_{1},\ldots,x_{k},y)=0\) of homogeneous equations is partition regular, then the system \(E(1/x_{1},\ldots,1/x_{k},1/y)=0\) is also partition regular.
To state our results, we first define the games we study in detail. Let \(A\subseteq\mathbb{N}\) be a finite set and let \(e(x_{1},\ldots,x_{k},y)=0\) be an equation in variables \(x_{1},\ldots,x_{k},y\). The Maker-Breaker Rado games denoted \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) and \(G^{*}(A,e(x_{1},\ldots,x_{k},y)=0)\) have the following rules:
1. Maker and Breaker take turns to select a number from \(A\). Once a number is selected by a player, neither players can select that number again. Maker starts the game.
2. Maker wins the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game if a collection of the numbers chosen by Maker forms a solution to \(e(x_{1},\ldots,x_{k},y)=0\) where \(x_{1},\ldots,x_{k}\) are _not_ necessarily distinct; and Maker wins the \(G^{*}(A,e(x_{1},\ldots,x_{k},y)=0)\) game if a collection of the numbers chosen by Maker forms a solution to \(e(x_{1},\ldots,x_{k},y)=0\) where \(x_{1},\ldots,x_{k}\) are distinct.
3. Breaker wins if Maker fails to occupy a solution to \(e(x_{1},\ldots,x_{k},y)=0\).
If \(A=[n]\) for some \(n\in\mathbb{N}\), then we write \(G(n,e(x_{1},\ldots,x_{k},y)=0):=G([n],e(x_{1},\ldots,x_{k},y)=0)\) and \(G^{*}(n,e(x_{1},\ldots,x_{k},y)=0):=G^{*}([n],e(x_{1},\ldots,x_{k},y)=0)\). We use the following shorter notations for games with Equation (1.1):
\[G(n,k,\ell):=G\left(n,x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\right)\]
and
\[G^{*}(n,k,\ell):=G^{*}\left(n,x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell} \right).\]
We say that a player wins a game if there is a **winning strategy** which guarantees that this player wins no matter what the other player does. A winning strategy is a set of instructions which tells the player what to do each round given what has been previously played by both players. Let \(f(k,\ell)\) be the smallest positive integer \(n\) such that Maker wins the \(G(n,k,\ell)\) game and let \(f^{*}(k,\ell)\) be the smallest positive integer \(n\) such that Maker wins the \(G^{*}(n,k,\ell)\) game.
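For tiny parameters, these games are small enough to solve by exhaustive search. The following minimax sketch (our own illustration in Python; it plays no role in the proofs) decides whether Maker wins \(G(n,k,1)\) or \(G^{*}(n,k,1)\), and later sections use it only as a sanity check:

```python
from itertools import combinations, combinations_with_replacement
from functools import lru_cache

def maker_wins(n, k, distinct=False):
    """Does Maker win G(n,k,1) (or G*(n,k,1) if distinct=True)? Brute force."""
    pick = combinations if distinct else combinations_with_replacement
    # winning sets: supports of solutions x_1 + ... + x_k = y inside [n]
    # (y is never among the x_i, since k >= 2 and every x_i >= 1)
    wins = [frozenset(xs) | {sum(xs)}
            for xs in pick(range(1, n + 1), k) if sum(xs) <= n]

    @lru_cache(maxsize=None)
    def turn(maker, breaker, makers_move):
        if any(w <= maker for w in wins):
            return True
        free = [v for v in range(1, n + 1)
                if v not in maker and v not in breaker]
        if not free:
            return False
        if makers_move:
            return any(turn(maker | {v}, breaker, False) for v in free)
        return all(turn(maker, breaker | {v}, True) for v in free)

    return turn(frozenset(), frozenset(), True)

print(maker_wins(4, 2))  # True: Maker wins G(4,2,1), cf. Theorem 1.1
```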
For \(\ell\geq 1\), we are able to find exact formulas for \(f(k,\ell)\) and \(f^{*}(k,\ell)\).
**Theorem 1.1**.: _For all integers \(k\geq 2\) and \(\ell\geq 1\), we have \(f(k,\ell)=(k+2)^{\ell}\)._
**Theorem 1.2**.: _For all integers \(k\geq 2\) and \(\ell\geq 1\), we have \(f^{*}(k,\ell)=(k^{2}+3)^{\ell}\)._
Our proofs of Theorems 1.1 and 1.2 involve showing that \(f(k,1)=k+2\) and \(f^{*}(k,1)=k^{2}+3\) using elementary combinatorial arguments, and that \(f(k,\ell)\leq[f(k,1)]^{\ell}\) and \(f^{*}(k,\ell)\leq[f^{*}(k,1)]^{\ell}\) using a result of Besicovitch [3] on the linear independence of integers with fractional powers.
For \(\ell\leq-1\), our main results are the following:
**Theorem 1.3**.: _Let \(k,\ell\) be integers with \(\ell\leq-1\). Then \(f(k,\ell)=[k+\Theta(1)]^{-\ell}\). More specifically, if \(k\geq 1/(2^{-1/\ell}-1)\), then \(f(k,\ell)\geq(k+1)^{-\ell}\); and if \(k\geq 4\), then \(f(k,\ell)\leq(k+2)^{-\ell}\)._
**Theorem 1.4**.: _Let \(k,\ell\) be integers with \(\ell\leq-1\). Then \(f^{*}(k,\ell)=[\exp(O(k\log k))]^{-\ell}\)._
The proof of Theorem 1.4 involves showing that \(f^{*}(k,-1)=\exp(O(k\log k))\) using a game theoretic variant of a theorem in arithmetic Ramsey theory by Brown and Rodl [6].
Our results indicate that it is "easier" to form a solution to Equation (1.1) strategically compared to their counterparts in arithmetic Ramsey theory. To illustrate this, let \(R(k,\ell)\) be the smallest positive integer \(n\) such that if \([n]\) is partitioned into two classes then one of them has a solution to Equation (1.1) with \(x_{1},\ldots,x_{k}\) not necessarily distinct, and let \(R^{*}(k,\ell)\) be the smallest positive integer \(n\) such that if \([n]\) is partitioned into two classes then one of them has a solution to
Equation (1.1) with \(x_{1},\ldots,x_{k}\) distinct. Note that by strategy stealing, we have \(f(k,\ell)\leq R(k,\ell)\) and \(f^{*}(k,\ell)\leq R^{*}(k,\ell)\). When \(\ell\in\{-1,1\}\), some results on \(R(k,\ell)\) and \(R^{*}(k,\ell)\) are known.
For \(\ell=1\), Beutelspacher and Brestovansky [4] proved that \(R(k,1)=k^{2}+k-1\). The exact formula for \(R^{*}(k,1)\) is not known, but Boza, Revuelta, and Sanz [5] proved that, for \(k\geq 6\), \(R^{*}(k,1)\geq(k^{3}+3k^{2}-2k)/2\). Hence, by Theorems 1.1 and 1.2, we have
\[\lim_{k\to\infty}\frac{f(k,1)}{R(k,1)}=\lim_{k\to\infty}\frac{f^{*}(k,1)}{R^{* }(k,1)}=0.\]
For \(\ell=-1\), Myers and Parrish [19] calculated that \(R(2,-1)=60\), \(R(3,-1)=40\), \(R(4,-1)=48\), and \(R(5,-1)=39\); and the first author [9] proved that \(R(k,-1)\geq k^{2}\). So by Theorem 1.3, we have
\[\lim_{k\to\infty}\frac{f(k,-1)}{R(k,-1)}=0. \tag{1.2}\]
Unfortunately, we don't know a similar lower bound for \(R^{*}(k,-1)\). However, we believe that Maker can still do better by strategically selecting numbers.
**Conjecture 1.5**.: \(\lim_{k\to\infty}f^{*}(k,-1)/R^{*}(k,-1)=0\)_._
This paper is organized as follows. We first prove some preliminary results in Section 2. The next four sections are devoted to proving Theorems 1.1 to 1.4. In Section 7, we study Rado games for linear equations with arbitrary coefficients. We discuss some future research directions in Section 8.
### Asymptotic Notation
We use standard asymptotic notation and all limits are in terms of \(k\) throughout this paper. For functions \(f(k)\) and \(g(k)\), \(f(k)=O\big{(}g(k)\big{)}\) if there exist constants \(K\) and \(C\) such that \(|f(k)|\leq C|g(k)|\) for all \(k\geq K\); \(f(k)=\Omega(g(k))\) if there exist constants \(K^{\prime}\) and \(c\) such that \(|f(k)|\geq c|g(k)|\) for all \(k\geq K^{\prime}\); \(f(k)=\Theta(g(k))\) if \(f(k)=O(g(k))\) and \(f(k)=\Omega(g(k))\); and \(f(k)=o(g(k))\) if \(\lim_{k\to\infty}f(k)/g(k)=0\).
## 2. Preliminaries
We prove some results which will be used to prove Theorems 1.1 to 1.4. Our first result shows that the games for equations with radicals can be partially reduced to games for equation without radicals, i.e., \(\ell=1\) or \(\ell=-1\).
**Lemma 2.1**.: _Let \(k\) and \(\ell\) be integers with \(k\geq 2\) and \(\ell\neq 0\). If \(\ell\geq 1\), then_
\[f(k,\ell)\leq\left[f(k,1)\right]^{\ell}\text{ and }f^{*}(k,\ell)\leq\left[f^{*}(k,1)\right]^{\ell}.\]
_If \(\ell\leq-1\), then_
\[f(k,\ell)\leq\left[f(k,-1)\right]^{-\ell}\text{ and }f^{*}(k,\ell)\leq\left[f^{*}(k,-1)\right]^{-\ell}.\]
Proof.: Let \(k\) and \(\ell\) be integers with \(k\geq 2\) and \(\ell\neq 0\). We prove that \(f(k,\ell)\leq\left[f(k,1)\right]^{\ell}\). The other inequalities can be proved similarly.
Write \(M=f(k,1)\) and let \(\mathcal{M}\) be a Maker's winning strategy for the \(G(M,k,1)\) game. Notice that if \((x_{1},\ldots,x_{k},y)=(a_{1},\ldots,a_{k},b)\) is a solution to \(x_{1}+\cdots+x_{k}=y\), then \((x_{1},\ldots,x_{k},y)=(a_{1}^{\ell},\ldots,a_{k}^{\ell},b^{\ell})\) is a solution to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\).
For \(i=1,2,\ldots\), let \(m_{i}\in[M^{\ell}]:=\{1,2,\ldots,M^{\ell}\}\) be the number chosen by Maker and let \(b_{i}\in[M^{\ell}]\) be the number chosen by Breaker in round \(i\). We define a strategy for Maker recursively. Write \([M]^{\ell}:=\{1^{\ell},2^{\ell},\ldots,M^{\ell}\}\). In round \(1\), if \(\mathcal{M}\) tells Maker to choose \(a_{1}\) for the \(G(M,k,1)\) game, then set \(m_{1}=a_{1}^{\ell}\). If \(b_{1}=z_{1}^{\ell}\) for some \(z_{1}\in[M]\), then set \(b_{1}^{\prime}=z_{1}\); otherwise, arbitrarily set \(b_{1}^{\prime}\) equal to some number in \([M]\backslash\{a_{1}\}\). In round \(i\geq 2\), given \(a_{1},a_{2},\ldots,a_{i-1},b_{1}^{\prime},b_{2}^{\prime},\ldots,b_{i-1}^{\prime}\), if \(\mathcal{M}\) tells Maker to choose \(a_{i}\), then set \(m_{i}=a_{i}^{\ell}\). This is possible because \(\mathcal{M}\) is a winning strategy. If \(b_{i}=z_{i}^{\ell}\) for some \(z_{i}\in[M]\), then set \(b_{i}^{\prime}=z_{i}\); otherwise, arbitrarily set \(b_{i}^{\prime}\) equal to some number in \([M]\backslash\{a_{1},a_{2},\ldots,a_{i-1},a_{i},b_{1}^{\prime},b_{2}^{\prime},\ldots,b_{i-1}^{\prime}\}\).
Now since \(\mathcal{M}\) is a winning strategy, there exists \(t\) such that \(\{a_{1},a_{2},\ldots,a_{t}\}\) has a solution to \(x_{1}+\cdots+x_{k}=y\). Hence \(\{m_{1},m_{2},\ldots,m_{t}\}=\{a_{1}^{\ell},a_{2}^{\ell},\ldots,a_{t}^{\ell}\}\) has a solution to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\). Therefore, Maker wins the \(G([M^{\ell}],k,\ell)\) game.
Theorems 1.1 and 1.2 indicate that the inequalities in Lemma 2.1 are actually equalities when \(\ell\geq 2\). This is due to a result of Besicovitch [3]. To state this result, we first need the following definition.
**Definition 2.2**.: Let \(a\in\mathbb{N}\backslash\{1\}\). We say that \(a\) is **power-\(\ell\) free** if \(a=b^{\ell}c\), with \(b,c\in\mathbb{N}\), implies \(b=1\).
**Theorem 2.3** (Besicovitch [3]).: _For all positive integers \(\ell\geq 2\), the set_
\[A(\ell):=\{a^{1/\ell}:a\in\mathbb{N}\backslash\{1\}\text{ and $a$ is power-$\ell$ free}\}\]
_is linearly independent over \(\mathbb{Z}\). That is, if \(a_{1},\ldots,a_{m}\in A(\ell)\) are distinct and \(c_{1},\ldots,c_{m}\in\mathbb{Z}\) satisfy \(c_{1}a_{1}+\cdots+c_{m}a_{m}=0\), then \(c_{1}=\cdots=c_{m}=0\)._
Besicovitch [3] actually provided an elementary proof of a stronger result, but Theorem2.3 is enough for our purposes. For interested readers, we note that Richards [22] proved a similar result to the one in [3], but using Galois theory. A direct consequence of Theorem2.3 is the following result which will be used in proving Theorems1.1 and 1.2.
**Corollary 2.4**.: _Let \(k\geq 2\) and \(\ell\geq 1\) be integers. The solutions to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\) are of the form \((x_{1},\ldots,x_{k},y)=(ca_{1}^{\ell},\ldots,ca_{k}^{\ell},cb^{\ell})\) where \(a_{1},\ldots,a_{k},b,c\in\mathbb{N}\), \(a_{1}+\cdots+a_{k}=b\), and \(c\) is power-\(\ell\) free._
Proof.: Let \(k\geq 2\) and \(\ell\geq 1\) be integers. Suppose that \(\alpha_{1},\ldots,\alpha_{k},\beta\in\mathbb{N}\) satisfy
\[\alpha_{1}^{1/\ell}+\cdots+\alpha_{k}^{1/\ell}=\beta^{1/\ell}.\]
We write \(\alpha_{i}=c_{i}a_{i}^{\ell}\) for all \(i=1,...,k\), and \(\beta=db^{\ell}\) where \(a_{1},\ldots,a_{k},c_{1},\ldots,c_{k},b,d\in\mathbb{N}\) and \(c_{1},\ldots,c_{k},d\) are power-\(\ell\) free. Then we have
\[a_{1}c_{1}^{1/\ell}+\cdots+a_{k}c_{k}^{1/\ell}-bd^{1/\ell}=0. \tag{2.1}\]
We first show that \(c_{1}=\cdots=c_{k}=d\). Suppose, for a contradiction, that \(c_{1},\ldots,c_{k},d\) are not all the same. We split this into two cases.
Case 1: \(d\neq c_{i}\) for all \(i\in[k]\). After combining terms with the same \(\ell\)-th roots, the left-hand side of Equation 2.1 has at least two terms, one of them being \(-bd^{1/\ell}\). Now by Theorem 2.3, \(b=0\), which is a contradiction.
Case 2: \(d=c_{i}\) for some \(i\in[k]\). Then there exists \(j\in[k]\backslash\{i\}\) such that \(c_{j}\neq c_{i}\). After combining terms with the same \(\ell\)-th roots, the left-hand side of Equation 2.1 has a term with \(c_{j}^{1/\ell}\), because all the terms involving \(c_{j}^{1/\ell}\) have positive coefficients. By Theorem 2.3, the coefficient of \(c_{j}^{1/\ell}\) is zero after combining like terms. But this is impossible because the coefficient of \(c_{j}^{1/\ell}\) is the sum of a nonempty subset of \(\{a_{1},\ldots,a_{k}\}\), which consists of positive integers.
Hence we have \(c_{1}=\cdots=c_{k}=d\). Therefore, \(a_{1}+\cdots+a_{k}=b\).
We note that Newman [20] proved Corollary 2.4 for the case \(k=2\) without using Theorem 2.3.
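Corollary 2.4 is also easy to test by machine for \(k=\ell=2\). The sketch below (ours) enumerates all solutions of \(\sqrt{x_{1}}+\sqrt{x_{2}}=\sqrt{y}\) with entries up to a bound and checks that \(x_{1}\), \(x_{2}\), and \(y\) share the same squarefree (power-\(2\) free) part:

```python
import math

def core(m, ell=2):
    """Power-ell free part c of m, where m = b**ell * c (Definition 2.2)."""
    c, b = m, 2
    while b**ell <= c:
        while c % b**ell == 0:
            c //= b**ell
        b += 1
    return c

N = 500
for x1 in range(1, N + 1):
    for x2 in range(x1, N + 1):
        root = math.isqrt(x1*x2)
        if root*root != x1*x2:  # sqrt(x1) + sqrt(x2) is not sqrt of an integer
            continue
        y = x1 + x2 + 2*root    # (sqrt(x1) + sqrt(x2))**2
        if y <= N:
            assert core(x1) == core(x2) == core(y)
print("all solutions up to", N, "are of the form (c*a^2, c*b^2, c*(a+b)^2)")
```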
Next, we prove a game theoretic variant of a result by Brown and Rodl [6, Theorem 2.1]. We note that an equation \(e(x_{1},\ldots,x_{k},y)=0\) is homogeneous if whenever \((x_{1},\ldots,x_{k},y)=(a_{1},\ldots,a_{k},b)\) is a solution to \(e(x_{1},\ldots,x_{k},y)=0\), for all \(m\in\mathbb{N}\), \((x_{1},\ldots,x_{k},y)=(ma_{1},\ldots,ma_{k},mb)\) is a also a solution to \(e(x_{1},\ldots,x_{k},y)=0\).
**Theorem 2.5**.: _Let \(A\) be a finite subset of \(\mathbb{N}\), \(L\) the least common multiple of \(A\), \(k\in\mathbb{N}\), and \(e(x_{1},\ldots,x_{k},y)=0\) a homogeneous equation. If Maker wins the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game, then Maker wins the \(G(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game. Similarly, if Maker wins the \(G^{*}(A,e(x_{1},\ldots,x_{k},y)=0)\) game, then Maker wins the \(G^{*}(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game._
Proof.: Let \(k\in\mathbb{N}\) be an integer, \(A\subseteq\mathbb{N}\) a finite set, \(L\) the least common multiple of \(A\), and \(e(x_{1},\ldots,x_{k},y)=0\) a homogeneous equation. We prove that if Maker wins the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game, then Maker wins the \(G(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game. The statement for the \(G^{*}(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game can be proved in a similar way.
Suppose that Maker wins the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game. Let \(\mathcal{M}\) be a Maker's winning strategy. We consider the following Maker's strategy for the \(G(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game. In round 1, if \(\mathcal{M}\) tells Maker to choose \(m_{1}\) for the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game, then Maker chooses \(L/m_{1}\in\{1,\ldots,L\}\). The rest of the strategy is defined inductively. For all rounds \(i\), let \(L/b_{i}\) be the number chosen by Breaker and \(L/m_{i}\) be the number chosen by Maker, where \(m_{i}\in\{1,\ldots,L\}\). If \(b_{i}\in A\), then we set \(b^{\prime}_{i}=b_{i}\); if \(b_{i}\notin A\), then arbitrarily set \(b^{\prime}_{i}\) equal to some number in \(A\backslash\{m_{1},\ldots,m_{i},b^{\prime}_{1},\ldots,b^{\prime}_{i-1}\}\). For all rounds \(i\geq 2\), given \(\{m_{1},\ldots,m_{i-1},b^{\prime}_{1},\ldots,b^{\prime}_{i-1}\}\), if \(\mathcal{M}\) tells Maker to choose \(m_{i}\) for the \(G(A,e(x_{1},\ldots,x_{k},y)=0)\) game, then Maker chooses \(L/m_{i}\) for the \(G(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game. This process is possible because \(\mathcal{M}\) is a winning strategy.
Since \(\mathcal{M}\) is a winning strategy, in some round \(t\), there exists a subset \(\{a_{1},\ldots,a_{s}\}\) of \(\{m_{1},\ldots,m_{t}\}\) which forms a solution to \(e(x_{1},\ldots,x_{k},y)=0\). By homogeneity, \(\{L/a_{1},\ldots,L/a_{s}\}\) forms a solution to \(e(1/x_{1},\ldots,1/x_{k},1/y)=0\). So Maker wins the \(G(L,e(1/x_{1},\ldots,1/x_{k},1/y)=0)\) game.
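For a concrete instance of this construction (our toy example, not from [6]): take \(A=\{1,2,3\}\) and the homogeneous equation \(x_{1}+x_{2}=y\), so \(L=6\). Maker's solution \(1+2=3\) in \(A\) is mapped to \(\{L/1,L/2,L/3\}=\{6,3,2\}\subseteq[6]\), and indeed

\[\frac{1}{3}+\frac{1}{6}=\frac{1}{2}.\]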
The key feature of Theorem 2.5 is that one can choose a set \(A\) whose least common multiple \(L\) is small. This was not used by Brown and Rodl [6, Theorem 2.1]. For interested readers, we note that the first author [9] recently improved a quantitative result by Brown and Rodl [6, Theorem 2.5] with the help of this observation.
Finally, we also need the following definitions.
**Definition 2.6**.: Given \(m\in\mathbb{N}\) and mutually disjoint two-element subsets \(\{s_{1},t_{1}\}\), \(\{s_{2},t_{2}\}\),..., \(\{s_{m},t_{m}\}\) of \(\mathbb{N}\), the **pairing strategy** over those disjoint subsets for a player is the following: if their opponent chooses \(s_{i}\) for some \(i=1,2,\ldots,m\), then this player chooses \(t_{i}\).
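Breaker's side of a pairing strategy is completely mechanical; a minimal sketch (ours, in Python) of the responder is:

```python
def pairing_response(pairs, opponents_move):
    """Definition 2.6: answer a move inside a designated pair {s_i, t_i}
    with its partner; any other move may be answered arbitrarily."""
    for s, t in pairs:
        if opponents_move == s:
            return t
        if opponents_move == t:
            return s
    return None  # caller picks any unoccupied number

# the pairs {1, k} and {2, k+1} used by Breaker in the proof of Lemma 3.1 below
k = 5
assert pairing_response([(1, k), (2, k + 1)], 2) == k + 1
```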
**Definition 2.7**.: Let \(k\geq 2\) be an integer and \(a_{1}x_{1}+\cdots+a_{k}x_{k}=y\) a linear equation. Suppose, at some point of the \(G^{*}(n,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game, Maker has claimed a set \(A\) of at least \(k\) integers. We call \(a_{1}\alpha_{1}+\cdots+a_{k}\alpha_{k}\) a \(k\)**-sum** for any \(k\) distinct integers \(\alpha_{1},\ldots,\alpha_{k}\in A\).
## 3. Proof of Theorem 1.1
Let \(k,\ell\) be integers with \(k\geq 2\) and \(\ell\geq 1\). We first show that \(f(k,1)=k+2\).
**Lemma 3.1**.: _For all integers \(k\geq 2\), we have \(f(k,1)=k+2\)._
Proof.: We first show that Maker wins the \(G(k+2,k,1)\) game.
Case 1: \(k=2\). Maker starts by choosing 2. Since \(2+2=4\) and \(1+1=2\), Maker wins the game in the next round by choosing either 1 or 4, whichever is available.
Case 2: \(k>2\). Maker starts by selecting 1. Notice that
\[1+1+\cdots+1=k\cdot 1=k,\]
\[1+1+\cdots+1+2=(k-1)\cdot 1+2=k+1,\]
and
\[1+1+\cdots+1+2+2=(k-2)\cdot 1+2\cdot 2=k+2.\]
If Breaker chooses \(k\) in the first round, then Maker chooses \(2\) in round \(2\) and wins the game in round \(3\) by choosing either \(k+1\) or \(k+2\). If Breaker does not choose \(k\) in round \(1\), then Maker can win the game in round \(2\) by choosing \(k\).
Now we show that Breaker wins the \(G(k+1,k,1)\) game. When \(\ell=1\), the only possible solutions to Equation (1.1) in \(\{1,\ldots,k+1\}\) are
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(1,1,\ldots,1,1,k)\]
and
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(1,1,\ldots,1,2,k+1).\]
If \(k=2\), then Breaker wins the game by the pairing strategy over \(\{1,2\}\). If \(k\geq 3\), then Breaker wins the game by the pairing strategy over \(\{1,k\}\) and \(\{2,k+1\}\).
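For small \(k\), the exhaustive `maker_wins` sketch from Section 1 confirms both halves of Lemma 3.1 directly (we ran such checks only for tiny boards, as the search grows quickly with \(n\)):

```python
# Breaker wins on [k+1] and Maker wins on [k+2], i.e. f(k,1) = k+2.
for k in (2, 3, 4):
    assert not maker_wins(k + 1, k)
    assert maker_wins(k + 2, k)
```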
By Lemmas 2.1 and 3.1, we have \(f(k,\ell)\leq[f(k,1)]^{\ell}=(k+2)^{\ell}\). It remains to show that \(f(k,\ell)\geq(k+2)^{\ell}\). This is true for \(\ell=1\) by Lemma 3.1. So we assume \(\ell\geq 2\). It suffices to show that Breaker wins the \(G\left((k+2)^{\ell}-1,k,\ell\right)\) game. Since the only solutions to \(x_{1}+\cdots+x_{k}=y\) in \(\{1,2,\ldots,k+1\}\) are
\[(x_{1},\ldots,x_{k-2},x_{k-1},x_{k},y)=(1,\ldots,1,1,1,k),\]
and
\[(x_{1},\ldots,x_{k-2},x_{k-1},x_{k},y)=(1,\ldots,1,1,2,k+1),\]
by Corollary 2.4, the only solutions to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\) in \(\{1,2,\ldots,(k+2)^{\ell}-1\}\) are
\[(x_{1},\ldots,x_{k-2},x_{k-1},x_{k},y)=(a,\ldots,a,a,a,ak^{\ell}),\]
and
\[(x_{1},\ldots,x_{k-2},x_{k-1},x_{k},y)=(b,\ldots,b,b,b2^{\ell},b(k+1)^{\ell}),\]
where \(a,b\in\{1,2,\ldots,2^{\ell}-1\}\). Notice that \(a\) and \(b\) are power-\(\ell\) free.
If \(k=2\), then Breaker wins the game by the pairing strategy over the sets \(\{a,a2^{\ell}\}\) where \(a\in\{1,2,\ldots,2^{\ell}-1\}\). If \(k\geq 3\), then Breaker wins the game by the pairing strategy over the sets \(\{a,ak^{\ell}\}\) and \(\{b2^{\ell},b(k+1)^{\ell}\}\) where \(a,b\in\{1,2,\ldots,2^{\ell}-1\}\). In these pairing strategies, if Maker chooses some \(a\) or \(b2^{\ell}\) so that \(ak^{\ell}>(k+2)^{\ell}-1\) or \(b(k+1)^{\ell}>(k+2)^{\ell}-1\), then Breaker arbitrarily chooses an available number in \(\{1,2,\ldots,(k+2)^{\ell}-1\}\).
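For \(k=\ell=2\), this pairing argument can also be verified exhaustively (a sketch of ours): on \([(k+2)^{\ell}-1]=[15]\), every solution of \(\sqrt{x_{1}}+\sqrt{x_{2}}=\sqrt{y}\) contains one of the pairs \(\{a,4a\}\) with \(a\in\{1,2,3\}\), so Breaker's pairing blocks Maker.

```python
import math

def radical_solutions(n):
    """Supports of solutions to sqrt(x1) + sqrt(x2) = sqrt(y) inside [n]."""
    sols = []
    for x1 in range(1, n + 1):
        for x2 in range(x1, n + 1):
            root = math.isqrt(x1*x2)
            if root*root == x1*x2 and x1 + x2 + 2*root <= n:
                sols.append({x1, x2, x1 + x2 + 2*root})
    return sols

pairs = [{a, 4*a} for a in (1, 2, 3)]  # {a, a*2**ell} with ell = 2
# every solution contains a full pair, so the pairing strategy blocks it
assert all(any(p <= sol for p in pairs) for sol in radical_solutions(15))
```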
## 4. Proof of Theorem 1.2
Let \(k,\ell\) be integers with \(k\geq 2\) and \(\ell\geq 1\). We first establish that \(f^{*}(k,1)=k^{2}+3\).
**Lemma 4.1**.: _For all integers \(k\geq 2\), we have \(f^{*}\left(k,1\right)\leq k^{2}+3\)._
Proof.: It suffices to show that Maker wins the \(G^{*}(k^{2}+3,k,1)\) game. Write \(n=k^{2}+3\). For \(i=1,2,\ldots,\lceil n/2\rceil\), let \(m_{i}\) denote the number selected by Maker in round \(i\), and for \(j=1,2,\ldots,\lfloor n/2\rfloor\), let \(b_{j}\) denote the number selected by Breaker in round \(j\).
We first consider the case that \(k=2\). Then \(k^{2}+3=7\). Maker starts by choosing \(m_{1}=1\). Then no matter what \(b_{1}\) is, there are three consecutive numbers in \(\{2,3,4,5,6,7\}\) available to Maker, say \(\{a,b,c\}\). Maker sets \(m_{2}=b\). Notice that \(1+a=b\) and \(1+b=c\). Since Breaker can only choose one of \(a\) and \(c\), Maker wins in round \(3\) by setting \(m_{3}=a\) or \(m_{3}=c\).
Now suppose \(k=3\). Then \(k^{2}+3=12\). Maker starts by choosing \(m_{1}=1\). We have \(4\) cases based on Breaker's choices.
Case 1: If \(b_{1}\neq 2\), then Maker chooses \(m_{2}=2\). Suppose Breaker has selected \(b_{2}\). Now consider the \(3\)-term arithmetic progressions of difference \(m_{1}+m_{2}=3\):
\[\{3,6,9\},\{4,7,10\},\{5,8,11\}.\]
At the start of round \(3\), Breaker has chosen two numbers and hence one of these \(3\)-term arithmetic progressions is available to Maker. Maker can set \(m_{3}\) equal to the middle number of the available
3-term arithmetic progression and win the game in round 4 by choosing either the smallest or the largest number of the same 3-term arithmetic progression.
Case 2: If \(b_{1}=2\), then Maker chooses \(m_{2}=3\). Suppose \(b_{2}\neq 4,8,12\). Since \(\{4,8,12\}\) is a 3-term arithmetic progression of difference \(m_{1}+m_{2}=4\), Maker can set \(m_{3}=8\) and win the game in round 4 by choosing either 4 or 12.
Case 3: If \(b_{1}=2\), then Maker chooses \(m_{2}=3\). Suppose \(b_{2}=4\) or 8. Then Maker sets \(m_{3}=5\). If \(b_{3}\neq 9\), then Maker sets \(m_{4}=9\). Since \(m_{1}+m_{2}+m_{3}=1+3+5=9=m_{4}\), Maker wins the game. Suppose \(b_{3}=9\). Then Maker sets \(m_{4}=6\). Since \(m_{1}+m_{2}+m_{4}=1+3+6=10\) and \(m_{1}+m_{3}+m_{4}=1+5+6=12\), Maker wins in round 5 by choosing either 10 or 12.
Case 4: If \(b_{1}=2\), then Maker chooses \(m_{2}=3\). Suppose \(b_{2}=12\). Then Maker sets \(m_{3}=4\). If \(b_{3}\neq 8\), then Maker sets \(m_{4}=8\). Since \(m_{1}+m_{2}+m_{3}=1+3+4=8=m_{4}\), Maker wins the game. Suppose \(b_{3}=8\). Then Maker sets \(m_{4}=5\). Since \(m_{1}+m_{2}+m_{4}=1+3+5=9\) and \(m_{1}+m_{3}+m_{4}=1+4+5=10\), Maker wins in round 5 by choosing either 9 or 10.
Finally, we consider that \(k\geq 4\). We start with an observation.
**Claim 1.** Since \(k\geq 4\), all the \(k\)-sums are at least
\[\sum_{i=1}^{k}i=\frac{1}{2}k^{2}+\frac{1}{2}k>2k.\]
We prove that Maker can win with the following strategy: if a \(k\)-sum is available to Maker, then Maker chooses the \(k\)-sum and win the game; otherwise Maker selects the smallest number available. By this strategy, Maker will choose the smallest numbers possible for the first \(k\) rounds and the smallest \(k\)-sum is \(m_{1}+\cdots+m_{k}\).
**Claim 2.**\(m_{i}\leq 2i-1\) for \(i=1,...,k\). Indeed, at the start of round \(i\), Maker and Breaker have together chosen \(2(i-1)=2i-2\) numbers. Hence, one of the numbers in \(\{1,2,\ldots,2i-1\}\) is still available to Maker. So by Maker's strategy, we have \(m_{i}\leq 2i-1\).
By Claim 2, we have
\[\sum_{i=1}^{k}m_{i}\leq 1+3+\cdots+2k-1=k^{2}\leq k^{2}+3.\]
If Breaker didn't choose \(m_{1}+\cdots+m_{k}\) during the first \(k\) rounds, then Maker chooses \(m_{1}+\cdots+m_{k}\) in round \(k+1\) and wins the game.
Now suppose that Breaker has selected \(m_{1}+\cdots+m_{k}\) during the first \(k\) rounds. Consider the middle of round \(k+1\) when Maker has chosen \(k+1\) numbers but Breaker has only chosen \(k\) numbers where \(s\), \(1\leq s\leq k\), of them are \(k\)-sums. Since there are \(2k+1\) numbers in \(\{1,2,\ldots,2k+1\}\) and Breaker has chosen only \(k\) numbers, we have \(m_{k+1}\leq 2k+1\) by Maker's strategy. Since \(m_{1},\ldots,m_{k+1}\) are distinct, the total number of \(k\)-sums is \(\binom{k+1}{k}=k+1\).
**Claim 3.** If Breaker has chosen \(s\)\(k\)-sums during the first \(k\) rounds and one of them is \(\sum_{i=1}^{k}m_{i}\), then \(m_{k+1-s+j}\leq 2(k+1-s+j)-1-j=2(k+1-s)+j-1\) for \(j=1,2,\ldots,s\). By Claim 1, the \(k\)-sums are greater than \(2k\). So if Breaker has chosen \(s\)\(k\)-sums, then Breaker has chosen at most \(k-s\) numbers in \(\{1,2,\ldots,2k-s+1\}\). By Maker's strategy, Maker has chosen \(k+1\) numbers in \(\{1,2,\ldots,2k-s+1\}\). If \(s=1\), then we have \(m_{k+1}\leq 2k\) and the claim is true. If \(s>1\), then by Maker's strategy, we have \(m_{k+1}>m_{k}>\cdots>m_{k+1-s+1}\). Since \(m_{k+1},\ldots,m_{k+1-s+1}\in\{1,2,\ldots,2k-s+1\}\), the claim is also true.
Now we split into two cases based on the value of \(s\) and what Breaker chooses in round \(k+1\).
Case 1: \(1\leq s\leq k-1\), or \(s=k\) and Breaker does not choose a \(k\)-sum in round \(k+1\). Then Breaker will have chosen at most \(k\) of the \(k\)-sums at the beginning of round \(k+2\). By Claim 2 and Claim
3, at the beginning of round \(k+2\), there exists an unclaimed \(k\)-sum whose value is at most
\[\sum_{i=1}^{k+1-s-2}m_{i}+\sum_{i=k+1-s}^{k+1}m_{i} \leq\sum_{i=1}^{k+1-s-2}(2i-1)+\sum_{j=0}^{s}[2(k+1-s)+j-1]\] \[= (k-s-1)^{2}+(s+1)2(k+1-s)+\frac{s(s-1)}{2}-1\] \[= k^{2}-\frac{1}{2}s^{2}+\frac{3}{2}s+2\leq k^{2}+3.\]
Hence Maker chooses this \(k\)-sum in round \(k+2\) and wins the \(G^{*}(k^{2}+3,k,1)\) game.
Case 2: \(s=k\) and Breaker chooses a \(k\)-sum in round \(k+1\). In this case, at the end of round \(k+1\), Breaker has chosen all possible \(k\)-sums from \(\{m_{1},\ldots,m_{k+1}\}\). By Claim 1, the \(k\)-sums are greater than \(2k\). Since \(k+2\leq 2k\) for \(k\geq 2\), Breaker didn't choose any number in \(\{1,2,\ldots,k+2\}\). So \(m_{i}=i\) for \(i=1,2,\ldots,k+1\), and the number \(k+2\) is still unclaimed. Notice that the largest \(k\)-sum before round \(k+2\) is
\[\sum_{i=2}^{k+1}m_{i}=\sum_{i=1}^{k+1}i-1=\frac{(k+1)(k+2)}{2}-1=\frac{1}{2}k^{2}+\frac{3}{2}k.\]
Setting \(m_{k+2}=k+2\), Maker now has two larger \(k\)-sums which are untouched by Breaker:
\[m_{k+2}+\sum_{i=2}^{k}m_{i}=k+2+\frac{k(k+1)}{2}-1=\frac{1}{2}k^{2}+\frac{3}{2 }k+1\]
and
\[m_{k+1}+m_{k+2}+\sum_{i=2}^{k-1}m_{i}=k+1+k+2+\frac{(k-1)k}{2}-1=\frac{1}{2}k^ {2}+\frac{3}{2}k+2.\]
Since \(k\geq 4\), we have
\[k^{2}+3\geq\frac{1}{2}k^{2}+\frac{3}{2}k+2.\]
Hence Maker can win the \(G^{*}(k^{2}+3,k,1)\) game in round \(k+3\).
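The greedy strategy above is easy to simulate. Here is a sketch (ours; it plays Maker's greedy strategy against a purely random Breaker, so it only illustrates the round-\((k+3)\) bound rather than proving it):

```python
import random
from itertools import combinations

def greedy_maker_game(k, seed=0):
    """Maker's greedy strategy from Lemma 4.1 on [k*k+3] vs a random Breaker;
    return the round in which Maker completes a k-sum."""
    rng, n = random.Random(seed), k*k + 3
    maker, taken = [], set()
    for rnd in range(1, n + 1):
        sums = ({sum(c) for c in combinations(maker, k)}
                if len(maker) >= k else set())
        open_sums = [s for s in sums if s <= n and s not in taken]
        move = (open_sums[0] if open_sums
                else min(v for v in range(1, n + 1) if v not in taken))
        maker.append(move)
        taken.add(move)
        if move in sums:      # Maker just claimed a k-sum of its own numbers
            return rnd
        free = [v for v in range(1, n + 1) if v not in taken]
        if free:
            taken.add(rng.choice(free))  # random Breaker reply
    return None

k = 5
assert greedy_maker_game(k) <= k + 3  # Lemma 4.1: Maker wins by round k+3
```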
**Lemma 4.2**.: _For all integers \(k\geq 2\), we have \(f^{*}\left(k,1\right)\geq k^{2}+3\)._
Proof.: It suffices to show that Breaker wins the \(G^{*}(k^{2}+2,k,1)\) game. Write \(n=k^{2}+2\). For \(i=1,2,\ldots,\lceil n/2\rceil\), let \(m_{i}\) denote the number selected by Maker in round \(i\), and for \(j=1,2,\ldots,\lfloor n/2\rfloor\), let \(b_{j}\) denote the number selected by Breaker in round \(j\).
We first consider \(k=2\). Then \(k^{2}+2=2^{2}+2=6\). If \(m_{1}=1\), then Breaker chooses \(b_{1}=4\). Now Breaker wins by the pairing strategy over \(\{2,3\}\) and \(\{5,6\}\). If \(m_{1}\neq 1\), then Breaker chooses \(b_{1}=1\). Now there are only two solutions available to Maker: \(2+3=5\) and \(2+4=6\). There are three cases:
Case 1: \(m_{1}=2\). Then Breaker wins by the pairing strategy over \(\{3,5\}\) and \(\{4,6\}\).
Case 2: \(m_{1}\neq 1,2\), \(b_{1}=1\), \(m_{2}=2\). Then Breaker wins by the pairing strategy over \(\{3,5\}\) and \(\{4,6\}\).
Case 3: \(m_{1}\neq 1,2\), \(b_{1}=1\), \(m_{2}\neq 2\). Then by choosing \(b_{2}=2\), Breaker wins because the smallest numbers now available to Maker are \(3\) and \(4\), and \(3+4=7>6\).
Now we consider \(k\geq 3\). Notice that we have \(k^{2}-1\geq 2k+2\) when \(k\geq 3\). We will prove that Breaker wins with the following strategy:
1. in each round \(i\in[k-1]\), Breaker chooses the smallest number available;
2. and in round \(k\), if there is an unclaimed number in \([2k-2]\), then Breaker chooses such an unclaimed number; otherwise, Breaker's strategy depends on the sum of the numbers in \([2k-2]\) claimed by Maker, which is denoted by \(A\): * If \(A\geq(k-1)^{2}+3\), then Breaker chooses the smallest numbers possible.
* If \(A=(k-1)^{2}+2\), then Breaker plays the pairing strategy over \(\{2k-1,k^{2}+2\}\).
* If \(A=(k-1)^{2}+1\), then Breaker plays the pairing strategy over \(\{2k-1,k^{2}+1\}\) and \(\{2k,k^{2}+2\}\).
* If \(A=(k-1)^{2}\), then Breaker plays the pairing strategy over \(\{2k-1,k^{2}\}\), \(\{2k,k^{2}+1\}\), and \(\{2k+1,k^{2}+2\}\).
Let \(a_{1}<a_{2}<a_{3}<\cdots<a_{s}\) with \(s\leq\lceil n/2\rceil\) be the numbers chosen by Maker when the game ends.
**Claim 1:** \(a_{i}\geq 2i-1\) for \(i=1,2,\ldots,k\), \(a_{k+1}\geq 2k\), and \(a_{k+2}\geq 2k+1\). Since \(a_{i}\geq 1=2\cdot 1-1\), this is true for \(i=1\). Now consider \(2\leq i\leq k\). By Breaker's strategy, Breaker can select at least \(i-1\) numbers in \(\{1,\ldots,2(i-1)\}\). So Maker can select at most \(i-1\) numbers in \(\{1,\ldots,2(i-1)\}\). Hence \(a_{i}\geq 2(i-1)+1=2i-1\). Finally, since Maker's numbers are distinct, \(a_{k+1}>a_{k}\geq 2k-1\) and \(a_{k+2}>a_{k+1}\geq 2k\), so \(a_{k+1}\geq 2k\) and \(a_{k+2}\geq 2k+1\).
**Claim 2:** If \(a_{k-1}>2k-2\), then Breaker wins. If this happens, then \(a_{k-1}\geq 2k-1\) and \(a_{k}\geq 2k\). Hence the smallest \(k\)-sum possible for Maker is
\[\sum_{i=1}^{k}a_{i}\geq 2k-1+2k+\sum_{i=1}^{k-2}(2i-1)=2k-1+2k+(k-2)^{2}=k^{2}+3 >k^{2}+2\]
and hence Breaker wins.
**Claim 3:** The smallest \(k\)-sum possible for Maker is \(\sum_{i=1}^{k}a_{i}\geq\sum_{i=1}^{k}(2i-1)=k^{2}\). So Maker needs one of \(k^{2}\), \(k^{2}+1\), and \(k^{2}+2\) to win.
**Claim 4:** If a \(k\)-sum does not contain all \(\{a_{1},...,a_{k-1}\}\), then Breaker wins. Indeed, if a \(k\)-sum does not contain all of \(\{a_{1},\ldots,a_{k-1}\}\), then the \(k\)-sum is at least
\[a_{k}+a_{k+1}+\sum_{i=1}^{k-2}a_{i}\geq 2k-1+2k+(k-2)^{2}=k^{2}+3>k^{2}+2.\]
We first suppose that after Maker has chosen \(m_{1},\ldots,m_{k}\), there is an unclaimed number in \([2k-2]\). In this case, Breaker sets \(b_{k}\) equal to some number in \([2k-2]\). Now Breaker has chosen \(k\) numbers in \([2k-2]\) which implies that Maker can choose at most \(k-2\) numbers in \([2k-2]\). Hence \(a_{k-1}>2k-2\). By Claim 2, Breaker wins.
Now assume that all the numbers in \([2k-2]\) are claimed in the middle of round \(k\) when Maker has chosen \(k\) numbers and Breaker has chosen \(k-1\) numbers. In this case, we must have \(a_{1},\ldots,a_{k-1}\in[2k-2]\) and hence \(\sum_{i=1}^{k-1}a_{i}=A\). We consider the solutions to \(x_{1}+\cdots+x_{k}=y\), where \(x_{1},\ldots,x_{k}\) are distinct, such that Breaker has not occupied any number in them. By Claim 4, if a \(k\)-sum does not contain all numbers in \(\{a_{1},\ldots,a_{k-1}\}\), then Breaker wins. So we have the following cases:
Case 1: If \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}\), then there are three solutions to \(x_{1}+\cdots+x_{k}=y\), where \(x_{1},\ldots,x_{k}\) are distinct, such that Breaker has not occupied any number in them: \(\{a_{1},\ldots,a_{k-1},2k-1,k^{2}\}\), \(\{a_{1},\ldots,a_{k-1},2k,k^{2}+1\}\), and \(\{a_{1},\ldots,a_{k-1},2k+1,k^{2}+2\}\). This is because if \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}\), then
\[a_{k}+\sum_{i=1}^{k-1}a_{i}\geq 2k-1+(k-1)^{2}=k^{2},\]
\[a_{k+1}+\sum_{i=1}^{k-1}a_{i}\geq 2k+(k-1)^{2}=k^{2}+1,\]
\[a_{k+2}+\sum_{i=1}^{k-1}a_{i}\geq 2k+1+(k-1)^{2}=k^{2}+2,\]
and
\[a_{s}+\sum_{i=1}^{k-1}a_{i}\geq 2k+2+(k-1)^{2}=k^{2}+3>k^{2}+2\]
for \(s\geq k+3\).
Case 2: If \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}+1\), then there are two solutions to \(x_{1}+\cdots+x_{k}=y\), where \(x_{1},\ldots,x_{k}\) are distinct, such that Breaker has not occupied any number in them: \(\{a_{1},\ldots,a_{k-1},2k-1,k^{2}+1\}\) and \(\{a_{1},\ldots,a_{k-1},2k,k^{2}+2\}\). This is because if \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}+1\), then
\[a_{k}+\sum_{i=1}^{k-1}a_{i}\geq 2k-1+(k-1)^{2}+1=k^{2}+1,\]
\[a_{k+1}+\sum_{i=1}^{k-1}a_{i}\geq 2k+(k-1)^{2}+1=k^{2}+2,\]
and
\[a_{s}+\sum_{i=1}^{k-1}a_{i}\geq 2k+1+(k-1)^{2}+1=k^{2}+3>k^{2}+2\]
for \(s\geq k+2\).
Case 3: If \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}+2\), then there is only one solution to \(x_{1}+\cdots+x_{k}=y\), where \(x_{1},\ldots,x_{k}\) are distinct, such that Breaker has not occupied any number in them: \(\{a_{1},\ldots,a_{k-1},2k-1,k^{2}+2\}\). This is because if \(A=\sum_{i=1}^{k-1}a_{i}=(k-1)^{2}+2\), then
\[a_{k}+\sum_{i=1}^{k-1}a_{i}\geq 2k-1+(k-1)^{2}+2=k^{2}+2,\]
and
\[a_{s}+\sum_{i=1}^{k-1}a_{i}\geq 2k+(k-1)^{2}+2=k^{2}+3>k^{2}+2\]
for \(s\geq k+1\).
In Case 1, Breaker uses the pairing strategy over \(\{2k-1,k^{2}\}\), \(\{2k,k^{2}+1\}\), and \(\{2k+1,k^{2}+2\}\). Since these sets are pairwise disjoint, Breaker wins. Similarly, in Case 2, Breaker uses the pairing strategy over \(\{2k-1,k^{2}+1\}\) and \(\{2k,k^{2}+2\}\); and in Case 3, Breaker uses the pairing strategy over \(\{2k-1,k^{2}+2\}\).
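For very small parameters the boundary established by these two lemmas can also be checked by exhaustive game-tree search. Lemma 4.2 gives \(f^{*}(2,1)\geq 7\); the following Python sketch (our own illustration, not part of the argument above) confirms by brute force that Breaker wins \(G^{*}(6,2,1)\) while Maker wins \(G^{*}(7,2,1)\), so \(f^{*}(2,1)=7=2^{2}+3\).

```python
from itertools import combinations
from functools import lru_cache

def solutions(n, k):
    """All solution sets {x_1,...,x_k,y} with distinct x_i and x_1+...+x_k = y <= n."""
    return [frozenset(xs) | {sum(xs)}
            for xs in combinations(range(1, n + 1), k) if sum(xs) <= n]

def maker_wins(n, k):
    sols = solutions(n, k)
    @lru_cache(maxsize=None)
    def win(maker, breaker, makers_turn):
        if any(s <= maker for s in sols):   # Maker occupies a full solution set
            return True
        free = [x for x in range(1, n + 1) if x not in maker | breaker]
        if not free:
            return False
        if makers_turn:
            return any(win(maker | {x}, breaker, False) for x in free)
        return all(win(maker, breaker | {x}, True) for x in free)
    return win(frozenset(), frozenset(), True)

assert not maker_wins(6, 2)   # Breaker wins G*(6,2,1)
assert maker_wins(7, 2)       # Maker wins G*(7,2,1), so f*(2,1) = 7 = 2^2 + 3
```

The search is only feasible for tiny boards, but it provides an independent check of the case analysis in the proof above.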
By Lemmas 2.1, 4.1 and 4.2, we have \(f^{*}(k,\ell)\leq[f^{*}(k,1)]^{\ell}=(k^{2}+3)^{\ell}\). It remains to show that \(f^{*}(k,\ell)\geq(k^{2}+3)^{\ell}\) for all \(\ell\geq 2\). It suffices to show that Breaker wins the \(G^{*}((k^{2}+3)^{\ell}-1,k,\ell)\) game. For all \(c\in\{1,2,\ldots,2^{\ell}-1\}\), let
\[A(c)=\{c\cdot 1^{\ell},c\cdot 2^{\ell},\ldots,c\cdot(k^{2}+2)^{\ell}\}\cap\{1,2, \ldots,(k^{2}+3)^{\ell}-1\}.\]
Notice that if \(c,c^{\prime}\in\{1,2,\ldots,2^{\ell}-1\}\) with \(c\neq c^{\prime}\), then \(A(c)\cap A(c^{\prime})=\emptyset\). By Corollary 2.4, each solution to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=y^{1/\ell}\) in \(\{1,2,\ldots,(k^{2}+3)^{\ell}-1\}\) with \(x_{1},\ldots,x_{k}\) distinct belongs to \(A(c)\) for some \(c\in\{1,2,\ldots,2^{\ell}-1\}\).
Let \(\mathcal{B}\) be a Breaker's winning strategy for the \(G^{*}(k^{2}+2,k,1)\) game. We define a Breaker's strategy for the \(G^{*}((k^{2}+3)^{\ell}-1,k,\ell)\) game recursively. For rounds \(i=1,2,\ldots\), let \(m_{i}\) be the number chosen by Maker and let \(b_{i}\) be the number chosen by Breaker. Let \(m_{1}=c_{1}a_{1}^{\ell}\) where \(c_{1}\) is power-\(\ell\) free. If \(\mathcal{B}\) tells Breaker to choose \(\alpha_{1}\) for the \(G^{*}(k^{2}+2,k,1)\) game given that Maker has selected \(a_{1}\), then Breaker sets \(b_{1}=c_{1}\alpha_{1}^{\ell}\). Consider round \(i\geq 2\). Suppose Maker has chosen \(m_{1}=c_{1}a_{1}^{\ell},m_{2}=c_{2}a_{2}^{\ell},\ldots,m_{i}=c_{i}a_{i}^{\ell}\) and Breaker has selected \(b_{1}=c_{1}\alpha_{1}^{\ell},b_{2}=c_{2}\alpha_{2}^{\ell},\ldots,b_{i-1}=c_{i-1}\alpha_{i-1}^{\ell}\). Let \(j_{1},j_{2},\ldots,j_{s}\in\{1,\ldots,i-1\}\) be all the indices such that
\[c_{j_{1}}=c_{j_{2}}=\cdots=c_{j_{s}}=c_{i}.\]
If \(\mathcal{B}\) tells Breaker to choose \(\alpha_{i}\) for the \(G^{*}(k^{2}+2,k,1)\) game given that Maker has selected \(a_{j_{1}},a_{j_{2}},\ldots,a_{j_{s}}\), \(a_{i}\) and Breaker has selected \(\alpha_{j_{1}},\alpha_{j_{2}},\ldots,\alpha_{j_{s}}\), then Breaker sets \(b_{i}=c_{i}\alpha_{i}^{\ell}\).
Since \(\mathcal{B}\) is a winning strategy for Breaker, Breaker can stop Maker from completing a solution set from each \(A(c)\) and hence wins the game.
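The lifting step above hinges on the unique factorisation \(m=c\,a^{\ell}\) with \(c\) power-\(\ell\) free. A minimal Python sketch of this decomposition (our own hypothetical helper, named `power_free_split` for illustration):

```python
# Sketch: the unique decomposition m = c * a**l with c free of l-th powers,
# as used in the strategy-lifting argument above.
def power_free_split(m, l):
    a, d = 1, 2
    while d ** l <= m:
        while m % d ** l == 0:   # divide out every l-th power of d
            m //= d ** l
            a *= d
        d += 1
    return m, a                  # returns (c, a) with the original m == c * a**l

print(power_free_split(72, 2))   # 72 = 2 * 6**2 -> (2, 6)
```

Trial division up to \(m^{1/\ell}\) suffices, since any prime \(q\) with \(q^{\ell}\mid m\) satisfies \(q^{\ell}\leq m\).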
## 5. Proof of Theorem 1.3
Let \(k,\ell\) be integers with \(k\geq 2\) and \(\ell\leq-1\). We start with an observation.
**Lemma 5.1**.: _If \(n<2k^{-\ell}\) and Maker does not choose \(1\) in the first round, then Breaker wins the \(G(n,k,\ell)\) game._
Proof.: Suppose \(n<2k^{-\ell}\) and Maker does not choose \(1\) in the first round. We show that Breaker wins the \(G(n,k,\ell)\) game by choosing \(1\) in the first round. Suppose, for a contradiction, that Maker wins. Let \((x_{1},\ldots,x_{k},y)=(a_{1},\ldots,a_{k},b)\) be a solution to Equation (1.1) in \(\{1,2,\ldots,n\}\) completed by Maker. Then since \(a_{i}\leq n<2k^{-\ell}\) for all \(i=1,\ldots,k\), we have
\[b^{1/\ell}=a_{1}^{1/\ell}+\cdots+a_{k}^{1/\ell}>k(2k^{-\ell})^{1/\ell}=2^{1/ \ell}.\]
So \(b<2\), i.e. \(b=1\), which is impossible since Breaker has claimed \(1\) in the first round.
Now we prove the lower bound in Theorem 1.3.
**Lemma 5.2**.: _If \(k\geq 1/(2^{-1/\ell}-1)\), then \(f(k,\ell)\geq(k+1)^{-\ell}\)._
Proof.: Suppose \(k\geq 1/(2^{-1/\ell}-1)\). It suffices to show that Breaker wins the \(G((k+1)^{-\ell}-1,k,\ell)\) game. By straightforward calculation, we have
\[(k+1)^{-\ell}-1<2k^{-\ell}.\]
Hence, by Lemma 5.1, we can assume that Maker chooses \(1\) in the first round; moreover, by the computation in the proof of Lemma 5.1, any solution in \(\{1,2,\ldots,(k+1)^{-\ell}-1\}\) must have \(y=1\). Now we show that the only solution to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=1\) in \(\{1,2,\ldots,(k+1)^{-\ell}-1\}\) is \((x_{1},\ldots,x_{k})=(k^{-\ell},\ldots,k^{-\ell})\). This would imply that Breaker can choose \(k^{-\ell}\) in the first round and win the game. Let \(a_{1},\ldots,a_{k}\in\{1,2,\ldots,(k+1)^{-\ell}-1\}\) with
\[a_{1}^{1/\ell}+\cdots+a_{k}^{1/\ell}=1,\]
and \(a_{1}\leq\ldots\leq a_{k}\). Since the sum of a rational number and an irrational number is irrational, \(a_{1}^{1/\ell},\ldots,a_{k}^{1/\ell}\) are rational numbers. Since \(a_{1},\ldots,a_{k}\in\{1,2,\ldots,(k+1)^{-\ell}-1\}\), we have \(a_{1},\ldots,a_{k}\in\{1,2^{-\ell},\ldots,k^{-\ell}\}\). If \(a_{i}<k^{-\ell}\) for some \(i\in[k]\), then
\[1=a_{1}^{1/\ell}+\cdots+a_{k}^{1/\ell}>k(k^{-\ell})^{1/\ell}=1\]
which is impossible. Hence the only solution to \(x_{1}^{1/\ell}+\cdots+x_{k}^{1/\ell}=1\) in \(\{1,2,\ldots,(k+1)^{-\ell}-1\}\) is \((x_{1},\ldots,x_{k})=(k^{-\ell},\ldots,k^{-\ell})\) and Breaker wins the \(G((k+1)^{-\ell}-1,k,\ell)\) game.
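For concreteness, the uniqueness claim in this proof can be brute-forced for small parameters. The sketch below (our own check) takes \(\ell=-2\) and \(k=3\), which satisfies \(k\geq 1/(2^{-1/\ell}-1)\approx 2.41\); only perfect squares can contribute, and the unique solution in \(\{1,\ldots,15\}\) is indeed \((9,9,9)=(k^{-\ell},\ldots,k^{-\ell})\).

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import isqrt

# Brute-force check for l = -2, k = 3, so (k+1)^{-l} - 1 = 15; by the proof
# above only perfect squares x have rational x^{-1/2}.
l, k = -2, 3
n = (k + 1) ** (-l) - 1
squares = [x for x in range(1, n + 1) if isqrt(x) ** 2 == x]
sols = [xs for xs in combinations_with_replacement(squares, k)
        if sum(Fraction(1, isqrt(x)) for x in xs) == 1]
print(sols)   # [(9, 9, 9)]
```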
Now we prove the upper bound in Theorem 1.3. By Lemma 2.1, \(f(k,\ell)\leq[f(k,-1)]^{-\ell}\). Hence, it suffices to show that for all \(k\geq 4\), \(f(k,-1)\leq k+2\). The next two lemmas will establish this.
**Lemma 5.3**.: _If \(k+1\neq p\) or \(p^{2}\) for any prime \(p\), then \(f(k,-1)\leq k+1\)._
Proof.: Suppose \(k+1\neq p\) or \(p^{2}\) for any prime \(p\). We will prove that Maker wins the \(G(k+1,k,-1)\) game. In this case, we have \(k+1=AB\) for some integers \(A>1\) and \(B>1\) with \(A\neq B\). Then we have \(A(B-1)\neq(A-1)B\) (since \(A\neq B\)), and both \(A(B-1)<k+1\) and \((A-1)B<k+1\). Consider the following solutions in \(\{1,2,\ldots,k+1\}\):
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(k,k,\ldots,k,k,1),\]
\[(x_{1},\ldots,x_{(A-1)B},x_{(A-1)B+1},\ldots,x_{k},y)=(AB,\ldots,AB,A(B-1), \ldots,A(B-1),1),\]
and
\[(x_{1},\ldots,x_{A(B-1)},x_{A(B-1)+1},\ldots,x_{k},y)=(AB,\ldots,AB,(A-1)B, \ldots,(A-1)B,1).\]
Based on these solutions, Maker wins the \(G(k+1,k,-1)\) game using the following strategy: Maker chooses \(1\) in the first round; if Breaker does not choose \(k\) in the first round, then Maker chooses \(k\) in the second round to win the game; otherwise, Maker will choose \(k+1=AB\) in the second round and win the game by choosing either \(A(B-1)\) or \((A-1)B\) in the third round.
**Lemma 5.4**.: _If \(k+1=p\) or \(p^{2}\) for some prime \(p\geq 5\), then \(f(k,-1)\leq k+2\)._
Proof.: Suppose \(k+1=p\) or \(p^{2}\) for some prime \(p\geq 5\). We show that Maker wins the \(G(k+2,k,-1)\) game.
Since \(k+1\geq 5\) is odd, \(k\) is even and \(k\geq 4\). Hence \((k+2)/2\neq k\). Consider the following solutions in \(\{1,2,\ldots,k+2\}\):
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(k,k,\ldots,k,k,1),\]
\[(x_{1},\ldots,x_{(k-2)/2},x_{(k-2)/2+1},\ldots,x_{k},y)=(k-2,\ldots,k-2,k+2, \ldots,k+2,1),\]
and
\[(x_{1},x_{2},x_{3},\ldots,x_{k},y)=((k+2)/2,(k+2)/2,k+2,\ldots,k+2,1).\]
Based on these solutions, Maker wins the \(G(k+2,k,-1)\) game by the following strategy: Maker chooses \(1\) in the first round; if Breaker does not choose \(k\) in the first round, then Maker chooses \(k\) in the second round to win the game; otherwise, Maker will choose \(k+2\) in the second round and win the game by choosing either \((k+2)/2\) or \(k-2\) in the third round.
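Both constructions are easy to sanity-check with exact rational arithmetic. The following Python sketch (our own verification, not part of the proofs) confirms the solution families of Lemmas 5.3 and 5.4 for the sample parameters \(k+1=AB=15\) and \(k+1=11\):

```python
from fractions import Fraction

def check(xs, k):
    assert len(xs) == k and sum(Fraction(1, x) for x in xs) == 1

A, B = 3, 5                      # Lemma 5.3: k + 1 = AB with A != B
k = A * B - 1
check([k] * k, k)
check([A * B] * ((A - 1) * B) + [A * (B - 1)] * (B - 1), k)
check([A * B] * (A * (B - 1)) + [(A - 1) * B] * (A - 1), k)

k = 10                           # Lemma 5.4: k + 1 = 11 is an odd prime
check([k] * k, k)
check([k - 2] * ((k - 2) // 2) + [k + 2] * ((k + 2) // 2), k)
check([(k + 2) // 2] * 2 + [k + 2] * (k - 2), k)
print("all six families solve 1/x_1 + ... + 1/x_k = 1/1")
```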
### Remarks
The inequality in Lemma 5.4 becomes equality when \(k+1=p\) for some odd prime \(p\).
**Theorem 5.5**.: _If \(k+1=p\) for some odd prime \(p\), then \(f(k,-1)=k+2\)._
Proof.: Suppose \(k+1=p\) for some odd prime \(p\). By Lemma 5.4, we have \(f(k,-1)\leq k+2\). It remains to show that \(f(k,-1)\geq k+2\). To do this, it suffices to show that Breaker wins the \(G(k+1,k,-1)\) game.
Case 1: \(k+1=3\). The only solution to \(1/x_{1}+\cdots+1/x_{k}=1/y\) in \(\{1,2,3\}\) with \(x_{1},\ldots,x_{k}\) not necessarily distinct is \((x_{1},x_{2},y)=(2,2,1)\). Hence Breaker can win by choosing either \(1\) or \(2\) in the first round.
Case 2: \(k+1\geq 5\). By Lemma 5.1, if Maker does not choose \(1\) in the first round, then Breaker wins. So we assume that Maker chooses \(1\) in the first round. Now we show that Breaker wins by choosing \(k\) in the first round. It suffices to show that \(\{1,2,\ldots,k-1,k+1\}\) does not have a solution to \(1/x_{1}+\cdots+1/x_{k}=1/1\) where \(x_{1}\), \(\ldots\), \(x_{k}\) are not necessarily distinct. Suppose \((x_{1},x_{2},\ldots,x_{k-1},x_{k})=(a_{1},a_{2},\ldots,a_{k-1},a_{k})\) is a solution in \(\{1,2,\ldots,k-1,k+1\}\). We show that \(a_{k}=k+1\). Suppose not. Then \(a_{i}<k\) for all \(i=1,2,\ldots,k\). So
\[\frac{1}{a_{1}}+\cdots+\frac{1}{a_{k}}>\frac{1}{k}+\cdots+\frac{1}{k}=\frac{1} {1}\]
which is a contradiction. Hence \(a_{k}=k+1\). Now we have
\[1=\frac{A}{k+1}+\sum_{i=1}^{k-A}\frac{1}{a_{i}}\]
where \(A\in\{1,2,\ldots,k-1\}\) and \(a_{i}<k\) for all \(i=1,\ldots,k-A\). Rearranging the equation, we get
\[\sum_{i=1}^{k-A}\frac{1}{a_{i}}=\frac{p-A}{p}.\]
Since \(\gcd(p-A,p)=1\), the prime \(p\) divides the least common multiple of \(a_{1},\ldots,a_{k-A}\), and therefore \(p\) divides \(a_{i}\) for some \(i\); this is a contradiction because \(a_{i}<p\) for all \(i\). Hence Breaker wins the game.
We are unable to verify that \(f(k,-1)=k+2\) when \(k+1=p^{2}\) for some odd prime \(p\). However, we believe this should be the case.
**Conjecture 5.6**.: If \(k+1=p^{2}\) for some odd prime \(p\), then \(f(k,-1)=k+2\).
## 6. Proof of Theorem 1.4
Let \(k,\ell\) be integers with \(k\geq 2\) and \(\ell\leq-1\). By Lemma 2.1, we have \(f^{*}(k,\ell)\leq[f^{*}(k,-1)]^{-\ell}\). It remains to show that \(f^{*}(k,-1)=\exp(O(k\log k))\). By Theorem 2.5, it suffices to find a finite set \(A\subseteq\mathbb{N}\) such that Maker wins the \(G^{*}(A,x_{1}+\cdots+x_{k}=y)\) game and the least common multiple of \(A\) is small.
**Lemma 6.1**.: _Let \(k\geq 4\) be an integer and let \(A=\{1,\ldots,2k+1\}\cup\{k^{2}-k+1,\ldots,k^{2}+2k\}\). Then Maker wins the \(G^{*}(A,x_{1}+\cdots+x_{k}=y)\) game._
Proof.: Let \(k\geq 4\). For \(i=1,\ldots,k+3\), let \(m_{i}\) be the number selected by Maker in round \(i\) and let \(b_{i}\) be the number selected by Breaker in round \(i\).
Consider the following strategy for Maker:
1. Set \(m_{1}=1\) and \(M_{1}=\{\{2,3\},\{4,5\},\ldots,\{2k,2k+1\}\}\).
2. For \(i=2,\ldots,k+1\), if \(b_{i-1}\in B\) for some \(B\in M_{i-1}\), then set \(m_{i}\in B\backslash\{b_{i-1}\}\) and \(M_{i}=M_{i-1}\backslash\{B\}\); if \(b_{i-1}\notin B\) for any \(B\in M_{i-1}\), then set \(m_{i}=\min_{S\in M_{i-1}}\min S\) and \(M_{i}=M_{i-1}\backslash\{S^{\prime}\}\) where \(m_{i}\in S^{\prime}\).
3. In round \(k+2\), if there exists a subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\) such that \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{ k+1}\}\), then set \(m_{k+2}=a_{1}+\cdots+a_{k}\). Otherwise, set \(m_{k+2}=2k+1\), and then, in round \(k+3\), set \(m_{k+3}=a_{1}+\cdots+a_{k}\) where \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+2}\}\) has size \(k\) with \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{ k+2}\}\).
In Step (3), Maker wins immediately in the first case. So it remains to show that if no subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\) satisfies \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{k+1}\}\), then Maker can set \(m_{k+2}=2k+1\) in round \(k+2\) and there exists a subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+2}\}\) of size \(k\) such that \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{k+2}\}\).
Suppose, at the beginning of round \(k+2\), no subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\) satisfies \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{ k+1}\}\). First note that by Maker's strategy, for all \(i=2,\ldots,k+1\), \(m_{i}=2(i-1)\) or \(2(i-1)+1\). So for all subsets \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\), we have
\[a_{1}+\cdots+a_{k}\geq 1+2+4+\cdots+2(k-1)=k^{2}-k+1\]
and
\[a_{1}+\cdots+a_{k}\leq 3+5+\cdots+2k+1=(k+1)^{2}-1=k^{2}+2k.\]
So if no subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\) satisfies \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{k+1}\}\), then \(b_{1},\ldots,b_{k+1}\notin\{1,\ldots,2k+1\}\). Now according to Maker's strategy, we have \(m_{1}=1\) and \(m_{i}=2(i-1)\) for all \(i=2,\ldots,k+1\). This implies that at the beginning of round \(k+2\), \(2k+1\) is available to Maker and hence Maker can set \(m_{k+2}=2k+1\). At the same time, for all subsets \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+1}\}\) of size \(k\), we have \(a_{1}+\cdots+a_{k}\leq 2+4+\cdots+2k=k^{2}+k\); since Breaker's \(k+1\) choices must block all \(k+1\) of these subset sums, we get \(b_{1},\ldots,b_{k+1}\leq k^{2}+k\). By setting \(m_{k+2}=2k+1\), there are at least two subsets of \(\{m_{1},\ldots,m_{k+2}\}\) of size \(k\) whose sum is greater than \(k^{2}+k\). They are \(\{2,4,\ldots,2(k-1),2k+1\}\) and \(\{2,4,\ldots,2(k-2),2k,2k+1\}\). The first subset sums to \(k^{2}+k+1<k^{2}+2k\) and the second one sums to \(k^{2}+k+3<k^{2}+2k\). Since Breaker can only occupy one of them in round \(k+2\), there exists a subset \(\{a_{1},\ldots,a_{k}\}\subseteq\{m_{1},\ldots,m_{k+2}\}\) of size \(k\) such that \(a_{1}+\cdots+a_{k}\in\{k^{2}-k+1,\ldots,k^{2}+2k\}\backslash\{b_{1},\ldots,b_{k+2}\}\). This proves that Maker wins the \(G^{*}(A,x_{1}+\cdots+x_{k}=y)\) game.
Let \(k\geq 4\) be an integer and let \(A:=\{1,\dots,2k+1\}\cup\{k^{2}-k+1,\dots,k^{2}+2k\}\). By Theorem 2.5 and Lemma 6.1, we have
\[f^{*}(k,-1)\leq \mathrm{lcm}\{n:n\in A\}\] \[\leq \mathrm{lcm}\{1,...,2k+1\}\mathrm{lcm}\{k^{2}-k+1,...,k^{2}+2k\}\] \[\leq \mathrm{lcm}\{1,...,2k+1\}(k^{2}+2k)^{3k}\] \[= e^{(2+o(1))k}e^{3k\ln(k^{2}+2k)}.\]
Hence we have \(f^{*}(k,-1)=\exp(O(k\ln k))\).
### Remarks
By exhaustive search, we are able to find the exact value of \(f^{*}(k,-1)\) for \(k=2\).
**Proposition 6.2**.: \(f^{*}(2,-1)=36\)_._
Proof.: We first show that Maker wins the \(G^{*}(36,2,-1)\) game. Consider the following solutions to \(1/x_{1}+1/x_{2}=1/y\) in \(\{1,2,\dots,36\}\) with \(x_{1}\neq x_{2}\): \((x_{1},x_{2},y)=(4,12,3)\), \((6,12,4)\), \((12,36,9)\), and \((18,36,12)\). We construct a rooted binary tree using these solutions as follows:
Figure 1. Rooted Binary Tree for Solutions to \(1/x_{1}+1/x_{2}=1/y\)

In Figure 1, each path from the root \(12\) to a leaf is a solution set to \(1/x_{1}+1/x_{2}=1/y\). It is easy to see that Maker can win this game by doing the following:
1. Maker selects the root in round \(1\).
2. In round \(2\), Maker selects a vertex that is adjacent to the root such that both of its children are untouched by Breaker.
3. In round \(3\), Maker chooses a child of the vertex that Maker selected in round \(2\).
Now we show that Breaker wins the \(G^{*}(35,2,-1)\) game. One can check that there are \(13\) solutions to \(1/x_{1}+1/x_{2}=1/y\) in \(\{1,2,\dots,35\}\): \(\{2,3,6\}\), \(\{3,4,12\}\), \(\{4,6,12\}\), \(\{4,5,20\}\), \(\{5,6,30\}\), \(\{6,8,24\}\), \(\{6,9,18\}\), \(\{6,10,15\}\), \(\{8,12,24\}\), \(\{10,14,35\}\), \(\{10,15,30\}\), \(\{12,20,30\}\), and \(\{12,21,28\}\). Breaker wins the game using the pairing strategy over \(\{4,12\}\), \(\{8,24\}\), \(\{10,15\}\), \(\{2,3\}\), \(\{5,20\}\), \(\{6,30\}\), \(\{9,18\}\), \(\{14,35\}\), \(\{20,30\}\), and \(\{21,28\}\).
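The list of \(13\) solutions is easy to reproduce by machine; a short Python sketch using exact rational arithmetic (our own check):

```python
from fractions import Fraction

# Enumerate all (x1, x2, y) with x1 < x2 and 1/x1 + 1/x2 = 1/y in {1,...,35}.
sols = []
for x1 in range(2, 36):
    for x2 in range(x1 + 1, 36):
        s = Fraction(1, x1) + Fraction(1, x2)
        if s.numerator == 1 and s.denominator <= 35:
            sols.append((x1, x2, s.denominator))
print(len(sols), sols)   # 13 triples, matching the list above
```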
For general \(k\), Theorem 1.4 only provides an upper bound for \(f^{*}(k,-1)\). It is trivially true that \(f^{*}(k,-1)\geq 2k+1\) because Maker needs to occupy at least \(k+1\) numbers to win. However, we don't have a nontrivial lower bound.
**Problem 6.3**.: Find a nontrivial lower bound for \(f^{*}\left(k,-1\right)\).
## 7. Equations with Arbitrary Coefficients
In this section, we briefly discuss the Maker-Breaker Rado games for the equation
\[a_{1}x_{1}+\dots+a_{k}x_{k}=y, \tag{7.1}\]
where \(k,a_{1},\dots,a_{k}\) are positive integers with \(k\geq 2\) and \(a_{1}\geq a_{2}\geq\dots\geq a_{k}\). Write \(w:=a_{1}+\dots+a_{k}\), and \(w^{*}:=\sum_{i=1}^{k}(2i-1)a_{i}\). Let \(f(a_{1}x_{1}+\dots+a_{k}x_{k}=y)\) be the smallest positive integer \(n\) such
that Maker wins the \(G(n,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game and let \(f^{*}(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) be the smallest positive integer \(n\) such that Maker wins the \(G^{*}(n,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game.
Hopkins and Schaal [16], and Guo and Sun [11] proved that if \(\{1,2,\ldots,a_{k}w^{2}+w-a_{k}\}\) is partitioned into two classes, then one of them contains a solution to Equation (7.1) with \(x_{1},\ldots,x_{k}\) not necessarily distinct; and there exists a partition of \(\{1,2,\ldots,a_{k}w^{2}+w-a_{k}-1\}\) into two classes such that neither contains a solution to Equation (7.1) with \(x_{1},\ldots,x_{k}\) not necessarily distinct. By these results and strategy stealing, we have \(f(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\leq a_{k}w^{2}+w-a_{k}\). The next theorem shows that, in fact, \(f(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) is much smaller than \(a_{k}w^{2}+w-a_{k}\).
**Theorem 7.1**.: _For all integers \(k\geq 2\), we have \(w+2a_{k}\leq f(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\leq w+a_{k-1}+a_{k}\)._
Proof.: The case that \(k=2\) and \(a_{1}=a_{2}=1\) is a special case of Lemma 3.1. So we assume that \(k>2\) or \(k=2\) but \(a_{1}\geq 2\). Then \(w>2\).
We first show that Maker wins the \(G(w+a_{k-1}+a_{k},a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game. Maker chooses \(1\) in round \(1\). If Breaker does not choose \(w\) in round \(1\), then Maker wins in round \(2\) by choosing \(w\). If Breaker chooses \(w\) in round \(1\), then Maker chooses \(2\) in round \(2\) and either \(w+a_{k}\) or \(w+a_{k-1}+a_{k}\) in round \(3\) to win the game.
Now we show that Breaker wins the \(G(w+2a_{k}-1,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game. The only solutions to Equation (7.1) in \(\{1,2,\ldots,w+2a_{k}-1\}\) are
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(1,1,\ldots,1,1,w)\]
and
\[(x_{1},x_{2},\ldots,x_{k-1},x_{k},y)=(1,1,\ldots,1,2,w+a_{k}).\]
Now Breaker wins by the pairing strategy over \(\{1,w\}\) and \(\{2,w+a_{k}\}\). Note that if \(a_{i}=a_{k}\) for some \(i\in\{1,2,\ldots,k-1\}\), then \((x_{1},\ldots,x_{i-1},x_{i},x_{i+1},\ldots,x_{k},y)=(1,\ldots,1,2,1,\ldots,1,w+ a_{i})\) is also a solution, but Breaker can still win the game by the pairing strategy because \(w+a_{i}=w+a_{k}\).
The next theorem provides lower and upper bounds for \(f^{*}(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\).
**Theorem 7.2**.: _For all integers \(k\geq 4\), we have_
\[w^{*}\leq f^{*}(a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\leq w^{*}+(2k-2)(a_{1}-a_{k}) +(k+3)a_{k-2}.\]
Proof.: Let \(k\geq 4\) be an integer and write \(W=w^{*}+(2k-2)(a_{1}-a_{k})+(k+3)a_{k-2}\). We first show that Breaker wins the \(G^{*}(w^{*}-1,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game by choosing the smallest number available each round. Suppose, for a contradiction, that Maker wins. Let \(\alpha_{1}<\alpha_{2}<\cdots<\alpha_{s}\), where \(s\geq k+1\), be the numbers chosen by Maker after winning the game. Then by Breaker's strategy, we have \(\alpha_{i}\geq 2i-1\) for all \(i=1,2,\ldots,k\). By the rearrangement inequality [13], the smallest \(k\)-sum is
\[\sum_{i=1}^{k}a_{i}\alpha_{i}\geq\sum_{i=1}^{k}(2i-1)a_{i}=w^{*}\]
which is a contradiction.
Now we show that Maker wins the \(G^{*}(W,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game. We split it into two cases.
Case 1: \(a_{1}=a_{k}=c\) for some \(c\). Since the coefficients of \(x_{1},\ldots,x_{k}\) are the same, Maker's strategy defined in the proof of Lemma 4.1 still applies by multiplying the \(k\)-sums in the proof of Lemma 4.1 by \(c\). So Maker wins the \(G^{*}(ck^{2}+3c,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game. Since
\[W=w^{*}+(2k-2)(a_{1}-a_{k})+(k+3)a_{k-2}=ck^{2}+ck+3c>ck^{2}+3c,\]
Maker wins the \(G^{*}(W,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game.
Case 2: \(a_{1}>a_{k}\). We will show that Maker wins the game with the following strategy:
1. Maker chooses the smallest number available each round for the first \(k+1\) rounds;
2. and then chooses an available \(k\)-sum in round \(k+2\).
For \(i=1,2,\ldots,k+1\), let \(m_{i}\) be the number chosen by Maker in round \(i\). Then by Maker's strategy, we have \(i\leq m_{i}\leq 2i-1\) for all \(i=1,2,\ldots,k+1\).
Since \(a_{1}>a_{k}\), there exists \(t\in\{2,3,\ldots,k\}\) such that \(a_{t}<a_{t-1}\). By the rearrangement inequality, we have the following \(k\) distinct \(k\)-sums involving only \(m_{1},\ldots,m_{k}\):
\[(a_{t}m_{t+j}+a_{t+j}m_{t})-(a_{t}m_{t}+a_{t+j}m_{t+j})+\sum_{i=1}^{k}a_{i}m_{ i},\text{ where }j=0,1,\ldots,k-t\]
and
\[(a_{t-j^{\prime}}m_{k}+a_{k}m_{t-j^{\prime}})-(a_{t-j^{\prime}}m_{t-j^{\prime} }+a_{k}m_{k})+\sum_{i=1}^{k}a_{i}m_{i},\text{ where }j^{\prime}=1,2,\ldots,t-1.\]
Among these distinct \(k\)-sums, the smallest is \(\sum_{i=1}^{k}a_{i}m_{i}\) and the largest is
\[(a_{1}m_{k}+a_{k}m_{1})-(a_{1}m_{1}+a_{k}m_{k})+\sum_{i=1}^{k}a_{i}m_{i}=a_{1} m_{k}+\left(\sum_{i=2}^{k-1}a_{i}m_{i}\right)+a_{k}m_{1}. \tag{7.2}\]
Since \(k\geq 4\), there are two terms of the form \(a_{i}m_{i}\), \(i\in\{2,\ldots,k-1\}\), in the middle of the right hand side of Equation (7.2). Replacing \(m_{k-1}\) with \(m_{k+1}\) and replacing \(m_{k-2}\) with \(m_{k+1}\), we get two larger and distinct \(k\)-sums:
\[a_{1}m_{k}+\left(\sum_{i=2}^{k-2}a_{i}m_{i}\right)+a_{k-1}m_{k+1}+a_{k}m_{1}\]
and
\[a_{1}m_{k}+\left(\sum_{i=2}^{k-3}a_{i}m_{i}\right)+a_{k-2}m_{k+1}+a_{k-1}m_{k- 1}+a_{k}m_{1}.\]
The largest of these \(k\)-sums is
\[a_{1}m_{k}+\left(\sum_{i=2}^{k-3}a_{i}m_{i}\right)+a_{k-2}m_{k+1 }+a_{k-1}m_{k-1}+a_{k}m_{1}\] \[= a_{1}m_{k}+a_{k-2}m_{k+1}+a_{k}m_{1}-a_{1}m_{1}-a_{k-2}m_{k-2}- a_{k}m_{k}+\sum_{i=1}^{k}a_{i}m_{i}\] \[= (m_{k}-m_{1})(a_{1}-a_{k})+a_{k-2}(m_{k+1}-m_{k-2})+\sum_{i=1}^{k }a_{i}m_{i}\] \[\leq w^{*}+[(2k-1)-1](a_{1}-a_{k})+[2k+1-(k-2)]a_{k-2}\] \[= w^{*}+(2k-2)(a_{1}-a_{k})+(k+3)a_{k-2}=W.\]
So there exists a \(k\)-sum unoccupied by Breaker in the beginning of round \(k+2\) and hence Maker wins the \(G^{*}(W,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) game by choosing the available \(k\)-sum in round \(k+2\).
The bounds in Theorem 7.2 can be optimized using the technique in the proofs of Lemmas 4.1 and 4.2, but we do not attempt it here.
## 8. Concluding Remarks
It would be interesting to study Rado games for other well-studied equations in arithmetic Ramsey theory. One direction is to study Rado games for
\[a_{1}x_{1}^{1/\ell}+\cdots+a_{k}x_{k}^{1/\ell}=y^{1/\ell}, \tag{8.1}\]
where \(k,a_{1},\ldots,a_{k}\) are positive integers with \(k\geq 2\) and \(\ell\) is a non-zero integer. We studied the \(G(n,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) and \(G^{*}(n,a_{1}x_{1}+\cdots+a_{k}x_{k}=y)\) games in Section 7, but how the fractional power \(1/\ell\) interacts with the coefficients \(a_{1},\ldots,a_{k}\) is as yet unknown.
**Problem 8.1**.: What is the smallest integer \(n\) such that Maker wins the \(G(n,a_{1}x_{1}^{1/\ell}+\cdots+a_{k}x_{k}^{1/\ell}=y^{1/\ell})\) game for \(\ell\in\mathbb{Z}\backslash\{-1,0,1\}\)? And what is the smallest integer \(n\) such that Maker wins the \(G^{*}(n,a_{1}x_{1}^{1/\ell}+\cdots+a_{k}x_{k}^{1/\ell}=y^{1/\ell})\) game for \(\ell\in\mathbb{Z}\backslash\{-1,0,1\}\)?
Another direction is to study Rado games for the equation
\[x_{1}^{\ell}+\cdots+x_{k}^{\ell}=y^{\ell} \tag{8.2}\]
where \(\ell\in\mathbb{Z}\backslash\{-1,0,1\}\) and \(k\in\mathbb{N}\backslash\{1\}\). In 2016, Heule, Kullmann, and Marek [15] verified that if \(\{1,2,\ldots,7825\}\) is partitioned into two classes, then one of them contains a solution to Equation (8.2) with \(k=\ell=2\) and that there exists a partition of \(\{1,2,\ldots,7824\}\) into two classes so that neither contains a solution to Equation (8.2) with \(k=\ell=2\). It is easy to see that if \(a_{1},a_{2},b\in\mathbb{N}\) with \(a_{1}^{2}+a_{2}^{2}=b^{2}\), then \(a_{1}\neq a_{2}\). So the result of Heule, Kullmann, and Marek implies that Maker wins both the \(G(7825,x_{1}^{2}+x_{2}^{2}=y^{2})\) game and the \(G^{*}(7825,x_{1}^{2}+x_{2}^{2}=y^{2})\) game. It would be interesting to see if Maker can do better.
**Problem 8.2**.: Does there exist \(n<7825\) such that Maker wins the \(G^{*}(n,x_{1}^{2}+x_{2}^{2}=y^{2})\) game?
The situation for Maker is more complicated when \(\ell\geq 3\). By Fermat's last theorem [24], for all \(n,\ell\in\mathbb{N}\) with \(\ell\geq 3\), Breaker wins both the \(G(n,x_{1}^{\ell}+x_{2}^{\ell}=y^{\ell})\) game and the \(G^{*}(n,x_{1}^{\ell}+x_{2}^{\ell}=y^{\ell})\) game. By homogeneity, Breaker also wins the \(G(n,x_{1}^{\ell}+x_{2}^{\ell}=y^{\ell})\) game and the \(G^{*}(n,x_{1}^{\ell}+x_{2}^{\ell}=y^{\ell})\) game for all \(n\in\mathbb{N}\) and \(\ell\leq-3\). Hence, in order to study Rado games for Equation (8.2), one needs extra conditions on \(k\) and \(\ell\) to make sure there are solutions to Equation (8.2) in \(\mathbb{N}\). Recently, Chow, Lindqvist, and Prendiville [8] proved that, for all \(\ell\in\mathbb{N}\), there exists \(k_{0}\in\mathbb{N}\) such that for all \(k\geq k_{0}\), if we partition \(\mathbb{N}\) into two classes, then one of them contains a solution to Equation (8.2) with \(x_{1},\ldots,x_{k}\) not necessarily distinct. By the result of Brown and Rödl [6] described in Section 1, the same result holds for \(\ell\in\{-1,-2,\ldots\}\) as well. For example, if \(|\ell|=2\), then \(k=4\) suffices; and if \(|\ell|=3\), then \(k=7\) is enough.
|
2309.10414 | Effects of plasma resistivity in three-dimensional full-F gyro-fluid
turbulence simulations | A full-F, isothermal, electromagnetic, gyro-fluid model is used to simulate
plasma turbulence in a COMPASS-sized, diverted tokamak. A parameter scan
covering three orders of magnitude of plasma resistivity and two values for the
ion to electron temperature ratio with otherwise fixed parameters is set up and
analysed. Simulations are performed with a new version of the FELTOR code,
which is fully parallelized on GPUs. Each simulation covers a couple of
milliseconds.
Two transport regimes for high and low plasma resistivities are revealed.
Beyond a critical resistivity the mass and energy confinement reduces with
increasing resistivity. Further, for high plasma resistivity the direction of
parallel acceleration is swapped compared to low resistivity.
The integration of exact conservation laws over the closed field line region
allows for an identification of numerical errors within the simulations. The
electron force balance and energy conservation show relative errors on the
order of $10^{-3}$ while the particle conservation and ion momentum balance
show errors on the order of $10^{-2}$.
Relative fluctuations amplitudes increase from below $1\%$ in the core to
$15\%$ in the edge and up to $40\%$ in the scrape-off layer.
Finally, three-dimensional visualisations using ray tracing techniques are
displayed and discussed. The field-alignment of turbulent fluctuations in
density and parallel current becomes evident. | M. Wiesenberger, M. Held | 2023-09-19T08:27:52Z | http://arxiv.org/abs/2309.10414v1 | # Effects of plasma resistivity in three-dimensional full-F gyro-fluid turbulence simulations
###### Abstract
A full-F, isothermal, electromagnetic, gyro-fluid model is used to simulate plasma turbulence in a COMPASS-sized, diverted tokamak. A parameter scan covering three orders of magnitude of plasma resistivity and two values for the ion to electron temperature ratio with otherwise fixed parameters is set up and analysed. Simulations are performed with a new version of the FELTOR code, which is fully parallelized on GPUs. Each simulation covers a couple of milliseconds.
Two transport regimes for high and low plasma resistivities are revealed. Beyond a critical resistivity the mass and energy confinement reduces with increasing resistivity. Further, for high plasma resistivity the direction of parallel acceleration is swapped compared to low resistivity.
The integration of exact conservation laws over the closed field line region allows for an identification of numerical errors within the simulations. The electron force balance and energy conservation show relative errors on the order of \(10^{-3}\) while the particle conservation and ion momentum balance show errors on the order of \(10^{-2}\). Relative fluctuations amplitudes increase from below 1% in the core to 15% in the edge and up to 40% in the scrape-off layer.
Finally, three-dimensional visualisations using ray tracing techniques are displayed and discussed. The field-alignment of turbulent fluctuations in density and parallel current becomes evident.
Footnote †: _Plasma physics and controlled fusion_
_Keywords_: gyro-fluid, resistivity, edge transport, confinement, FELTOR
## 1 Introduction
Turbulence in the edge and scrape-off layer (SOL) regions of magnetically confined plasmas displays very efficient (and unwelcome) transport properties [1, 2]. In fact, the observed levels of transport of particles and thermal energy out of the confined region by far exceed the ones predicted by collisional transport theory [3, 4] even if neoclassical effects from the magnetic field geometry are taken into account. This has led to the alternative denomination of turbulent transport as "anomalous" transport. Since particle and energy confinement are the ultimate goal of any magnetic fusion device plasma turbulence is subject to intensive research.
Numerous challenges exist when modelling plasma turbulence. For example, it is observed that relative fluctuation levels increase from the edge into the SOL and may approach and even exceed order unity [5, 6, 7, 8, 9]. This was recently also found close to the X-point region [10]. This means that a linearisation of equations around a background profile is inadmissible in modelling. Avoiding such a separation between stationary profile and dynamic fluctuations in models has the additional advantage that a profile can interact with turbulence and evolve self-consistently in time. The profile is then an output of the model rather than a given input.
Furthermore, it is observed that the ratio of ion to electron temperature is above one in the edge and scrape-off layer regions [11, 12, 13]. Turbulent eddies in the edge and blobs in the scrape-off layer are of the size \(\rho_{s}=\sqrt{T_{e}m_{i}}/(eB_{0})\) where \(T_{e}\) and \(m_{i}\) are electron temperature and ion mass respectively, \(e\) is unit charge and \(B_{0}\) is the reference magnetic field strength. With \(\rho_{i}=\sqrt{T_{i}m_{i}}/(eB_{0})\approx\rho_{s}\) (with \(T_{i}\) the ion temperature) this leads to finite Larmor radius and polarization effects being important for the dynamics of turbulent eddies and blobs [14, 15, 16].
Full-F gyro-fluid models are able to evolve large fluctuation amplitudes, steep background profiles and include finite Larmor radius effects [17, 14, 18, 19, 16]. Gyro-fluid models in general result from taking velocity space moments over an underlying gyro-kinetic model and share many of its advantages: finite Larmor radius corrections, consistent particle drifts, an energy and momentum theorem based on variational methods in the underlying gyro-kinetic model and an inherent symmetry in moment equations with regards to multiple ion species. These advantages are absent from so-called drift-fluid models that result from a drift-expansion of the Braginskii equations [20, 21, 22, 23, 24]. A downside of gyro-fluid models, inherited again from their underlying gyro-kinetic models, are the impractical expressions for plasma-neutral interactions and scattering collisions available today. Attempts at numerically implementable expressions derived in a long-wavelength limit were recently presented in [25]. Compared to gyro-kinetic models, gyro-fluid models invoke a closure scheme that can be tailored to specific physical regimes of interest, e.g. the collisional regime. Such closures can be adopted at the chosen number of moments, which emerge typically from a Hermite-Laguerre expansion in velocity space of the gyro-averaged gyro-center distribution function [17, 19]. The number of moment equations is usually small (2 in the present work) and the associated reduced velocity space resolution translates to a corresponding saving in computational cost over gyro-kinetic models. This implies that gyro-fluid models are more computationally efficient for parameter scans or for resolving larger plasma volumes than gyro-kinetic models.
Further challenges arise in numerical approaches to plasma turbulence. The dynamics of a magnetized plasma is highly anisotropic with respect to \(\mathbf{\hat{b}}\), the magnetic unit vector. Fluctuations along \(\mathbf{\hat{b}}\) typically have a much larger extension \(L_{\parallel}\) than fluctuations perpendicular to it \(L_{\perp}\ll L_{\parallel}\). In a numerical simulation the use of field-aligned coordinates, in particular flux-tube coordinate systems thus seems appropriate. The field alignment translates to a low spatial resolution requirement along the field line following coordinate [26, 27, 28]. However, field aligned coordinate systems cannot include X-points in the geometry. This is a major downside as one or more X-points in the magnetic field equilibrium are a crucial ingredient to current tokamak design and in particular ITER [29]. The X-point is connected to the construction of a divertor, which separates the plasma-wall interactions from the confined plasma region [1]. Further, it plays a crucial role in and at least facilitates the transition to the high confinement mode [30, 31, 32]. Correct modelling of magnetic field equilibria that include one or even several X-points is thus critical.
Two solutions to the problem exist to date. With the increase in computational resources it is possible to directly discretize and simulate model equations on non field-aligned coordinate systems [33, 34]. This allows simulations including X-points as exemplified by the GBS [35], STORM [36] or TOKAM-3X [37] codes. However, such an approach does not exploit the field-aligned character of turbulence and can thus only be used for small to medium sized tokamaks due to both strong numerical diffusion and extreme computational cost [38, 39]. An alternative approach is the so-called flux-coordinate independent approach [40, 38, 41, 42]. Here, the grid is not field-aligned while at the same time the toroidal direction is resolved by only a few points. Turbulence simulations of AUG were successfully performed with the GRILLIX
code [43, 44].
For the verification of codes the method of manufactured solutions is often used [45, 37, 33, 35]. However, even in two-dimensional turbulence simulations numerical errors on the order of machine precision exponentially increase to order one within a short period of time [46]. This is fundamentally due to the turbulent nature of the underlying model and not an issue of the numerical implementation. Thus, turbulence simulations due to their very nature cannot reach pointwise convergence after sufficiently long simulation time. This makes the method of manufactured solutions unsuitable for a verification of results on a long time span.
In this contribution we address the above challenges in a new version of the simulation code FELTOR [47, 46]. As opposed to the drift-fluid models discretized in the mentioned GRILLIX, TOKAM-3X, GBS and STORM codes FELTOR discretizes a full-F gyro-fluid model and thus benefits from finite Larmor radius effects, an exact energy conservation law and consistent particle drifts. Polarization effects are taken in the long wavelength limit in order to avoid inversion of an operator function [16]. Similar to the GRILLIX code FELTOR uses an FCI scheme for its parallel dynamics but in a recently upgraded finite volume FCI formulation [42] that has significantly improved conservation properties compared to previous versions [40, 38, 41]. For the perpendicular dynamics FELTOR chooses discontinuous Galerkin methods [48, 49] in contrast to the above codes, which rely on finite difference methods. FELTOR is the only code among the mentioned ones that is fully ported to GPUs using a platform independent MPI+X implementation. Recently, all the above codes including FELTOR were part of a validation effort in TORPEX and TCV [50, 51].
FELTOR allows stable simulations encompassing several milliseconds of turbulent dynamics. The simulations are computationally cheap enough that a parameter scan is possible. We vary the plasma resistivity and the ion to electron temperature ratio in 12 simulations. We present techniques for three-dimensional visualisations using ray-tracing in order to gain visual intuition of the magnetic field, the density and the parallel current. In particular the field-alignment of turbulent fluctuations with \(L_{\perp}\ll L_{\parallel}\) is visible. In order to quantitatively analyse the simulation data we introduce flux-surface averages and integration. Numerically, these are accurately computed by transforming on a flux-aligned grid [42]. We discuss flux-surface averaged density and fluctuation profiles. Afterwards, we focus on verification of the implementation. Since, as pointed out above, pointwise long-term convergence tests are impossible, we here present verification through exact analytical conservation laws. These include mass, parallel momentum and energy conservation as well as the electron force balance. We suggest using volume and time integration to define a numerical error of simulation results. At the same time we are able to identify the largest and most important terms in each of the mentioned conservation equations and further in the total parallel momentum balance. Applied to the mass and energy conservation, we can compute and discuss the mass and energy confinement times. The latter relate back to our initial statement of confinement being an important goal for magnetic fusion devices.
This work is structured as follows. In Section 2 we present the gyro-fluid model including resistivity and diffusive terms, the density source and boundary conditions. This is followed by Section 3 where the magnetic field is described. A parameter scan over plasma resistivity and ion temperature is setup for model simulations of the COMPASS tokamak in Section 4 discussing the COMPASS magnetic field and the exact physical parameters in use. In Section 5 we present the results. We discuss performance observations, three-dimensional visualisations and density and fluctuation profiles. In particular, here we show the numerical verification with a focus on mass, energy, ion momentum and parallel force balance. Finally, we discuss particle and energy confinement times computed from previously analysed terms in the mass and energy conservation equations. We conclude in Section 6.
## 2 The model
In the following we denote \(\phi\) as the electric potential, \(A_{\parallel}\) the parallel magnetic potential, \(m\) the species mass, \(q\) the species charge, \(N\) the gyro-centre density, \(U_{\parallel}\) the gyro-centre parallel velocity, \(T_{\perp}\), \(T_{\parallel}\) the perpendicular, parallel temperatures, \(\mathbf{\hat{b}}\) the magnetic unit vector field and \(B\) the magnetic field strength. Note that all species dependent quantities \(m\), \(q\), \(N\), \(U_{\parallel}\), \(T_{\perp}\) and \(T_{\parallel}\) have an implied species index \(s\) that we omit in the notation. We define two magnetic field curvature vectors
\[\mathbf{K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}} \coloneqq \frac{1}{B}(\mathbf{\nabla}\times\mathbf{\hat{b}}), \tag{1}\] \[\mathbf{K}_{\mathbf{\nabla}B} \coloneqq \frac{1}{B}(\mathbf{\hat{b}}\times\mathbf{\nabla}\ln B), \tag{2}\]
as well as perpendicular and parallel derivatives
\[\nabla_{\perp} \coloneqq -\mathbf{\hat{b}}\times(\mathbf{\hat{b}}\times\mathbf{\nabla}), \Delta_{\perp} \coloneqq \mathbf{\nabla}\cdot\mathbf{\nabla}_{\perp}, \tag{3}\] \[\nabla_{\parallel} \coloneqq \mathbf{\hat{b}}\cdot\mathbf{\nabla}, \Delta_{\parallel} \coloneqq \mathbf{\nabla}\cdot\mathbf{\hat{b}}\mathbf{\hat{b}}\cdot\mathbf{\nabla}. \tag{4}\]
Notice the formulary in Appendix A.
### Gyro-fluid moment equations
The gyro-centre continuity and parallel momentum conservation equations read for each species [17, 19, 52, 53] (omitting the species label)
\[\frac{\partial}{\partial t}N +\nabla\cdot\mathbf{J}_{N}=\Lambda_{N}+S_{N}, \tag{5}\] \[\frac{\partial}{\partial t}\left(mNU_{\parallel}\right) +qN\frac{\partial}{\partial t}A_{\parallel}+\nabla\cdot\mathbf{J}_{mNU}\] \[= F_{mNU,\mathbf{\nabla}B}+F_{mNU,\psi}+R_{\parallel}+\Lambda_{mNU}. \tag{6}\]
The system is closed by the parallel Ampere law
\[-\mu_{0}\Delta_{\perp}A_{\parallel}=\sum_{\mathrm{s}}qNU_{\parallel} \tag{7}\]
and the polarisation equation
\[\sum_{\mathrm{s}}\left[q\Gamma_{1}N+\nabla\cdot\left(\frac{mN}{B^{2}}\nabla_{ \perp}\phi\right)\right]=0, \tag{8}\]
where we sum over all species. We have the density current
\[\mathbf{J}_{N}:= NU_{\parallel}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})+N\frac{\mathbf{\hat{b}} \times\nabla\psi}{B}\] \[+\frac{NT_{\parallel}+mNU_{\parallel}^{2}}{q}\mathbf{K}_{\mathbf{\nabla} \times\mathbf{\hat{b}}}+\frac{NT_{\perp}}{q}\mathbf{K}_{\mathbf{\nabla}B}, \tag{9}\]
momentum current
\[\mathbf{J}_{mNU}:= (mNU_{\parallel}^{2}+NT_{\parallel})(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\] \[+mNU_{\parallel}\frac{\mathbf{\hat{b}}\times\nabla\psi}{B}\] \[+m\frac{3U_{\parallel}NT_{\parallel}+mNU_{\parallel}^{3}}{q}\mathbf{K }_{\mathbf{\nabla}\times\mathbf{\hat{b}}}\] \[+m\frac{U_{\parallel}NT_{\perp}}{q}\mathbf{K}_{\mathbf{\nabla}B} \tag{10}\]
and the electric and mirror force terms
\[F_{mNU,\psi} = -qN(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla\psi \tag{11}\] \[-mNU_{\parallel}\mathbf{K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}}\cdot\nabla\psi,\] \[F_{mNU,\mathbf{\nabla}B} = -NT_{\perp}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla\ln B\] (12) \[-m\frac{U_{\parallel}NT_{\perp}}{q}\mathbf{K}_{\mathbf{\nabla}\times\mathbf{ \hat{b}}}\cdot\nabla\ln B.\]
The definitions of the diffusive terms \(\Lambda_{N}\) and \(\Lambda_{mNU}\) and of the resistivity \(R_{\parallel}\) are given in Section 2.3, while the gyro-centre density source term \(S_{N}\) is defined in Section 2.4. No source is added in the parallel momentum equation. We use
\[\Gamma_{1}:= \left(1-\frac{\rho_{0}^{2}}{2}\Delta_{\perp}\right)^{-1},\quad \quad\quad\rho_{0}^{2}:=\frac{mT_{\perp}}{q^{2}B_{0}^{2}}, \tag{13}\] \[\mathbf{b}_{\perp}:= \frac{\nabla\times A_{\parallel}\mathbf{\hat{b}}}{B}=A_{\parallel}\bm {K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}}+\frac{\mathbf{\nabla}A_{\parallel}\times\mathbf{ \hat{b}}}{B},\] (14) \[\psi:= \Gamma_{1}(\phi)-\frac{m}{2qB^{2}}|\nabla_{\perp}\phi|^{2},\] (15) \[T_{\perp}= T_{\parallel}=T=const. \tag{16}\]
These are the Padé approximated gyro-average operator \(\Gamma_{1}\) with thermal gyro-radius \(\rho_{0}\), the perpendicular magnetic field perturbation \(\mathbf{b}_{\perp}\), the gyro-centre potential \(\psi\) and temperature \(T\).
We keep a 2nd order accurate gyro-averaging operator \(\Gamma_{1}\) independent of particle position that closely mimics an exponential to arbitrary order [19]. The polarisation in the second term in Eq. (8) is taken in a long wavelength limit while all finite Larmor radius effects are neglected in the parallel magnetic potential \(A_{\parallel}\).
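To make the action of the Padé approximation in Eq. (13) concrete: in Fourier space \(\Gamma_{1}\) multiplies each mode by \(1/(1+k_{\perp}^{2}\rho_{0}^{2}/2)\), i.e. it acts as a low-pass filter on gyro-radius scales. A minimal one-dimensional Python sketch (our own illustration; FELTOR's actual discretization uses discontinuous Galerkin methods, not FFTs):

```python
import numpy as np

# Apply Gamma_1 = (1 - rho0^2/2 * Laplace)^{-1} to a 1-D periodic field.
def gamma1(phi, dx, rho0):
    k = 2 * np.pi * np.fft.fftfreq(phi.size, d=dx)    # wave numbers
    return np.real(np.fft.ifft(np.fft.fft(phi) / (1 + 0.5 * rho0**2 * k**2)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
phi = np.sin(x) + 0.2 * np.sin(10 * x)
smoothed = gamma1(phi, x[1] - x[0], rho0=1.0)
# each mode sin(kx) is damped by the factor 1/(1 + k^2 rho0^2/2)
print(smoothed[:3])
```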
In Eq. (9) we can identify the density flux parallel to the magnetic field \(\mathbf{\hat{b}}\) perturbed by magnetic fluctuations \(\mathbf{b}_{\perp}\), followed by the \(\mathbf{E}\times\mathbf{B}\), the curvature and the grad-B drifts.
The first term in the momentum current Eq. (10) consists of the parallel momentum current quadratic in the parallel velocity \(U_{\parallel}\). This term is an expression of the Burgers term and can lead to shocks if no parallel viscosity were added to the system. The term \(\nabla\cdot(NT_{\parallel}(\mathbf{\hat{b}}+\mathbf{b}_{\perp}))\) stemming from \(\mathbf{\nabla}\cdot\mathbf{J}_{mNU}\) with \(\mathbf{J}_{mNU}\) from Eq. (10) can be combined with the mirror force \(NT_{\perp}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla\ln B\) in Eq. (12) to yield the familiar pressure gradient \((\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla(NT)\) with the identity \(\nabla\cdot(\mathbf{\hat{b}}+\mathbf{b}_{\perp})=-(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot \nabla\ln B\) and the assumption \(T_{\perp}=T_{\parallel}=T\). Further, in Eq. (10) we have the \(\mathbf{E}\times\mathbf{B}\) and curvature drift transport of parallel momentum. In the parallel electric force Eq. (11) we have the parallel and perturbed gradients of the gyro-centre electric potential \(\psi\) together with a correction due to the magnetic curvature. Even though the latter term is small it must be kept to guarantee energetic consistency. The equivalent correction also appears in the mirror force term Eq. (12).
### Simplifications
#### 2.2.1 Two species
Even though the model is formulated inherently as a multi-species model we here only treat an electron-ion plasma, specifically with deuterium ions (\(q_{i}=e\), \(m_{i}\approx 2m_{p}\) with \(m_{p}\) the proton mass). The model can also be used to simulate electron-positron plasmas [54]. Multi-species gyro-fluid simulations were presented in [55, 56].
#### 2.2.2 Small electron mass
We take the electron gyro-radius to be zero \(\rho_{0,e}=0\) and thus have [14, 15]
\[\Gamma_{1,e}=1,\quad\quad\psi_{e}=\phi. \tag{17}\]
This is combined with neglecting the electron mass in the polarisation equation, which thus reads
\[-en_{e}+q\Gamma_{1,i}N_{i}+\nabla\cdot\left(\frac{m_{i}N_{i}}{B^{2}}\nabla_{ \perp}\phi\right)=0. \tag{18}\]
Note here that we denote the electron gyro-centre density as \(n_{e}\) and gyro-centre parallel velocity as
\(u_{\parallel,e}\) (as opposed to \(N_{e}\) and \(U_{\parallel,e}\)) to signify that these quantities coincide with the actual fluid particle density and parallel particle velocity.
#### 2.2.3 Toroidal field line approximation
The toroidal field line approximation applies \(\hat{\mathbf{b}}=\pm\hat{\mathbf{e}}_{\varphi}\) to all perpendicular operators (e.g.: perpendicular elliptic operator and curvature operators) but retains the full expression for the magnetic field unit vector \(\hat{\mathbf{b}}\) in parallel operators \(\nabla_{\parallel}\) and \(\Delta_{\parallel}\)[52, 53]. Note that we allow the negative sign \(-\hat{\mathbf{e}}_{\varphi}\) to enable a sign reversal of the magnetic field.
We employ cylindrical coordinates \((R,Z,\varphi)\), with \(\varphi\) anti directed to the geometric toroidal angle (**clockwise** if viewed from above) to obtain a right handed system. This yields
\[\hat{\mathbf{b}}\times\nabla f\cdot\nabla g \approx\pm\hat{\mathbf{e}}_{\varphi}\times\nabla f\cdot\nabla g=\pm \hat{\mathbf{e}}_{\varphi}\cdot(\nabla f\times\nabla g)\] \[=\pm\frac{1}{R}\left(\frac{\partial f}{\partial R}\frac{\partial g }{\partial Z}-\frac{\partial f}{\partial Z}\frac{\partial g}{\partial R}\right), \tag{19}\] \[\nabla_{\perp}f \approx\frac{\partial f}{\partial R}\hat{\mathbf{e}}_{R}+\frac{ \partial f}{\partial Z}\hat{\mathbf{e}}_{Z},\] (20) \[\Delta_{\perp}f \approx\frac{1}{R}\frac{\partial}{\partial R}\left(R\frac{ \partial f}{\partial R}\right)+\frac{\partial}{\partial Z}\left(\frac{\partial f }{\partial Z}\right). \tag{21}\]
The curl of \(\hat{\mathbf{b}}\) reduces to \(\nabla\times\hat{\mathbf{b}}\approx-\frac{\pm 1}{R}\hat{\mathbf{e}}_{Z}\). This simplifies the curvature operators to:
\[\mathbf{K}_{\nabla\times\hat{\mathbf{b}}} \approx-\frac{\pm 1}{BR}\hat{\mathbf{e}}_{Z},\] \[\mathbf{K}_{\nabla B} \approx-\frac{\pm 1}{B^{2}}\frac{\partial B}{\partial Z}\hat{\mathbf{e}}_{R }+\frac{\pm 1}{B^{2}}\frac{\partial B}{\partial R}\hat{\mathbf{e}}_{Z} \tag{22}\]
and
\[\nabla\cdot\mathbf{K}_{\nabla\times\hat{\mathbf{b}}}=\frac{\pm 1}{RB^{2}}\frac{ \partial B}{\partial Z}=-\nabla\cdot\mathbf{K}_{\nabla B}, \tag{23}\]
which maintains a vanishing divergence of the total curvature \(\nabla\cdot\mathbf{K}=0\) with \(\mathbf{K}:=\mathbf{K}_{\nabla\times\hat{\mathbf{b}}}+\mathbf{K}_{\nabla B}\).
The toroidal field approximation is motivated numerically. The true perpendicular derivatives contain derivatives in the \(\varphi\) direction, which would have to be resolved numerically. Since we expect turbulent eddies to be highly elongated along the field lines but very narrow perpendicular to \(\hat{\mathbf{b}}\) this translates to a very high resolution requirement in the \(\varphi\) direction. The toroidal field approximation in combination with the FCI approach avoids this.
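The divergence identity Eq. (23) can be verified symbolically for a generic axisymmetric field strength \(B(R,Z)\). A small SymPy sketch (our own check; the \(\pm 1\) sign factor is common to both sides and is set to \(+1\) here):

```python
import sympy as sp

R, Z = sp.symbols('R Z', positive=True)
B = sp.Function('B')(R, Z)

K_curl = (sp.Integer(0), -1 / (B * R))                    # (e_R, e_Z) components, Eq. (22)
K_gradB = (-sp.diff(B, Z) / B**2, sp.diff(B, R) / B**2)   # Eq. (22)

def div_RZ(v_R, v_Z):
    # divergence of an axisymmetric vector field in cylindrical coordinates
    return sp.diff(R * v_R, R) / R + sp.diff(v_Z, Z)

assert sp.simplify(div_RZ(*K_curl) - sp.diff(B, Z) / (R * B**2)) == 0
assert sp.simplify(div_RZ(*K_curl) + div_RZ(*K_gradB)) == 0
print("Eq. (23) verified")
```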
### Resistivity and diffusive terms
Here, we discuss the terms \(\Lambda_{N}\) in Eq. (5) and \(\Lambda_{mNU}\), \(R_{\parallel}\) in Eq. (6). These terms take the form
\[\Lambda_{N}:= -\mu_{N,\perp}(-\Delta_{\perp})^{2}N+\mu_{N,\parallel}\Delta_{ \parallel}N\equiv-\nabla\cdot\mathbf{j}_{N,\nu}, \tag{24}\]
with \(\mathbf{j}_{N,\nu}:=-\mu_{N,\perp}\nabla_{\perp}(-\Delta_{\perp}N)-\mu_{N, \parallel}\hat{\mathbf{b}}\nabla_{\parallel}N\),
\[\Lambda_{m_{e}n_{e}u_{e}}:= -\mu_{U,\perp}(-\Delta_{\perp})^{2}u_{\parallel,e}+\mu_{ \parallel,e}\Delta_{\parallel}u_{\parallel,e}\] \[-\nabla\cdot(m_{e}u_{\parallel,e}\mathbf{j}_{n_{e},\nu}),\] \[\Lambda_{m_{i}N_{i}U_{i}}:= -\mu_{U,\perp}(-\Delta_{\perp})^{2}U_{\parallel,i}+\mu_{ \parallel,i}\Delta_{\parallel}U_{\parallel,i}\] \[-\nabla\cdot(m_{i}U_{\parallel,i}\mathbf{j}_{N,\nu}), \tag{25}\]
and
\[R_{\parallel}:= -\eta_{\parallel}eqn_{e}(N_{i}U_{\parallel,i}-n_{e}u_{\parallel,e }). \tag{26}\]
We first notice that the diffusion terms have the form of total divergences \(\Lambda_{N}=-\nabla\cdot j_{N,\nu}\) and \(\Lambda_{mNU}=:-\nabla\cdot(\mathbf{\tilde{j}}_{mNU,\nu}+mU_{\parallel}\mathbf{j}_{N, \nu})\). Under volume integration these terms vanish modulo surface terms, which is important for mass and momentum conservation. Second, we notice the term \(-\nabla\cdot(mU_{\parallel}\mathbf{j}_{N,\nu})\) in the momentum diffusion (25) has the form of a velocity convection. This is a correction term that prevents energy from being generated by mass diffusion as we will see explicitly in Section 5.3.2, as suggested for example in [57, 42].
The consistent treatment of the diffusive terms is particularly important for the parallel ion momentum equation. The alternative variant \(\Lambda_{mNU,\parallel}:=\mu_{\parallel}\Delta_{\parallel}U_{\parallel}+\mu_{N, \parallel}mU_{\parallel}\Delta_{\parallel}N\) has the advantage that in velocity formulation \(\Lambda_{U,\parallel}=\mu_{\parallel}\Delta_{\parallel}U_{\parallel}/(mN)\) simplifies [43]. However, in this formulation the term \(\mu_{N,\parallel}mU_{\parallel}\Delta_{\parallel}N\) unphysically generates momentum, leading to artificial toroidal rotation after a long enough simulation time. Other works on drift-fluid models completely neglect the parallel ion and electron viscosities [37, 36, 35].
In Eqs. (24) and (25), \(\mu_{N,\perp}\) and \(\mu_{U,\perp}\) are ad-hoc artificial numerical diffusion coefficients that are added to stabilize perpendicular advection and are thought to be small. In the same sense \(\mu_{N,\parallel}\) represents artificial parallel diffusion necessary to stabilize the parallel advection [42].
The parallel velocity difference \(u_{\parallel,i}-u_{\parallel,e}:=(N_{i}U_{\parallel,i}-n_{e}u_{\parallel,e})/n_ {e}\) determines the parallel resistive term \(R_{\parallel}\) in Eq. (26). The term is positive for electrons with \(q_{e}=-e\) and negative for ions with \(q_{i}=e\). This form both conserves parallel momentum and vanishes for zero current but leads to a quadratic energy dissipation term only in the long-wavelength limit as we discuss in Section 5.3.2.
For the parallel viscosity \(\mu_{\parallel}\) and the parallel resistivity \(\eta\) we copy the parallel resistive and viscous terms from the Braginskii fluid equations [20]. The electron-ion and ion-ion collision frequencies are given by \(\nu_{ei}=\sqrt{2}z^{2}e^{4}\ln\Lambda n_{e}/(12\pi^{3/2}\sqrt{m_{e}}\epsilon_{ 0}^{2}T_{e}^{3/2})\), \(\nu_{ee}=\nu_{ei}/\sqrt{2}\) and \(\nu_{ii}=z^{4}e^{4}\ln\Lambda n_{i}/(12\pi^{3/2}\sqrt{m_{i}}\epsilon_{0}^{2}T_{i }^{3/2})=\nu_{ei}\sqrt{m_{e}/m_{i}}/((T_{i}/T_{e})^{3/2}\sqrt{2})\). With the parallel Spitzer resistivity \(\eta_{\parallel}:=0.51\frac{m_{e}\nu_{ei}}{n_{e}e^{2}}\) and the parallel electron and ion viscosities \(\mu_{\parallel,e}:=0.73\frac{n_{e}T_{e}}{\nu_{ei}}\) and
\[\mu_{\parallel,i}=0.96\frac{n_{i}T_{i}}{\nu_{ii}}\]
[20], we define the dimensionless parameter
\[\eta:=\frac{en_{0}\eta_{\parallel}}{B_{0}}=0.51\frac{\nu_{ei,0}}{\Omega_{e0}}=8.45\cdot 10^{-5}\ln\Lambda\left(\frac{n_{0}}{10^{19}\,\mathrm{m}^{-3}} \right)\left(\frac{T_{e}}{\mathrm{eV}}\right)^{-3/2}\left(\frac{B_{0}}{ \mathrm{T}}\right)^{-1}, \tag{27}\]
with \(\nu_{ei,0}:=\nu_{ei}(n_{0},T_{e})\) as well as
\[\nu_{\parallel,e} :=\frac{\mu_{\parallel,e}}{m_{e}n_{0}\rho_{s}^{2}\Omega_{i0}}=0.73 \frac{\Omega_{e0}}{\nu_{ei,0}}=\frac{0.37}{\eta}, \tag{28}\] \[\nu_{\parallel,i} :=\frac{\mu_{\parallel,i}}{m_{i}n_{0}\rho_{s}^{2}\Omega_{i0}}=0.96 \frac{\Omega_{i0}}{\nu_{ii,0}}=\left(\frac{T_{i}}{T_{e}}\right)^{3/2}\sqrt{ \frac{m_{e}}{m_{i}}}\frac{0.69}{\eta}, \tag{29}\]
with \(\ln\Lambda\approx 10\), \(\Omega_{i0}=eB_{0}/m_{i}\) the ion gyro-frequency and \(\Omega_{e0}=eB_{0}/m_{e}\) the electron gyro-frequency. Finally, in order to prevent an unreasonably small simulation time step we need to impose a maximum and minimum on \(\nu_{\parallel,e}\) and \(\nu_{\parallel,i}\):
\[\nu_{\parallel,e} =\min\biggl{(}\frac{0.37}{\eta},\ \frac{0.37}{10^{-4}}\biggr{)}, \tag{30a}\] \[\nu_{\parallel,i} =\min\biggl{(}\max\left(\sqrt{\frac{m_{e}}{m_{i}}}\frac{0.69}{10^ {-4}},\ \biggl{(}\frac{T_{i}}{T_{e}}\biggr{)}^{3/2}\sqrt{\frac{m_{e}}{m_{i}}}\frac{0.69 }{\eta}\right),\] \[\qquad\qquad\frac{0.37}{10^{-4}}\biggr{)}. \tag{30b}\]
We emphasize that this restriction is numerically motivated. The physical implications of Eq. (30) are discussed in Section 5.
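The clamping in Eq. (30) translates directly into code. The following Python sketch evaluates Eqs. (28)-(30) for a given dimensionless resistivity; the function name, the deuterium mass ratio default and the floor value \(10^{-4}\) follow the text, but the snippet is an illustration and not FELTOR's actual implementation.

```python
import numpy as np

def parallel_viscosities(eta, ti_over_te, me_over_mi=2.72e-4, eta_floor=1e-4):
    """Sketch of Eqs. (28)-(30): dimensionless parallel viscosities from
    the dimensionless resistivity eta, with the numerically motivated
    floor eta_floor = 1e-4 on eta."""
    nu_e = min(0.37 / eta, 0.37 / eta_floor)                        # Eq. (30a)
    nu_i_raw = ti_over_te**1.5 * np.sqrt(me_over_mi) * 0.69 / eta   # Eq. (29)
    nu_i_min = np.sqrt(me_over_mi) * 0.69 / eta_floor
    nu_i = min(max(nu_i_min, nu_i_raw), 0.37 / eta_floor)           # Eq. (30b)
    return nu_e, nu_i

# example: lowest resistivity of the scan with Ti = Te
print(parallel_viscosities(1e-6, 1.0))
```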
### Sources
We provide a constant influx of particles
\[S_{n_{e}}(R,Z,\varphi,t)=\omega_{s}n_{\mathrm{s}}(R,Z), \tag{31}\]
where \(\omega_{s}\) is the source strength parameter and \(n_{\mathrm{s}}(R,Z)\) is an in principle arbitrary toroidally symmetric profile, which we discuss further in Section 4.2. In order not to generate potential with the source term, the ion gyro-centre source needs to fulfill \(S_{n_{e}}=\Gamma_{1,i}S_{N_{i}}+\nabla\cdot\left(\frac{m_{i}S_{N_{i}}}{B^{2}}\nabla_{\perp}\phi\right)\) for a given particle source \(S_{n_{e}}\) and potential \(\phi\), which follows from a time derivative of Eq. (8). We were unable to invert this equation numerically. Only in the long wavelength limit can it be inverted to yield the approximation [25]
\[S_{N_{i}}\approx\left(1-\frac{1}{2}\rho_{0i}^{2}\Delta_{\perp}\right)S_{n_{e}} -\nabla\cdot\left(\frac{m_{i}S_{n_{e}}}{B^{2}}\nabla_{\perp}\phi\right). \tag{32}\]
The long wavelength limit should be well-fulfilled for a realistic source term since the amplitude \(\omega_{s}\) is typically quite small. Note that the additional terms besides \(S_{n_{e}}\) in Eq. (32) are total divergences, which means they do not change the volume integrated "total" particle number created by the source.
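To illustrate Eq. (32), a minimal finite-difference sketch in Python is given below. The helper name ion_source and the use of np.gradient as a stand-in for FELTOR's discontinuous Galerkin derivatives are assumptions for illustration only.

```python
import numpy as np

def ion_source(S_ne, phi, B, rho0i, mi, dR, dZ):
    """Minimal sketch of the long-wavelength approximation Eq. (32) on a
    uniform (R,Z) grid; all arguments are 2D arrays except the scalars
    rho0i, mi, dR, dZ."""
    def grad(f):
        return np.gradient(f, dR, dZ)           # (df/dR, df/dZ)
    dS_dR, dS_dZ = grad(S_ne)
    lap_S = grad(dS_dR)[0] + grad(dS_dZ)[1]     # perpendicular Laplacian
    dphi_dR, dphi_dZ = grad(phi)
    fac = mi * S_ne / B**2
    div_pol = grad(fac * dphi_dR)[0] + grad(fac * dphi_dZ)[1]
    return S_ne - 0.5 * rho0i**2 * lap_S - div_pol
```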
A second task of the source \(S_{N}\) is to globally ensure a minimum density. This is required since through sheath dissipation the density can in principle become arbitrarily close to zero. This is, however, detrimental both to the stability of the simulation and to the CFL condition (and thus the allowed time step), and in reality it also never happens due to e.g. wall-recycling. For both electrons and ions we choose the additional source term
\[S_{N,\min} =-\omega_{\min}(N-n_{\min})H_{\alpha/2}(n_{\min}-\alpha/2-N), \tag{33}\]
where \(H_{\alpha}(x)\) is a continuously differentiable approximation to the Heaviside function with width \(\alpha\). The Heaviside function ensures that this source term only acts when the density is below the lower limit. In our simulations we choose \(\omega_{\min}=1\), \(n_{\min}=0.2n_{0}\), \(\alpha=0.05\).
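A minimal sketch of Eq. (33) in Python follows. The smoothstep shape chosen for \(H_{\alpha}\) is an assumption; Eq. (33) only requires a continuously differentiable approximation, and FELTOR's exact choice may differ.

```python
import numpy as np

def smooth_heaviside(x, alpha):
    # one possible C^1 approximation of width alpha (assumed shape)
    t = np.clip(x / alpha + 0.5, 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3

def density_floor_source(N, omega_min=1.0, n_min=0.2, alpha=0.05):
    """Sketch of Eq. (33): a source that switches on smoothly wherever
    the density N (in units of n_0) falls below the lower limit."""
    return -omega_min * (N - n_min) * smooth_heaviside(
        n_min - alpha / 2.0 - N, alpha / 2.0)
```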
### Boundary conditions
Following [43] we set up boundary conditions with the immersed boundary method using volume penalization [58]. In order to do this we first formally define a wall function
\[\chi_{w}(\mathbf{x})=\begin{cases}1\text{ for }\mathbf{x}\in\Omega_{w}\\ 0\text{ else}\end{cases}, \tag{34}\]
where \(\Omega_{w}\) is the wall domain. Analogously, a sheath function \(\chi_{s}\) can be defined using a sheath domain \(\Omega_{s}\). Both \(\chi_{w}\) and \(\chi_{s}\) are further specified in Section 4.1. We have \(\Omega_{s}\cap\Omega_{w}=\varnothing\). We can then enforce boundary conditions on the wall and sheath by
\[\frac{\partial}{\partial t}N= F_{N}(1-\chi_{s}-\chi_{w})-\omega_{s}\chi_{s}(N-N_{sh})\] \[-\omega_{w}\chi_{w}(N-N_{w}), \tag{35a}\] \[\frac{\partial}{\partial t}(mU_{\parallel}+qA_{\parallel})= \frac{mF_{mNU}-mU_{\parallel}F_{N}}{N}(1-\chi_{s}-\chi_{w})\] \[-m\omega_{s}\chi_{s}(U_{\parallel}-U_{\parallel,sh})\] \[-m\omega_{w}\chi_{w}(U_{\parallel}-U_{\parallel,w}), \tag{35b}\]
where \(F_{N}:=-\nabla\cdot\mathbf{j}_{N}-\nabla\cdot\mathbf{J}_{N}+\Lambda_{N}+S_{N}\) follows from Eq. (5) and \(F_{mNU}=-\nabla\cdot\mathbf{J}_{mNU}+F_{mNU,\nabla B}+F_{mNU,\psi}+R_{\parallel}+\Lambda_{mNU}\) follows from Eq. (6). We choose \(\omega_{s}=5\) and \(\omega_{w}=0.01\) (these penalization coefficients are not to be confused with the source strength in Eq. (31)). The polarization equation is penalized according to the immersed boundary method
\[-\nabla\cdot\left(\frac{N_{i}}{B^{2}}\nabla_{\perp}\phi\right)=(\Gamma_{1,i}N_{i}-n_{e})(1-\chi_{w}-\chi_{s}). \tag{36}\]
We do not penalize the parallel Ampere law for reasons of numerical stability.
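As a concrete illustration of Eq. (35a), the following minimal Python sketch assembles a penalized right hand side; the function name and array arguments are hypothetical and merely mimic the structure of the scheme, not FELTOR's actual implementation.

```python
def penalized_density_rhs(F_N, N, chi_s, chi_w, N_sh, N_w,
                          omega_s=5.0, omega_w=0.01):
    """Sketch of the penalized density equation (35a): the physical
    right hand side F_N acts only outside wall and sheath, while inside
    those regions the density relaxes towards N_sh and N_w."""
    return (F_N * (1.0 - chi_s - chi_w)
            - omega_s * chi_s * (N - N_sh)
            - omega_w * chi_w * (N - N_w))
```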
We choose the wall conditions \(N_{w}=0.2\) and \(U_{\parallel,w}=0\). Further, we have \(\phi_{w}=0\) and \(\nabla_{\perp}A_{\parallel,w}=0\) for the electric and magnetic potential. The latter two are however only enforced at the domain boundaries rather than through a penalization method. We have the insulating sheath boundary conditions
\[U_{\|,i,sh} =\pm\sqrt{\frac{T_{e}+T_{i}}{m_{i}}}, \tag{37}\] \[u_{\|,e,sh} =U_{\|,i,sh}N_{i}/n_{e}. \tag{38}\]
\(N_{sh}\) is chosen such that \(\nabla_{\|}N|_{sh}=0\).
## 3 The magnetic field
This section discusses FELTOR's general capabilities to represent toroidally symmetric magnetic fields. The specific magnetic field used for the main physical discussion in Section 5 is presented in Section 4.1.
### The flux function
In cylindrical coordinates the general axisymmetric magnetic field obeying an MHD equilibrium (\(\mu_{0}\mathbf{j}=\nabla\times\mathbf{B}\), \(\nabla p=\mathbf{j}\times\mathbf{B}\)) can be written as [59]
\[\mathbf{B}=\frac{1}{R}\left[I(\psi_{p})\mathbf{\hat{e}}_{\varphi}+\frac{\partial\psi_ {p}}{\partial Z}\mathbf{\hat{e}}_{R}-\frac{\partial\psi_{p}}{\partial R}\mathbf{\hat {e}}_{Z}\right]. \tag{39}\]
Here, \(\psi_{p}\) is the poloidal flux function and \(I(\psi_{p})\) is the current stream function. For the sake of clarity we define the poloidal magnetic field \(\mathbf{B}_{p}=\frac{1}{R}\left(\frac{\partial\psi_{p}}{\partial Z}\mathbf{\hat{e}}_{ R}-\frac{\partial\psi_{p}}{\partial R}\mathbf{\hat{e}}_{Z}\right)\) and the toroidal magnetic field \(\mathbf{B}_{t}=\frac{I}{R}\mathbf{\hat{e}}_{\varphi}\).
Note that with a typically convex function \(\psi_{p}\) (positive second derivative), \(I(\psi_{p})>0\) and the previously defined coordinate system, the field-line winding is a left-handed screw in the positive \(\mathbf{\hat{e}}_{\varphi}\)-direction. Also note that \(\mathbf{B}\times\mathbf{\nabla}\mathbf{B}\) then points down, which for a lower single-null configuration is towards the magnetic X-point, and we have the **favourable** drift direction (in experiments H-mode is reached more easily in this configuration [60, 32, 61]).
We have the contravariant components of \(\mathbf{B}\)
\[B^{R}=\frac{1}{R}\frac{\partial\psi_{p}}{\partial Z},\quad B^{Z}=-\frac{1}{R} \frac{\partial\psi_{p}}{\partial R},\quad B^{\varphi}=\frac{I}{R^{2}} \tag{40}\]
and the covariant components \(B_{R}=B^{R}\), \(B_{Z}=B^{Z}\) and \(B_{\varphi}=R^{2}B^{\varphi}\). By construction we have \(\partial_{\varphi}B=0\) with
\[B=\frac{1}{R}\sqrt{I^{2}+|\nabla\psi_{p}|^{2}}. \tag{41}\]
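Given \(\psi_{p}\), its derivatives and \(I(\psi_{p})\), Eqs. (40) and (41) translate directly into code. A minimal Python sketch (names assumed for illustration):

```python
import numpy as np

def magnetic_field(R, psi_R, psi_Z, I):
    """Contravariant components Eq. (40) and field strength Eq. (41)
    from the flux derivatives psi_R = d(psi_p)/dR, psi_Z = d(psi_p)/dZ
    and the current stream function I = I(psi_p), all evaluated at the
    same points (scalars or arrays)."""
    BR = psi_Z / R                                # Eq. (40)
    BZ = -psi_R / R
    Bphi = I / R**2
    B = np.sqrt(I**2 + psi_R**2 + psi_Z**2) / R   # Eq. (41)
    return BR, BZ, Bphi, B
```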
In FELTOR we have various ways to represent the flux function \(\psi_{p}\) and its derivatives. In this work we use a general solution to the Grad-Shafranov equation using Solov'ev pressure and current profiles [62, 63]
\[\psi_{p}(R,Z) =\mathcal{P}_{\psi}B_{0}R_{0}^{2}\left[A\left(\frac{1}{2}\bar{R}^ {2}\ln\bar{R}-\frac{1}{8}\bar{R}^{4}\right)+\frac{1}{8}\bar{R}^{4}\right.\] \[\left.+\sum_{i=1}^{12}c_{i}\bar{\psi}_{pi}(\bar{R},\bar{Z})\right], \tag{42a}\] \[I(\psi_{p}) =\mathcal{P}_{I}B_{0}R_{0}\sqrt{-2A\frac{\psi_{p}}{\mathcal{P}_{ \psi}B_{0}R_{0}^{2}}+1}, \tag{42b}\]
with \(A\), \(\mathcal{P}_{\psi}\) free constants, \(\mathcal{P}_{I}=\pm\mathcal{P}_{\psi}\) for \(A\neq 0\) and \(\mathcal{P}_{I}\) arbitrary for \(A=0\) (purely toroidal equilibrium current). We introduce \(\bar{R}\equiv R/R_{0}\) and \(\bar{Z}\equiv Z/R_{0}\) where \(R_{0}\) is the major radius and \(B_{0}\) is a reference magnetic field strength. The dimensionless base functions \(\bar{\psi}_{pi}\) are listed in [62].
### Discussion
Since Eq. (42) is given in terms of analytical base functions we can numerically evaluate \(\psi_{p}(R,Z)\) and \(I(\psi_{p})\) and all their derivatives at arbitrary points to machine precision, which is simple to implement and fast to execute. This translates to an exact representation of the magnetic field and related quantities, for example curvature (22), in code. In particular, the X-point(s) and O-point can be determined to machine precision via a few Newton iterations.
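A minimal sketch of such a Newton iteration is given below; the derivative arguments are assumed callables, e.g. wrappers around the analytical derivatives of Eq. (42), and the function name is chosen for illustration.

```python
import numpy as np

def critical_point(psi_R, psi_Z, psi_RR, psi_RZ, psi_ZZ,
                   R_guess, Z_guess, tol=1e-13, maxiter=50):
    """Sketch of the Newton iteration locating O- and X-points, i.e.
    solving grad(psi_p) = 0 with the analytically known Hessian."""
    R, Z = R_guess, Z_guess
    for _ in range(maxiter):
        g = np.array([psi_R(R, Z), psi_Z(R, Z)])
        if np.hypot(*g) < tol:
            break
        H = np.array([[psi_RR(R, Z), psi_RZ(R, Z)],
                      [psi_RZ(R, Z), psi_ZZ(R, Z)]])
        R, Z = np.array([R, Z]) - np.linalg.solve(H, g)
    # det(H) > 0: O-point (extremum); det(H) < 0: X-point (saddle)
    return R, Z
```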
The choice of the coefficients \(c_{i}\) and \(A\) determines the actual form of the magnetic field. We can for example represent single and asymmetric double X-point configurations, force-free states, field reversed configurations and low and high beta tokamak equilibria [62, 63]. The scaling factors \(\mathcal{P}_{\psi}\) and \(\mathcal{P}_{I}\) are mainly introduced to maximize the flexibility e.g. to adapt the solution to experimental equilibria or to reverse the sign of the magnetic field.
If one or more X-points are present, we choose \(c_{1}\) such that \(\psi_{p}(R_{X},Z_{X})=0\) for the X-point closest to the O-point, that is, the separatrix is given by \(\psi_{p}(R,Z)=0\).
We offer several predefined sets of parameters as well as Mathematica and Python scripts to generate or fit coefficients to experimental equilibria in the https://github.com/feltor-dev/magneticfielddb repository. The contained Jupyter Notebooks and Python scripts help with setting up appropriate simulation domains as well as wall and sheath regions \(\chi_{w}\) and \(\chi_{s}\) as presented in Section 4.1. See Appendix B for more details.
## 4 Simulation setup
### The magnetic flux, the wall and the sheath
The first step in setting up a simulation with FELTOR is to choose an appropriate magnetic field. In this
work we choose to model the COMPASS tokamak and fit the magnetic flux function described in [64] with a Solov'ev equilibrium described in Eq. (42). One X-point is situated at \(R_{X}=460\) mm, \(Z_{X}=-330\) mm with \(\psi_{p}(R_{X},Z_{X})=0\) and the O-point is situated at \(R_{O}=568.78\) mm, \(Z_{O}=32.69\) mm with \(\psi_{p,O}:=\psi_{p}(R_{O},Z_{O})=-18.76\rho_{s}R_{O}B_{0}\) (found with a few iterations of a Newton solver). In Fig. 1a we plot the normalized poloidal flux
\[\rho_{p}=\sqrt{\frac{\psi_{p,O}-\psi_{p}}{\psi_{p,O}}}. \tag{43}\]
In Fig. 1b we plot the chosen wall and sheath functions \(\chi_{w}\) and \(\chi_{s}\), which signify the penalization regions for the immersed boundary conditions in Eq. (35) and Eq. (36). The wall region is given simply as a flux aligned region
\[\chi_{w}(R,Z)=\begin{cases}1\text{ if }&\rho_{p}(R,Z)>\rho_{w}\vee\\ &(\rho_{p}(R,Z)<\rho_{F}\wedge Z<Z_{X})\\ 0\text{ else }\end{cases}. \tag{44}\]
Here we choose \(\rho_{w}=1.15\) for the scrape-off layer and the private flux region at \(\rho_{F}=0.97\). For the sheath region we first define an angular distance \(\varphi_{w}\) of each point \((R,Z)\) to the bounding box via the integration of
\[\frac{dR}{d\varphi}=\frac{b^{R}}{b^{\varphi}},\qquad\frac{dZ}{d\varphi}=\frac {b^{Z}}{b^{\varphi}}, \tag{45}\]
with initial condition \((R,Z)\) until \(\left(R(\varphi_{w}),Z(\varphi_{w})\right)\) intersects the bounding box. The intersection can be found with a bisection algorithm. The sheath is then given by
\[\chi_{s}(R,Z):=\begin{cases}1\text{ if }\varphi_{w}(R,Z)>\varphi_{0}\\ 0\text{ else }\end{cases}, \tag{46}\]
where we choose \(\varphi_{0}=7/32\). Note that for numerical reasons we implement a continuously differentiable transition at the boundary of the regions \(\Omega_{w}\) and \(\Omega_{s}\).
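A sketch of how such a smoothed wall mask could look is given below; Eq. (44) itself is the sharp definition, and the smoothstep shape, the assumed transition width alpha and the sharp cut in \(Z\) are illustrative simplifications.

```python
import numpy as np

def smooth_heaviside(x, alpha):
    # one possible C^1 smoothstep of width alpha (assumed shape)
    t = np.clip(x / alpha + 0.5, 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3

def wall_mask(rho_p, Z, Z_X, rho_w=1.15, rho_F=0.97, alpha=0.05):
    """Sketch of the wall function Eq. (44) with a continuously
    differentiable transition at the region boundaries."""
    sol = smooth_heaviside(rho_p - rho_w, alpha)               # rho_p > rho_w
    pfr = smooth_heaviside(rho_F - rho_p, alpha) * (Z < Z_X)   # private flux region
    return np.clip(sol + pfr, 0.0, 1.0)
```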
Both plots in Fig. 1 show the numerical simulation domain in the \(R\)-\(Z\) plane as a rectangle \([R_{\min},R_{\max}]\times[Z_{\min},Z_{\max}]\).
### Initial profiles and sources
To initialize our simulation we choose
\[N(R,Z,\varphi,0)= n_{\text{prof}}(R,Z)\] \[:= (n_{\text{peak}}-n_{\text{sep}})\frac{\psi_{p}(R,Z)}{\psi_{p,O}}+ n_{\text{sep}}, \tag{47}\]
equal for both electrons and ions such that the profile given in [64] is approximately reproduced with a peak density of \(n_{\text{peak}}=8.5\cdot 10^{19}\)m\({}^{-3}\) and a separatrix density of \(n_{\text{sep}}=10^{19}\)m\({}^{-3}\). In the SOL the profile exponentially decreases to the background density of \(n_{\text{min}}=0.2\cdot 10^{19}\)m\({}^{-3}\).
The initial parallel velocity for both electrons and ions is zero everywhere except in the scrape-off layer where it varies linearly between \(\pm\sqrt{(T_{e}+T_{i})/m_{i}}\) with the sheath angle coordinate \(\varphi_{w}\) defined in Eq. (45). This is to conform to the sheath boundary conditions in Eq. (38).
The velocity profile is initially symmetric in \(\varphi\) while the toroidally symmetric density profile is perturbed by small fluctuations in order to trigger turbulence.
We define the source profile in Eq. (31) as
\[n_{\text{s}}(R,Z):= n_{\text{prof}}(R,Z)\,D(R,Z),\qquad D(R,Z):= H_{\alpha}\left(\rho_{p,b}-\rho_{p}(R,Z)\right)H(Z-Z_{X}). \tag{48}\]

We choose \(\rho_{p,b}=0.55\) for the source region, which is depicted as a dashed line in Fig. 1.
### The q-profile
We follow the methods presented in [65] and define the geometric poloidal angle \(\Theta\) as the field-line following parameter around the O-point
\[\Theta=\begin{cases}+\arccos\left[(R-R_{O})/r\right]\text{ for }R\geq R_{O}\\ -\arccos\left[(R-R_{O})/r\right]\text{ for }R<R_{O}\end{cases},\]
with \(r^{2}:=(R-R_{O})^{2}+(Z-Z_{O})^{2}\). With \(\mathbf{B}\) given by Eq. (39) we then have \(B^{\Theta}=\mathbf{B}\cdot\nabla\Theta=-(\psi_{R}(R-R_{O})+\psi_{Z}(Z-Z_{O}))/(r^{2 }R)\), where \(\psi_{R}\) and \(\psi_{Z}\) denote the \(R\)- and \(Z\)-derivatives of \(\psi_{p}\). We can then directly integrate any field line as
\[\frac{\mathrm{d}R}{\mathrm{d}\Theta}=\frac{B^{R}}{B^{\Theta}},\qquad\frac{ \mathrm{d}Z}{\mathrm{d}\Theta}=\frac{B^{Z}}{B^{\Theta}},\qquad\frac{\mathrm{d} \varphi}{\mathrm{d}\Theta}=\frac{B^{\varphi}}{B^{\Theta}},\]
from \(\Theta=0\) to \(\Theta=2\pi\). The safety factor then follows as
\[q\equiv\frac{1}{2\pi}\oint\frac{B^{\varphi}}{B^{\Theta}}\mathrm{d}\Theta. \tag{49}\]
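Eq. (49) is readily evaluated numerically. The following Python sketch integrates the field-line equations in \(\Theta\) with scipy; BR, BZ and Bphi are assumed callables returning the contravariant components of Eq. (40), and the starting point selects the flux surface.

```python
import numpy as np
from scipy.integrate import solve_ivp

def safety_factor(BR, BZ, Bphi, R_O, Z_O, R_start, Z_start):
    """Sketch of Eq. (49): follow a field line once around the O-point
    in the geometric angle Theta and accumulate dphi/dTheta."""
    def rhs(theta, y):
        R, Z, phi = y
        r2 = (R - R_O)**2 + (Z - Z_O)**2
        # B^Theta = B . grad(Theta) for the geometric poloidal angle
        Btheta = (BZ(R, Z) * (R - R_O) - BR(R, Z) * (Z - Z_O)) / r2
        return [BR(R, Z) / Btheta, BZ(R, Z) / Btheta, Bphi(R, Z) / Btheta]
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [R_start, Z_start, 0.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[2, -1] / (2.0 * np.pi)   # q = accumulated phi / (2 pi)
```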
Fig. 2 shows the q-profile of the chosen equilibrium. As expected the q-profile diverges at the separatrix situated at \(\rho_{p}=1\). This is because \(B^{\Theta}=0\) at the X-point and thus the integration in Eq. (49) diverges. At the O-point around \(\rho_{p}=0\) the q-profile converges to a finite value \(q\approx 1.9\). In the domain between \(\rho_{p}=0.4\) and \(\rho_{p}=0.9\) the value of \(q\) lies between \(2\) and \(3\).
### A parameter scan
We setup parameters for in total \(12\) simulations as two sets of \(6\) simulations each. The first set uses \(T_{i}=0\) while the second set uses \(T_{i}=T_{e}\). The
6 simulations within each set vary the dimensionless plasma resistivity \(\eta\) Eq. (27), while keeping the plasma density \(n_{0}=10^{19}\) m\({}^{-3}\) and \(\rho_{s}=1\) mm constant. This is achieved by changing the electron temperature \(T_{e}\) (to set \(\eta\)) and the magnetic field strength \(B_{0}\) (to keep \(\rho_{s}\propto\sqrt{T_{e}}/B_{0}\) constant) as shown in Table 1. This results in a constant value for the plasma beta \(\beta:=n_{0}T_{e}/\big{(}B_{0}^{2}/(2\mu_{0})\big{)}=10^{-4}\). The source strength parameter \(\omega_{s}\) in Eq. (31) is constant for the duration of each simulation and chosen (differently for each simulation) such that the volume integrated source roughly matches the total density flux out of the last closed flux-surface.
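The scaling behind Table 1 can be made explicit: inverting Eq. (27) under the constraint of constant \(\rho_{s}\) gives \(T_{e}\propto\eta^{-1/2}\) and \(B_{0}\propto\eta^{-1/4}\). A small Python sketch, assuming deuterium ions and \(\ln\lambda=10\), approximately reproduces the Table 1 values:

```python
import numpy as np

e, mi = 1.602e-19, 3.344e-27       # SI units; deuterium ions assumed
ln_lambda = 10.0
n0, rho_s = 1e19, 1e-3             # m^-3, m

def scan_point(eta):
    # rho_s = sqrt(mi*Te)/(e*B0) = const  =>  B0 = c * sqrt(Te/eV)
    c = np.sqrt(mi * e) / (e * rho_s)
    # invert Eq. (27): eta = 8.45e-5 * ln_lambda * (n0/1e19) / (c * Te^2)
    Te = np.sqrt(8.45e-5 * ln_lambda * (n0 / 1e19) / (eta * c))
    return Te, c * np.sqrt(Te)

for eta in (1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4):
    Te, B0 = scan_point(eta)
    print(f"eta={eta:.0e}: Te = {Te:6.2f} eV, B0 = {B0:5.2f} T")
```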
We set the dimensionless parallel density diffusion necessary for numerical stability of the FCI scheme to a constant value \(\nu_{\parallel,N}=500\). The perpendicular hyperdiffusion coefficients are set to \(\nu_{\perp,N}=\nu_{\perp,U}=10^{-3}\).
The simulation domain is a rectangle in the \(R\)-\(Z\) plane chosen such that the closed field line region as well as the SOL, wall and sheath regions are captured as displayed in Fig. 1. It is important for the stability of the FCI scheme that the boundary of the wall region does not intersect the boundary of the simulation domain except at the sheath region. The domain is symmetric in the \(\varphi\) direction. The resolution is chosen as 192 cells in \(R\) and 336 cells in \(Z\) direction with 3 polynomial coefficients in each cell in both \(R\) and \(Z\). The number ratio \(N_{R}/N_{Z}\) corresponds approximately
| \(\eta\) | \(B_{0}\)/T | \(T_{e}\)/eV | \(\omega_{s}^{0}\)/kHz | \(\omega_{s}^{1}\)/kHz |
| --- | --- | --- | --- | --- |
| 1.00e-06 | 1.27 | 77.76 | 1.53 | 1.53 |
| 3.00e-06 | 0.97 | 44.90 | 1.39 | 1.39 |
| 1.00e-05 | 0.72 | 24.59 | 1.20 | 1.20 |
| 3.00e-05 | 0.54 | 14.20 | 1.30 | 1.30 |
| 1.00e-04 | 0.40 | 7.78 | 1.35 | 1.93 |
| 3.00e-04 | 0.31 | 4.49 | 2.35 | 2.93 |

Table 1: Parameters corresponding to varying the dimensionless plasma resistivity \(\eta\) Eq. (27) while keeping \(n_{0}=10^{19}\,\mathrm{m}^{-3}\) and \(\rho_{s}=1\) mm constant. This results in constant \(\beta=10^{-4}\) and various \(B_{0}\) and \(T_{e}\) values. The source strength parameter \(\omega_{s}^{0}\) in Eq. (31) corresponds to \(T_{i}=0\) simulations while \(\omega_{s}^{1}\) corresponds to \(T_{i}=T_{e}\) simulations. We select \(B_{0}\propto\eta^{-1/4}\) and \(T_{e}\propto\eta^{-1/2}\).
Figure 1: Calibration of the simulation box. The normalized magnetic flux \(\rho_{p}=\sqrt{(\psi_{p,O}-\psi_{p})/\psi_{p,O}}\) on the left and the wall and sheath regions on the right. The magnetic flux \(\psi_{p}\) is modified to a constant inside the wall region. On the right plot colours range linearly from 0 to 1. Two contour lines indicating the wall at \(\rho_{p}=0.97\) in the private flux region and \(\rho_{p}=1.15\) in the scrape-off layer region are plotted in solid black lines. The separatrix \(\rho_{p}=1\) and the boundary of the source region at \(\rho_{p}=0.55\) in the core are plotted in black dashed lines.
to the aspect ratio of the simulation domain such that the grid cells are square in \(R\)-\(Z\). In \(\varphi\) we choose \(32\) planes. In total we thus have \(576\cdot 1008\cdot 32\approx 2\cdot 10^{7}\) grid points. Each simulation is run to roughly the same end time of \(100\,000\)\(\Omega_{i0}^{-1}\) with exact values displayed in Table 2. The value \(100\,000\) is chosen as a compromise between a reasonable simulation wall time and a long enough, i.e. statistically significant, time series for our analysis in the following Section 5. The end time in units of ms is however different for each simulation and depends on the magnetic field strength corresponding to the chosen resistivity as depicted in Table 1. Since we keep \(\rho_{s}\propto\sqrt{T_{e}}/B_{0}\) constant, changing the electron temperature \(T_{e}\) yields a corresponding change in \(B_{0}\) and thus \(\Omega_{i0}\).
### Performance observations
The given resolution of \(2\cdot 10^{7}\) grid points corresponds to an array size of \(150\)MB for each of the density, velocity and potential variables. With simulation data written to file at every \(150\Omega_{i0}^{-1}\) the total file size of one simulation is about \(500\)GB. The grid size is about a factor \(5-100\) smaller than is currently used for (five-dimensional) gyro-kinetic simulations [66, 67, 68] but is of similar order of magnitude as other fluid-type simulation runs [44, 35].
Our simulations were run on 16 NVidia V100 GPUs (equivalent to 4 nodes on the M100 GPU cluster). In Table 3 we present the average runtime in seconds per \(\Omega_{i0}^{-1}\) for each simulation with the error being the standard deviation. These timings include the times for input/output and diagnostics but exclude the times for initialization and restarting of the code. Typically we achieve a computation time of \(5-7\)s per \(\Omega_{i0}^{-1}\) but outliers at \(4.6\pm 0.6\)s and \(8.3\pm 0.2\)s exist. The differences may be due to slightly different viscosity parameters that we chose to stabilize some simulations and subsequent smaller or larger simulation time steps. The evaluation of a single right hand side of Eqs. (5) and (6) including solutions of all elliptic equations and evaluation of the parallel advection-diffusion terms takes about \(0.20-0.25\) s in all simulations. The polarization equation (8) is solved in typically \(0.05\) s and less than \(0.1\) s. The right hand side has to be evaluated \(3\) times per time step.
As pointed out in our performance study [46] the observed code performance is bound by memory
| \(\eta\) | \(t_{\mathrm{end}}/\Omega_{i0}^{-1}\) (\(T_{i}=0\)) | \(t_{\mathrm{end}}\)/ms (\(T_{i}=0\)) | \(t_{\mathrm{end}}/\Omega_{i0}^{-1}\) (\(T_{i}=T_{e}\)) | \(t_{\mathrm{end}}\)/ms (\(T_{i}=T_{e}\)) |
| --- | --- | --- | --- | --- |
| 1.00e-06 | 110400 | 1.81 | 111800 | 1.83 |
| 3.00e-06 | 110200 | 2.38 | 111200 | 2.40 |
| 1.00e-05 | 97500 | 2.84 | 88800 | 2.59 |
| 3.00e-05 | 100000 | 3.83 | 100000 | 3.83 |
| 1.00e-04 | 89165 | 4.62 | 100000 | 5.18 |
| 3.00e-04 | 100000 | 6.82 | 99800 | 6.80 |

Table 2: Simulation end times in units of \(\Omega_{i0}^{-1}\) and in physical units reached after an equal amount of simulation time for all parameters. Simulations are run on 16 NVidia V100 GPUs.
Figure 2: The q-profile as a function of the normalized poloidal flux \(\rho_{p}\) (43). \(q\) diverges as it approaches \(\rho_{p}=1\) (the separatrix) but converges to a finite value \(q\approx 1.9\) at \(\rho_{p}=0\) (the O-point).
bandwidth and memory latencies. We emphasize that due to our structured grid approach our matrix-vector multiplications are almost as fast as vector additions since the matrix elements can be kept in cache. This, and the large reduction in memory requirements that comes with it, are the main benefits over unstructured grids. Of the total peak performance of 14 400 GB/s our implementation (of vector additions, matrix-vector multiplications and scalar products) reaches on average 70%. We can compare this to the conventional Skylake partition on the Marconi cluster, where one node has a theoretical peak bandwidth of 256 GB/s, of which our implementation on average (vector additions, matrix-vector multiplications, scalar products) achieves 183 GB/s. With 16 nodes we thus have an available throughput of 4096 GB/s, which is a factor 3.5 less than what is available on 4 nodes of the M100 cluster. We see about a factor 3 in practice, i.e. a runtime of 15 s per \(\Omega_{i0}^{-1}\) for the \(\eta=10^{-4}\) simulations and approximately 0.7 s per right hand side evaluation.
## 5 A study of resistivity and temperature
In this Section we analyse the simulations previously setup in Section 4. In Section 5.1 we show selected three-dimensional renderings of the magnetic field, plasma density and parallel current. Following this we establish the flux surface average in Section 5.2 as a diagnostics tool for a numerical verification of the simulations in Section 5.3. We focus on the parallel acceleration in Section 5.4 and mass and energy confinement in Section 5.5.
### Three-dimensional visualisations
Here, we present three-dimensional renderings of the magnetic field and the density and parallel current of the \(\eta=10^{-4}\), \(T_{i}=T_{e}\) simulation. The ParaView visualisation toolkit [69] is used and all results are rendered on a NVidia RTX3090 card. In order to render the \(\varphi\) direction smoothly we face a challenge due to the low resolution of only 32 toroidal planes. To solve the issue we temporarily extend the available data to 384 toroidal planes by interpolating along the magnetic field lines with the methods presented in [65]. This allows for a smooth visualisation of the field-line following structures.
#### 5.1.1 Magnetic field
We begin by showing a three-dimensional rendering of the magnetic streamlines in Fig. 3. We use the streamline tracer module in ParaView [69] to integrate magnetic field lines of Eq. (39) and visualise with the OptiX path tracer using 1000 progressive passes of the ray tracer. A low-opacity iso-contour of \(\rho_{p}=1.10\) is plotted in order to remove spurious visualisation artifacts. A light source is placed approximately in the upper right corner of the viewing space and a flat, white, non-opaque surface is placed at \(Z=-450\) mm in order to aid the lighting of the scene that has an otherwise dark grey background behind the camera. The colour scale is chosen from [71, 70] and is used to represent the magnetic field strength following the "dark-is-more" bias [72] for easier interpretation.
Using ray tracing gives the impression of a photo-realistic image of the magnetic streamlines with an enhanced depth-perception and an easy distinction of inner vs outer streamlines. At the same time, shadows in general and in particular the shadows falling on the "floor" in the lower left corner of the image are visual enhancements that have no actual physical reality.
The streamlines follow a left handed winding with the positive \(\mathbf{B}\) direction clockwise if viewed from the top. Only magnetic streamlines in the scrape-off layer are visible, which originate at the numerical divertor at the bottom. The magnetic field strength is clearly higher on the interior side (high field side) than on the outside (low field side) following the general \(1/R\) dependence of Eq. (41). As mentioned in Section 3 the \(\mathbf{B}\times\mathbf{\nabla}\mathbf{B}\) direction points towards the magnetic X-point and we have a favourable drift direction.
#### 5.1.2 Electron density
The electron density is depicted in Fig. 4. Here, we create an iso-volume for \(n_{e}/n_{0}\geq 0.22\) between the angles 0 and 250\({}^{\circ}\) in the \(\varphi\) direction. This enables the viewer to see both the field-alignment in the scrape-off layer as well as a cross-section of the perpendicular turbulence in the edge and core regions.
As a colour-scale we create a three-colour map with the help of the ColorMoves tool [70] with transitions at 0.8 and between 6 and 7. The three colours can be interpreted as visualisations of scrape-off layer (grey-blue), edge (red-yellow) and core (brown-grey). Here, the core is the region where our particle source is active (cf. the dashed line in Fig. 1). The motivation for choosing such a colour scale for the density is the large data volume spanning almost two orders of magnitude with relatively small fluctuations on top. We follow the colour-name variation metric as promoted by [73], as opposed to, say, a colour scale that varies purely in luminance. The latter would help to visually establish order, that is, darker regions correspond to higher density values. However, we found no single-colour scale that could span the large data volume while maintaining good colour discriminability. We thus sacrifice some uniformity and order at the transition points in favour of a higher discriminative power, i.e. a higher amount of distinct colours. As a result it is not directly intuitive which colour corresponds to higher density values without consulting the colourmap, however, the
turbulent structures in the core and edge as well as the filamentary structures in the scrape-off layer are highly visible.
As was done in Fig. 3 we use the OptiX path tracer in ParaView with 1000 passes to render the final image. As a lighting source we choose a large radius source directed from below the camera to eliminate sharp shadows and increase the contrast between the field-aligned structures in the scrape-off layer. We place a white plane behind the iso-volume (which the camera looks onto) and a light grey coloured background behind the camera. This achieves a uniformly lit scene.
The scene itself shows the largest turbulent fluctuations in the core and edge regions on the low field side, especially at the outboard midplane. Fluctuations on the high field side are smaller in perpendicular extension. This points towards a ballooning mode. Further, we notice that fluctuations are highly elongated vertically at the top of the domain as well as at the bottom around the X-point both in the edge as well as the scrape-off layer. The scrape-off layer fluctuations appear field aligned judging from the form of the contours in between the two poloidal planes.
#### 5.1.3 Parallel current
The next visualisation is the parallel current \(j_{\parallel}=e(N_{i}U_{\parallel,i}-n_{e}u_{\parallel,e})\) in Fig. 5. We create two separate iso-volumes for \(j_{\parallel}\): one for \(j_{\parallel}/(en_{0}c_{s})\geq 0.5\) and one for \(j_{\parallel}/(en_{0}c_{s})\leq-0.5\). Here, we use \(c_{s}=\sqrt{T_{e}/m_{i}}=4.64\cdot 10^{4}\) m/s. Two separate colourmaps are chosen for each region; a blue one for the negative and a red one for the positive values. Both colourmaps begin at \(\pm 0.5\) and are truncated at \(\pm 1\) (actual values lie between \(\pm 4.7\)).
We choose a similar setup as for the density rendering, i.e. a white plane behind the scene with a light grey background behind the camera. A large radius headlight is placed at the camera to illuminate the scene. Again, ray tracing is used to render the final image. In order to guide the viewer we plot a low-opacity iso-contour of \(\rho_{p}=1\) (the separatrix).
The resulting image highlights the localized "field-aligned tubes" in which current flows in the simulation. These tubes have a typical extension of about 5 mm and thus carry a current of approximately \(25en_{0}c_{s}\rho_{s}^{2}\approx 2.5\) A. It is further visible that the current is positive (flow direction clockwise viewed from above) mainly on the high-field side and negative mainly on the low-field
Figure 3: Streamlines of the magnetic field vector \(\mathbf{B}\) integrated and visualised in ParaView [69]. One low-opacity iso-contour of \(\rho_{p}=1.10\) is plotted (corresponding to \(\psi_{p}=4\)). The positive \(\mathbf{B}\) direction is clockwise if viewed from above and the field-line winding is left-handed. \(\mathbf{B}\times\mathbf{\nabla}\mathbf{B}\) points towards the magnetic X-point and we have a favourable drift direction.
side. However, a couple of individual current lines of the opposite signs are discernible in either region. Few current lines exist in the scrape-off layer and only close to the separatrix.
### The flux surface average - profiles and fluctuations
Before we can turn to a verification exercise of our simulations we first need to establish appropriate integration routines. More specifically, we here want to compute so-called flux-surface averages and integrals. The flux-surface average is defined as a differential volume average according to [59]:

\[\left\langle f\right\rangle(\psi_{p}) := \frac{\partial}{\partial v}\int f\,\mathrm{d}V, \tag{50}\] \[v(\psi_{p,0}) := \int H(\psi_{p}(R,Z)-\psi_{p,0})H(Z-Z_{X})\,\mathrm{d}V, \tag{51}\]
where \(H(x)\) is the Heaviside function. In order to accurately integrate Eqs. (50) and (51) we use the methods described in [65]. The first step is to construct a flux aligned coordinate system as we show in Fig. 6.
There are several numerical pitfalls to consider when constructing a flux-aligned grid. As pointed out in References [74, 65] the volume element in flux-aligned grids diverges and care must be taken when constructing such grids close to the X-point. This is especially true if the flux-surface average of the separatrix (or a surface close to it) is to be computed. We follow [75, 74] for the construction of our grid.
In flux-aligned coordinates \(\eta\), \(\zeta\), \(\varphi\) the flux-surface average simplifies to
\[\left\langle f\right\rangle=\frac{1}{2\pi\oint\sqrt{g}\,\mathrm{d}\eta}\int_{0}^{2\pi}\!\oint f(\zeta(\psi_{p}),\eta,\varphi)\sqrt{g}\,\mathrm{d}\eta\,\mathrm{d}\varphi, \tag{52}\]
where \(\sqrt{g}\) is the volume element in the flux aligned coordinate system.
The numerical integration in the \(\varphi\) direction is straightforward. The resulting toroidal average can
Figure 4: The electron density \(n_{e}\) at 5.18ms for \(\eta=10^{-4}\), \(T_{i}=T_{e}=7.8\)eV and \(B_{0}=0.4\)T. We show an iso-volume of \(n_{e}/n_{0}\geq 0.22\) and choose a wave colourmap constructed with the ColorMoves tool from [70] mapped to logarithmic density values. The three colour regions (blue-grey, red-yellow and brown-grey) roughly coincide with the three regions scrape-off layer, edge and core/source region (cf. Fig. 1b)
be interpolated onto the flux-aligned grid displayed in Fig. 6. Then, Eq. (52) can be used to compute the flux surface average. Since the grid in Fig. 6 exists also outside the last closed flux surface, we can use Eq. (52) to compute flux-surface averages in the scrape-off layer as well.
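Once the data lives on the flux-aligned grid, Eq. (52) reduces to weighted sums. A minimal Python sketch, assuming arrays of shape (N_zeta, N_eta, N_phi) and a toroidally symmetric volume element:

```python
import numpy as np

def flux_surface_average(f, sqrtg, d_eta, d_phi):
    """Sketch of Eq. (52) on a flux-aligned grid: f and the volume
    element sqrtg have shape (N_zeta, N_eta, N_phi); sqrtg is assumed
    toroidally symmetric, so one phi-plane suffices for the norm."""
    num = (f * sqrtg).sum(axis=(1, 2)) * d_eta * d_phi
    den = 2.0 * np.pi * sqrtg[:, :, 0].sum(axis=1) * d_eta
    return num / den   # one value per zeta, i.e. per flux surface
```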
In the top half of Fig. 7 we show the flux-surface averages of the density \(\langle n_{e}\rangle\) as a function of \(\rho_{p}\) defined in Eq. (43). In fact, we show a time averaged \(\langle n_{e}\rangle\) profile for all simulations. The average profiles for \(T_{i}=0\) and \(T_{i}=T_{e}\) are visibly very similar. For the high resistivity simulations \(\eta=3\cdot 10^{-4}\) and \(\eta=10^{-4}\) (both \(T_{i}=0\) and \(T_{i}=T_{e}\)) the average profile appears linear in \(\rho_{p}\) up to the separatrix at \(\rho_{p}=1\). The remaining simulations have accumulated density in the core at about \(\rho_{p}<0.4\). This is the region where the source \(S_{n_{e}}\) is active and continuously increases the density, which also translates to a large variation amplitude in the core. The edge and scrape-off layer at \(0.95<\rho_{p}<1.075\) are shown enlarged. The density on the separatrix increases with resistivity from \(0.5\cdot 10^{19}\) m\({}^{-3}\) to about \(1.5\cdot 10^{19}\) m\({}^{-3}\) for both \(T_{i}=0\) and \(T_{i}=T_{e}\) simulations. Afterwards, in the scrape-off layer at \(\rho_{p}>1\), the density sharply drops. Notice that the black dashed line in the enlarged region signifies the minimum density \(n_{e,\min}=0.2\cdot 10^{19}\) m\({}^{-3}\) in Eq. (33). The average densities thus cannot reach below the artificially enforced lower boundary. It may be preferable to run simulations with lower \(n_{e,\min}\) to study whether the lower resistivity simulations converge at a different value; however, then the parallel viscosities \(\nu_{\parallel}\) in Eq. (30) must also be adapted in order not to lower the CFL condition as well.
We define the relative fluctuation amplitude as
\[\sigma_{n_{e}}(\rho_{p},t):=\frac{\sqrt{\left\langle(n_{e}-\langle n_{e}\rangle)^{2}\right\rangle}}{\langle n_{e}\rangle}. \tag{53}\]
In the lower part of Fig. 7 we show the time averaged \(\sigma_{n_{e}}\) for our simulations. Again, both the \(T_{i}=0\) and \(T_{i}=T_{e}\) simulations exhibit similar behavior. The fluctuation levels in the core region lie between \(10^{-3}\) and \(10^{-2}\) at the smallest \(\rho_{p}\) where higher resistivity corresponds to higher fluctuation levels. The relative
Figure 5: The parallel electric current \(j_{\parallel}/(en_{0}c_{s})\) at \(5.18\)ms for \(\eta=10^{-4}\), \(T_{i}=T_{e}=7.8\)eV and \(B_{0}=0.4\)T. We plot two isovolumes \(j_{\parallel}\leq-0.5en_{0}c_{s}\) and \(j_{\parallel}\geq 0.5en_{0}c_{s}\). The colour-scale is cut at \(-1\) and \(1\) respectively. A translucent contour of the separatrix \(\psi_{p}=0\) is shown. Current mainly flows in field-aligned tubes. Each tube has a typical extension of \(5\) mm and thus carries approximately \(25en_{0}c_{s}\rho_{s}^{2}\approx 2.5\) A.
fluctuation amplitudes increase for all simulations to about 15% at the separatrix. There is a sharp increase in fluctuations for \(\rho_{p}>1\) to a maximum of 35% for \(T_{i}=0\) and 40% for \(T_{i}=T_{e}\), visible in the enlarged regions of Fig. 7. Furthermore, between about \(1<\rho_{p}<1.01\) the amplitudes for all simulations overlap before they decrease again at about \(\rho_{p}=1.02\). The small resistivity simulations decrease furthest in fluctuation amplitudes.
The observed radial profiles for density and its fluctuations can be tentatively compared with [76] where a non-isothermal drift-fluid model is used to simulate the turbulent dynamics in a limiter configuration using buffer regions to exclude the core region from the simulation domain. There, the fluctuation level at the separatrix peaks only for small resistivities. Furthermore the separatrix densities are highest for smallest resistivities instead of largest resistivities as in our case. This is likely a consequence of how the source term \(S_{N}\) depends on \(\eta\). In the present case the source strength is adapted (see Table 1) such that the density profiles across simulations remain similar, while [76] keeps an absolute source strength.
### Verification of conservation laws
With a reliable way to compute flux-surface averages and volume integrals we can now turn to defining a suitable error norm for a numerical verification. First, we again emphasize that due to the turbulent nature of our simulations, we cannot show pointwise convergence. In fact, in Reference [46] it is shown that even computational errors on the order of machine precision in two-dimensional simulations exponentially increase to order one within a short period of time. This means that the occasionally used method of manufactured solutions [45, 37, 33, 35] is not suitable for verifying simulation behaviour on a long timescale. We here therefore follow a different strategy where we compute the volume and time integrated error of conservation laws.
Assume that our model equations in Section 2 allow for a local analytical balance equation of the form
\[\sum_{i}t_{i}(R,Z,\varphi,t)=0 \tag{54}\]
that is, a sum of individual terms \(t_{i}\) that balances to zero. First, we define a time average via
\[\langle t_{i}\rangle_{t}:=\frac{1}{\Delta t}\int_{t_{0}}^{t_{1}}t_{i}(R,Z,\varphi,t)\,\mathrm{d}t,\qquad\Delta t:=t_{1}-t_{0}. \tag{55}\]
The time interval \([t_{0},t_{1}]\) in Eq. (55) will in the following Section 5.3.1 be manually defined for each simulation by identifying a saturated turbulence state.
Under a further volume integration we can convert the \(t_{i}\) to
\[T_{i}:=\left(\int_{\Omega}t_{i}(R,Z,\varphi,t)\mathrm{d}V\right)_{t} \tag{56}\]
The spatial integration region in Eq. (56) is chosen as the closed field line region \(\Omega:=\{(R,Z,\varphi):Z>Z_{X}\wedge\rho_{p}(R,Z)<1\}\) and shown in colour in Fig. 6. Note that once we have the flux-surface average \(\langle t_{i}\rangle\) on a sufficiently fine grid in \(\psi_{p}\) we can integrate
\[\int_{\Omega}t_{i}\,\mathrm{d}V=\int\left\langle t_{i}\right\rangle\mathrm{d}v=\int\left\langle t_{i}\right\rangle(\psi_{p})\,\frac{\mathrm{d}v}{\mathrm{d}\psi_{p}}\,\mathrm{d}\psi_{p}.\]
We then have \(\sum_{i}T_{i}=0\) analytically, however, numerically due to discretization errors we usually have
\[\sum_{i}T_{i}^{\mathrm{num}}=E \tag{57}\]
where \(E\) is the total numerical error and \(T_{i}^{\mathrm{num}}\) is the numerical result given by the discrete version of Eq. (56) computed by storing the individual \(t_{i}^{\mathrm{num}}\) in memory during a simulation. We would consider the
Figure 6: The flux-aligned grid (with \(20\times\) reduced resolution to see the grid points) used for the computation of flux-surface averages and flux-volume integration. The closed field line region \(\Omega\) for the verification is shown in blue and contains a volume of \(0.5\) m\({}^{3}\). The grid allows for a definition of a flux-surface average outside the separatrix.
conservation law well fulfilled numerically, if \(E\) is small compared to the \(T_{i}^{\rm num}\).
The error \(E\) consists of the contributions \(E_{i}\) of the errors of each individual term \(E_{i}=T_{i}^{\rm num}-T_{i}\), i.e. \(E=\sum_{i}E_{i}\). We are interested in the error for each term, however, given \(E\) we a priori cannot deduce \(E_{i}\). In order to get an error estimate nevertheless, we here assume that the error contribution \(E_{i}\) of each term is determined by its magnitude \(|T_{i}^{\rm num}|\). We introduce the relative global error
\[\varepsilon:=\frac{E}{\sum_{i}|T_{i}^{\rm num}|} \tag{58}\]
with which we can define
\[E_{i}:=\varepsilon|T_{i}^{\rm num}| \tag{59}\]
The corrected terms should read
\[T_{i}^{\rm corr}:=T_{i}^{\rm num}-E_{i} \tag{60}\]
It is easy to see that
\[\sum_{i}T_{i}^{\rm corr}=0 \tag{61}\]
An interpretation of \(\varepsilon\) is that it signifies "the importance of a physical effect on the global dynamics that was not captured by the numerics". In this sense any error below \(1\%\) can be considered excellent, while anything above merits further discussion.
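As a bookkeeping aid, the correction procedure of Eqs. (57)-(61) condenses into a few lines of Python; the function below is an illustrative sketch operating on the stored, time- and volume-integrated terms, not part of FELTOR itself.

```python
import numpy as np

def corrected_terms(T_num):
    """Sketch of Eqs. (57)-(61): distribute the total numerical error E
    of a balance equation over its terms T_i^num in proportion to their
    magnitudes."""
    T_num = np.asarray(T_num, dtype=float)
    E = T_num.sum()                          # Eq. (57)
    eps = E / np.abs(T_num).sum()            # Eq. (58), relative global error
    T_corr = T_num - eps * np.abs(T_num)     # Eqs. (59) and (60)
    assert np.isclose(T_corr.sum(), 0.0)     # Eq. (61) holds by construction
    return eps, T_corr
```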
We now analyse the mass conservation in Section 5.3.1, the energy theorem in Section 5.3.2, the parallel momentum balance in Section 5.3.3 and the electron force balance in Section 5.3.4. The resulting relative global errors are presented in Fig. 8.
#### 5.3.1 Mass conservation
The electron density equation (5) directly yields the particle conservation
\[\frac{\partial}{\partial t}n_{e}+\nabla\cdot\mathbf{j}_{n_{e}}-\Lambda_{n_{e}}-S_{ n_{e}}=0 \tag{62}\]
with
\[\mathbf{j}_{n_{e}}= \mathbf{j}_{n_{e},E}+\mathbf{j}_{n_{e},C}+\mathbf{j}_{n_{e},\parallel}+\mathbf{j} _{n_{e},A} \tag{63}\] \[\Lambda_{n_{e}}= \Lambda_{n_{e},\perp}+\Lambda_{n_{e},\parallel} \tag{64}\]
where we split the density flux into the \(\mathbf{E}\times\mathbf{B}\) flux \(\mathbf{j}_{n_{e},E}:=n_{e}\mathbf{\hat{b}}\times\nabla\phi/B\), the curvature flux
Figure 7: The time averaged density profiles (top) and the relative fluctuation amplitudes (bottom) for \(T_{i}=0\) (left) and \(T_{i}=T_{e}\) (right) as a function of \(\rho_{p}\) Eq. (43). The separatrix corresponds to \(\rho_{p}=1\). The edge and scrape-off layer regions \(0.95<\rho_{p}<1.075\) are shown enlarged.
\(\mathbf{j}_{n_{e},C}:=-n_{e}T_{e}\mathbf{K}/e-m_{e}n_{e}u_{\parallel,e}^{2}\mathbf{K}_{\nabla\times\mathbf{\hat{b}}}/e\), parallel flux \(\mathbf{j}_{n_{e},\parallel}=n_{e}u_{\parallel,e}\mathbf{\hat{b}}\) and magnetic flutter flux \(\mathbf{j}_{n_{e},A}=n_{e}u_{\parallel,e}\mathbf{b}_{\perp}\). The diffusive part consists of \(\Lambda_{n_{e},\perp}=-\mu_{n_{e},\perp}\Delta_{\perp}^{2}n_{e}\) and \(\Lambda_{n_{e},\parallel}=\mu_{n_{e},\parallel}\Delta_{\parallel}n_{e}\).
In Figure 9 we plot the volume integrated terms of the mass conservation (62) as a function of time for the \(T_{i}=T_{e}\) and \(\eta=10^{-4}\) simulation. First, notice that \(\left\langle\mathbf{\nabla}\cdot\mathbf{j}\right\rangle=\frac{\mathrm{d}}{\mathrm{d}v}\left\langle\mathbf{j}\cdot\mathbf{\nabla}v\right\rangle\) [59] and thus
\[\int_{\Omega}\nabla\cdot\mathbf{j}\mathrm{d}V=\int_{\partial\Omega}\mathbf{j}\cdot \mathbf{dA}=\left\langle\mathbf{j}\cdot\mathbf{\nabla}v\right\rangle|_{\rho_{p}=1}, \tag{65}\]
i.e. the volume integral of divergences equals the total flux out of the last closed flux surface or the average radial flux. We immediately see that the two largest actors in this figure are the \(\mathbf{E}\times\mathbf{B}\) flux \(\left\langle\mathbf{j}_{E}\cdot\mathbf{\nabla}v\right\rangle\) on the last closed flux surface and the density source \(\int S_{n_{e}}\mathrm{d}V\), which is constant throughout the simulation. The time derivative of the total mass fluctuates around zero. Note that the remaining terms including the error given by the sum of all terms \(\sum_{i}t_{i}\) are too small to be visibly different from zero in the plot.
Further, notice that the flux surface average \(\left\langle\mathbf{\nabla}\cdot\left(j_{0}\mathbf{\hat{b}}\right)\right\rangle=\frac{ \mathrm{d}}{\mathrm{d}v}\left\langle j_{0}\mathbf{\hat{b}}\cdot\mathbf{\nabla}v\right\rangle=0\) vanishes for any parallel current \(j_{0}\mathbf{\hat{b}}\). Any deviation from zero is thus purely numerical. This applies in particular to the terms \(\mathbf{\nabla}\cdot\mathbf{j}_{\parallel}\) and \(\Lambda_{n_{e},\parallel}\) in Eq. (62). In our recent work in [42] we individually study the deviations from zero in those terms and find them to be negligibly small. We will thus here and in the following ignore parallel terms accepting that they may contribute to the errors visible in Fig. 8.
From the \(\mathbf{E}\times\mathbf{B}\) flux in Fig. 9 we manually identify a time interval where fluctuations appear around a constant average. We do this for all 12 simulations. This allows us to identify suitable \(t_{0}\) and \(t_{1}=t_{\mathrm{end}}\) in Eq. (56) and thus we can compute the relative global error in Eq. (58). We plot the corrected terms (60) together with error bars from Eq. (59) in Fig. 10. The left plot shows simulations with \(T_{i}=0\) for the various resistivities \(\eta\) and the right plot shows corresponding simulations with \(T_{i}=T_{e}\). We can immediately confirm that the \(\mathbf{E}\times\mathbf{B}\) flux as well as the source term
Figure 8: The relative global errors as defined by Eq. (58) of the terms in the mass conservation in Section 5.3.1, the energy theorem in Section 5.3.2, the parallel momentum balance in Section 5.3.3 and the electron force balance in Section 5.3.4 for \(T_{i}=0\) (left) and \(T_{i}=T_{e}\) (right).
Figure 9: The time evolution of volume integrated terms in the mass conservation equation for \(T_{i}=T_{e}\) and \(\eta=10^{-4}\). The length of the shaded regions signifies the time interval which we consider for our statistics while the widths signify the standard deviations within that region.
are the largest terms for all simulations while the time derivative follows with lesser importance. Note here that the density source strength \(\omega_{s}\) in \(S_{n_{e}}\) in Eq. (31) was chosen differently for each simulation. The magnetic flutter term as well as the curvature flux and the perpendicular diffusion terms have negligible importance on the evolution of the global mass balance. We emphasize that this does not necessarily imply negligible importance on the local dynamics just that the volume integrated mass balance is unaffected.
The relative errors in the terms are invisible in this plot, which is why we separately plot these in Fig. 8. There we see that the relative error of the terms in the mass conservation is at most an excellent 3% for all simulations and below 1% for simulations with \(\eta>10^{-5}\).
#### 5.3.2 Energy theorem
The terms of the energy theorem are
\[\partial_{t}\mathcal{E}+\nabla\cdot\mathbf{j}_{\mathcal{E}}-\Lambda_{\mathcal{E} }-S_{\mathcal{E}}-R_{\mathcal{E}}=0 \tag{66}\]
with
\[\mathcal{E}= T_{e}n_{e}\ln\left(n_{e}/n_{e0}\right)+T_{i}N_{i}\ln\left(N_{i}/n _{e0}\right)\] \[+\frac{1}{2}\mu_{0}\left(\nabla_{\perp}A_{\parallel}\right)^{2}+ \frac{1}{2}m_{i}N_{i}u_{E}^{2}\] \[+\frac{1}{2}m_{e}n_{e}u_{\parallel,e}^{2}+\frac{1}{2}m_{i}N_{i}U _{\parallel,i}^{2}, \tag{67}\]
\[\mathbf{j}_{\mathcal{E}}= \sum_{s}\left[\left(T\ln(N/n_{e0})+\frac{1}{2}mU_{\parallel}^{2}+ q\psi\right)\mathbf{j}_{N}\right.\] \[\left.+\frac{m}{q}TNU_{\parallel}^{2}\mathbf{K}_{\nabla\times \mathbf{\hat{b}}}+TNU_{\parallel}\left(\mathbf{\hat{b}}+\mathbf{b}_{\perp}\right)\right], \tag{68}\]
\[\Lambda_{\mathcal{E}}= \sum_{s}\left[\left(T\left(1+\ln\left(N/n_{e0}\right)\right)+q \psi+\frac{1}{2}mU_{\parallel}^{2}\right)\Lambda_{N}\right]\] \[+mNU_{\parallel}\Lambda_{U} \tag{69}\]
\[S_{\mathcal{E}}= \sum_{s}\left[\left(T\left(1+\ln\left(N/n_{e0}\right)\right)+q \psi-\frac{1}{2}mU_{\parallel}^{2}\right)S_{N}\right] \tag{70}\]
\[R_{\mathcal{E}}= -\eta_{\parallel}e^{2}n_{e}(U_{\parallel,i}-u_{\parallel,e})(N_{i} U_{\parallel,i}-n_{e}u_{\parallel,e}). \tag{71}\]
where in the energy flux \(\mathbf{j}_{\mathcal{E}}\) we neglect terms containing time derivatives of the electric and magnetic potentials and we sum over all species \(s\). The energy density \(\mathcal{E}\) consists of the Helmholtz free energy density for electrons and ions, the \(\mathbf{E}\times\mathbf{B}\) energy density, the parallel energy densities for electrons and ions and the perturbed magnetic field energy density. In \(\Lambda_{\mathcal{E}}\) we insert the dissipative terms of Section 2.3 and use \(\Lambda_{U}:=\Lambda_{mNU}/(mN)-U_{\parallel}\Lambda_{N}/N\).
The dissipation term can be further simplified to
\[\Lambda_{\mathcal{E}}= -\sum_{s}\nabla\cdot\left[\left(T\left(1+\ln\left(N/n_{e0}\right) \right)+q\psi+\frac{1}{2}mU_{\parallel}^{2}\right)\mathbf{j}_{N,\nu}\right]\] \[-\nabla\cdot(U_{\parallel}\tilde{\mathbf{j}}_{mNU,\nu})\] \[+\tilde{\mathbf{j}}_{mNU,\nu}\cdot\nabla U_{\parallel}+\mathbf{j}_{N,\nu }\cdot\nabla(\ln N/n_{e0}-q\psi) \tag{72}\]
where we use \(\nabla\cdot\tilde{\mathbf{j}}_{mNU,\nu}:=\mu_{U,\perp}(-\Delta_{\perp})^{2}u_{\parallel,e}-\mu_{\parallel,e}\Delta_{\parallel}u_{\parallel,e}\). The dissipation term thus consists of a diffusive energy current under a total divergence and a dissipation contribution. Focusing on the parallel diffusion terms we find for the dissipative contribution:
\[\tilde{\mathbf{j}}_{mNU,\nu}\cdot\nabla U_{\parallel}+\mathbf{j}_{N,\nu}\cdot\nabla( \ln(N/n_{e0})-q\psi)=\] \[-\mu_{\parallel,U}(\nabla_{\parallel}U)^{2}-\mu_{\parallel,N} \frac{(\nabla_{\parallel}N)^{2}}{N}-q\mu_{\parallel,N}\nabla_{\parallel}N \nabla_{\parallel}\psi \tag{73}\]
Figure 10: The mass conservation equation (62): volume integrated and time averaged terms Eq. (56) with error bar Eq. (59) for \(T_{i}=0\) (left) and \(T_{i}=T_{e}\) (right). The error bars are too small to be visible in the plot and are separately shown in Fig. 8.
The first two terms are always negative and thus always dissipate energy. The last term containing the potential vanishes under species summation at least to zeroth order with \(n_{e}\approx N_{i}\) and \(\psi_{i}\approx\phi\).
The term \(R_{\mathcal{E}}\) is approximately quadratic in the sense that \(R_{\mathcal{E}}\approx-\eta_{\parallel}j_{\parallel}^{2}\), which is the familiar Joule heating term. Since we have an isothermal model this term appears as an energy dissipation term. The source term \(S_{\mathcal{E}}\) dissipates parallel kinetic energy \(-0.5mU_{\parallel}^{2}S_{N}<0\) but generates free energy \(\ln NS_{N}>0\).
The integration region in time remains unchanged and we can compute the time and volume integrated terms Eq. (56) with error bar Eq. (59) in Fig. 11. The relative errors of the terms must again be read from Fig. 8 and are below \(1\%\) for all simulations. The global relative error in energy is generally a factor \(2-5\) smaller than the error in mass.
In Fig. 11 we see that the energy source \(S_{\mathcal{E}}\) is the largest (and only) negative contributor in the equation. From Eq. (70) we see that it is in fact the density source \(S_{n_{e}}\) that translates to a source of energy. The magnitude of the energy source decreases by approximately a factor \(10\) from smallest to highest resistivity. Since the density source does not vary much in Fig. 10, this is likely a simple consequence of the decreasing electron temperature in our parameter scan in Table 1. The energy source is balanced by the energy flux out of the last closed flux surface \(\mathbf{j}_{\mathcal{E}}\), the parallel energy dissipation \(\Lambda_{\mathcal{E},\parallel}\), the Joule heat \(R_{\mathcal{E}}\), the perpendicular energy dissipation \(\Lambda_{\mathcal{E},\perp}\) and the energy gain \(\partial_{t}\mathcal{E}\). Few clear trends with resistivity can be inferred from the plot. The parallel energy dissipation is systematically larger than the perpendicular energy dissipation. The resistivity term \(R_{\mathcal{E}}\) becomes relatively less important for smaller resistivities \(\eta\) than for higher resistivities. For \(T_{i}=0\) the energy gain \(\partial_{t}\mathcal{E}\) is most important for small resistivities \(\eta<10^{-4}\) but least important else. The energy flux term \(\mathbf{j}_{\mathcal{E}}\) is most important for \(\eta\geq 10^{-4}\) but small compared to the other terms for \(\eta<10^{-5}\).
#### 5.3.3 Parallel momentum balance
In the parallel momentum equation (6) for ions we insert the mirror force Eq. (12) and use \(-(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\mathbf{\nabla}\ln B=\mathbf{\nabla}\cdot(\mathbf{\hat {b}}+\mathbf{b}_{\perp})\) to get
\[\frac{\partial}{\partial t} \left(m_{i}N_{i}U_{\parallel,i}\right)+eN_{i}\frac{\partial}{ \partial t}A_{\parallel}+\mathbf{\nabla}\cdot\mathbf{J}_{mNU,i}\] \[+T_{i}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\mathbf{\nabla}N_{i}+\frac{m _{i}}{e}N_{i}U_{\parallel,i}T_{i}\mathbf{K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}}\cdot \mathbf{\nabla}\ln B\] \[-F_{mNU,\psi}+R_{\parallel,e}-\Lambda_{mNU,i}=0, \tag{74}\]
with ion momentum current
\[\mathbf{J}_{mNU,i}:= \mathbf{j}_{mNU,\parallel}+\mathbf{j}_{mNU,A}+\mathbf{j}_{mNU,E}+\mathbf{j}_{mNU,C}, \tag{75}\] \[\mathbf{j}_{mNU,\parallel}:= m_{i}N_{i}U_{\parallel,i}^{2}\mathbf{\hat{b}},\] \[\mathbf{j}_{mNU,A}:= m_{i}N_{i}U_{\parallel,i}^{2}\mathbf{b}_{\perp},\] \[\mathbf{j}_{mNU,E}:= m_{i}N_{i}U_{\parallel,i}\frac{\mathbf{\hat{b}}\times\mathbf{\nabla} \psi}{B},\] \[\mathbf{j}_{mNU,C}:= \frac{m_{i}}{e}U_{\parallel,i}N_{i}\left(3T_{i}+m_{i}U_{ \parallel,i}^{2}\right)\mathbf{K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}}\] \[+\frac{m_{i}}{e}U_{\parallel,i}N_{i}T_{i}\mathbf{K}_{\mathbf{\nabla}B},\]
as well as resistivity term and the parallel electric force
\[R_{\parallel,e}:= \eta_{\parallel}e^{2}n_{e}(N_{i}U_{\parallel,i}-n_{e}u_{ \parallel,e}), \tag{76}\] \[F_{mNU,\psi}= -eN_{i}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\mathbf{\nabla}\psi\] \[-m_{i}N_{i}U_{\parallel,i}\mathbf{K}_{\mathbf{\nabla}\times\mathbf{\hat{b}}} \cdot\mathbf{\nabla}\psi. \tag{77}\]
Note that the total divergences \(\mathbf{\nabla}\cdot\mathbf{j}_{mNU,\parallel}\) and \(\Lambda_{mNU,\parallel}\), the parallel flux and viscosity terms, again vanish exactly under the flux-surface average. We plot the terms of the ion momentum equation in the top half of Fig. 12. Again, the error bars are invisible and are separately plotted in Fig. 8. There we find relative errors for \(T_{i}=T_{e}\) between \(10^{-3}\) and \(3\cdot 10^{-2}\). Each term of the ion momentum equation thus has a relative error of at most \(3\%\). This is true also for the \(T_{i}=0\) and \(\eta>10^{-5}\) simulations. However, for \(T_{i}=0\) and \(\eta\leq 10^{-5}\) the relative error climbs to about \(10\%\). This can be attributed to the smallness of the terms in Fig. 12, i.e. the absolute error of the equation remains the same across simulations but the term \(\sum_{i}|T_{i}^{\mathrm{num}}|\) in Eq. (58) is small for \(T_{i}=0\) and small \(\eta\).
In Fig. 12 the largest positive term is the parallel electric force \(eN_{i}\nabla_{\parallel}\psi\). Negative contributions are added by the gauge term \(eN_{i}\partial_{t}A_{\parallel}\) and the magnetic flutter term \(eN_{i}\mathbf{b}_{\perp}\cdot\mathbf{\nabla}\psi\). The resistivity term \(R_{\parallel,e}\), as expected, makes a significant contribution only for large \(\eta>10^{-5}\) for both \(T_{i}=0\) as well as \(T_{i}=T_{e}\). The \(\mathbf{E}\times\mathbf{B}\) flux is the final significant term and decreases in magnitude with \(\eta\). The absolute value is however larger for \(T_{i}=T_{e}\) than for \(T_{i}=0\).
For \(T_{i}=T_{e}\) and small resistivities the term \(m_{i}\partial_{t}N_{i}U_{\parallel,i}\) is the largest positive term. This indicates positive acceleration, while for large resistivities \(\eta>3\cdot 10^{-5}\) there is acceleration in the opposite direction. For \(T_{i}=0\) the same trend can be observed, however, the magnitude of the term is about a factor \(10\) smaller than for the \(T_{i}=T_{e}\) simulations. We will discuss this further in Section 5.4.
#### 5.3.4 Parallel electron force balance
The parallel electron momentum equation is given by Eq. (74) with electron instead of ion labels. In a plot of the terms analogous to the ion momentum plot Fig. 12 (top) it
turns out that most of the terms are very close to zero. We thus gather only the dominant terms in the electron momentum equation neglecting all terms proportional to the electron mass with \(m_{e}=0\). This leaves the parallel force balance
\[-T_{e}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla n_{e}\] \[+en_{e}\left(\left(\mathbf{\hat{b}}+\mathbf{b}_{\perp}\right)\cdot\nabla \phi+\frac{\partial A_{\parallel}}{\partial t}\right)\] \[+R_{\parallel,e}\approx 0 \tag{78}\]
In the bottom half of Fig. 12 we plot the terms of the parallel force balance. The relative global error of this equation is generally the smallest among all the equations that we test. In Fig. 8 we see that the error is of excellent orders \(10^{-4}\) and \(10^{-3}\), which lies in the range of the value for \(m_{e}/m_{i}=2.7\cdot 10^{-4}\). This confirms that at least under volume integration Eq. (78) is very well fulfilled even if it is not analytically exact.
Analogous to the ion momentum equation the largest term in the electron force balance is the parallel electric force \(en_{e}\nabla_{\parallel}\phi\). Notice here that the colours of Fig. 12 (top) and 12 (bottom) coincide for analogous terms. In fact, visually the terms \(en_{e}\nabla_{\parallel}\phi\), \(en_{e}\mathbf{b}_{\perp}\cdot\nabla\phi\) and \(en_{e}\partial_{t}A_{\parallel}\), i.e. all terms of the electric field are indistinguishable from \(eN_{i}\partial_{t}A_{\parallel}\), \(eN_{i}\mathbf{b}_{\perp}\cdot\nabla\psi\) and \(eN_{i}\nabla_{\parallel}\psi\). We will use this to further study the total momentum equation in Section 5.4.
### Parallel Acceleration
Fig. 12 is visually overburdened due to the number of displayed terms and thus hard to physically interpret further. Thus, we here simplify the discussion by focusing on the total momentum balance. First, we see in Fig. 12 that the electron and ion components of the electric field and the resistivity are visually equal. Neglecting those terms we sum the ion and electron momentum equations to get
\[m_{i}\frac{\partial}{\partial t}N_{i}U_{\parallel,i}+\nabla\cdot(\mathbf{j}_{mNU,E}+\mathbf{j}_{mNU,C})+m_{i}N_{i}U_{\parallel,i}\mathbf{K}_{\nabla\times\mathbf{b}}\cdot\nabla\psi+(T_{e}+T_{i})(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla n_{e}\approx 0 \tag{79}\]
We further neglect the terms \(\nabla\cdot\mathbf{j}_{mNU,A}\) and \(\Lambda_{mNU,\perp}\) and approximate \(T_{i}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla N_{i}\approx T_{i}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla n_{e}\). The result is shown in Fig. 13.
The error bars in Fig. 13 are visible in particular in the \(T_{i}=0\) plot; however, the plot is easier to interpret than Fig. 12. We now clearly see the positive acceleration in the \(T_{i}=T_{e}\) plot for \(\eta\leq 10^{-4}\). For \(\eta\geq 10^{-4}\) the parallel acceleration is negative. The \(T_{i}=0\) plot shows the same trends, but the acceleration is more than a factor 10 smaller than for \(T_{i}=T_{e}\).
Four candidates explain the observed accelerations. The \(\mathbf{E}\times\mathbf{B}\) flux of parallel momentum is negative signifying that positive momentum is lost to the plasma (or negative momentum enters the plasma) via the radial transport. The \(\mathbf{E}\times\mathbf{B}\) flux decreases in magnitude with \(\eta\) for both \(T_{i}=0\) and \(T_{i}=T_{e}\) but is about a factor \(2-4\) larger for \(T_{i}=T_{e}\) than for \(T_{i}=0\). For \(T_{i}=0\) the two terms \(\mathbf{\nabla}\cdot\mathbf{j}_{mNU,C}\) and \(m_{i}N_{i}U_{\parallel,i}\mathbf{K}_{\nabla\times\mathbf{b}}\cdot\nabla\psi\) are close to zero for all \(\eta\). The only remaining term for \(T_{i}=0\) is thus the parallel gradient \(T_{e}(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla n_{e}\), which remains roughly constant in \(\eta\).
For \(T_{e}=T_{i}\) the term \((T_{e}+T_{i})(\mathbf{\hat{b}}+\mathbf{b}_{\perp})\cdot\nabla n_{e}\) is positive but much smaller than the curvature contribution. The second curvature term \(m_{i}N_{i}U_{\parallel,i}\mathbf{K}_{\nabla\times\mathbf{b}}\cdot\nabla\psi\) is strongly negative for \(\eta<10^{-4}\) but jumps to a positive contribution at \(\eta=10^{-4}\), thus
Figure 11: Energy conservation equation (67): the terms Eq. (56) with error bar Eq. (59) for \(T_{i}=0\) (left) and \(T_{i}=T_{e}\) (right). The error bars are too small to be visible in the plot and are separately shown in Fig. 8.
facilitating the associated negative acceleration. The term \(\nabla\cdot\mathbf{j}_{mNU,C}\) in Fig. 13 represents the total flux of ion momentum through the last closed flux surface by curvature drifts, while the term \(m_{i}N_{i}U_{\parallel,i}\mathbf{K}_{\nabla\times\mathbf{b}}\cdot\nabla\psi\) appears as a drift correction to the parallel electric force term (11). In our previous theoretical analysis both curvature terms were neglected as small [77], but for \(T_{i}=T_{e}\) each term contributes with a magnitude similar to the radial \(\mathbf{E}\times\mathbf{B}\) momentum flux.
### Mass and energy confinement times
From our analysis of the mass conservation equation in Fig. 10 and the energy conservation equation in Fig. 11 it is straightforward to extract confinement times. As we explained before, the volume integral of \(\nabla\cdot\mathbf{j}\) yields the total flux out of the closed-fieldline region, \(\int\mathbf{j}\cdot\mathrm{d}\mathbf{A}\). We thus start with the definition of the total particle number and energy within the confined region
\[M(t) =\int_{\Omega}n_{e}\,\mathrm{dV} \tag{80}\] \[E(t) =\int_{\Omega}\mathcal{E}\,\mathrm{dV} \tag{81}\]
We can then compare these with the total loss of particles and energy. The particle loss is simply the total flux \(\mathbf{j}_{n_{e}}\) (Eq. (63)) integrated over the last closed flux surface. We can neglect the diffusive transport, which is close to zero in Fig. 10. The losses of energy consist of the energy flux out of the last closed flux surface, but also of the energy dissipation through
Figure 12: The parallel momentum balance (top) Eq. (74) and the parallel electron force balance (bottom) Eq. (78): the terms Eq. (56) with error bar Eq. (59) for \(T_{i}=0\) (left) and \(T_{i}=T_{e}\) (right). The error bars are too small to be visible in the plot and are separately shown in Fig. 8.
diffusion and the resistivity. We thus define
\[\tau_{M}:=\frac{\langle M\rangle_{t}}{\left\langle\int_{\mathrm{LCFS}}\mathbf{j}_{n_{e}}\cdot\mathrm{d}\mathbf{A}\right\rangle_{t}} \tag{82}\] \[\tau_{E}:=\frac{\langle E\rangle_{t}}{\left\langle\int_{\mathrm{LCFS}}\mathbf{j}_{\mathcal{E}}\cdot\mathrm{d}\mathbf{A}-\int_{\Omega}(\Lambda_{\mathcal{E}}+R_{\mathcal{E}})\,\mathrm{dV}\right\rangle_{t}} \tag{83}\]
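In practice, Eqs. (82) and (83) amount to dividing time-averaged content by a time-averaged loss rate. The following minimal Python sketch illustrates this, interpreting all inputs as one-dimensional time series; the variable names are illustrative placeholders rather than FELTOR diagnostics.

```python
import numpy as np

def confinement_times(M, E, flux_ne, flux_E, diss_E):
    """Confinement times following Eqs. (82) and (83).
    M, E     : particle number / energy inside the confined region, vs time
    flux_ne  : particle flux integrated over the LCFS, vs time
    flux_E   : energy flux integrated over the LCFS, vs time
    diss_E   : volume-integrated dissipation (Lambda_E + R_E), vs time
    """
    tau_M = np.mean(M) / np.mean(flux_ne)
    tau_E = np.mean(E) / np.mean(flux_E - diss_E)
    return tau_M, tau_E
```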
In Fig. 14 and 15 we present the resulting values for our simulations. Note that the total particle number \(\left<M\right>_{t}=(2.3\pm 0.1)\cdot 10^{19}\) is roughly constant for all simulations. The error bars are computed from the fluctuation amplitudes of all quantities in Eqs. (82) and (83). The relative numerical errors are negligible at \(1\%\) as established in Section 5.3. Two regimes are visible in both plots with a transition at \(\eta_{\mathrm{crit}}\approx 5\cdot 10^{-5}\) for both \(T_{i}=0\) as well as \(T_{i}=T_{e}\).
The mass confinement times in Fig. 14 reach roughly constant values for \(\eta<3\cdot 10^{-5}\), while for larger resistivities the confinement decreases with increasing \(\eta\). The drop in mass confinement above the critical \(\eta\) could be related to the discussion of the density limit [78, 79] in the operational space of tokamaks. The constant regime should be regarded tentatively, as the fluctuations are particularly large in this regime, especially for \(T_{i}=T_{e}\). The values for \(T_{i}=0\) are a factor \(\sqrt{1+T_{i}/T_{e}}\) larger than the ones for \(T_{i}=T_{e}\) within the error bars. We can tentatively fit a power law of
\[\tau_{M}=\frac{c_{M}(n_{0},\rho_{s})}{\sqrt{1+T_{i}/T_{e}}}\begin{cases}1& \text{ for }\eta<5\cdot 10^{-5}\\ \eta^{-1/3}&\text{ for }\eta>5\cdot 10^{-5}\end{cases} \tag{84}\]
where \(c_{M}(n_{0},\rho_{s})\) signifies the unknown dependency on the parameters \(n_{0}\) and \(\rho_{s}\) that we kept constant during our parameter scan. We remind the reader here that the values for both \(T_{e}\) and \(B_{0}\) decrease for increasing \(\eta\) in our parameter scan as seen in Table 1.
For the energy we see a clear maximum in the confinement time at \(\eta=3\cdot 10^{-5}\). The fluctuations are systematically smaller for the energy confinement times than for the particle confinement times. However, the energy confinement times are also approximately a factor of 100 smaller than the mass confinement times. This may be due to the fact that we use an isothermal model, in which Joule heat is not converted into an increase in temperature but is instead lost to the system. A tentative fit reveals
\[\tau_{E}=\frac{c_{E}(n_{0},\rho_{s})}{\sqrt{1+T_{i}/T_{e}}}\begin{cases}\eta^{ +1/4}\text{ for }\eta<3.5\cdot 10^{-5}\\ \eta^{-1/3}\text{ for }\eta>3.5\cdot 10^{-5}\end{cases} \tag{85}\]
where similar to Eq. (84) the factor \(c_{E}(n_{0},\rho_{s})\) encapsulates a yet unknown dependence on the parameters \(n_{0}\) and \(\rho_{s}\).
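A fit of this kind can be obtained, for instance, with a least-squares fit of a broken power law to the measured confinement times. The sketch below is a non-authoritative illustration: it fits the \(\tau_{E}\) scaling of Eq. (85) with the break point fixed at \(3.5\cdot 10^{-5}\); the data arrays are placeholders, and making the two branches continuous at the break is one possible parameterisation, not necessarily the one used for Fig. 15.

```python
import numpy as np
from scipy.optimize import curve_fit

ETA_C = 3.5e-5  # break point of Eq. (85)

def broken_power_law(eta, c):
    """tau_E(eta) with exponents +1/4 and -1/3 as in Eq. (85),
    parameterised to be continuous at eta = ETA_C; the amplitude c
    absorbs the unknown factor c_E(n0, rho_s)/sqrt(1 + Ti/Te)."""
    low = c * (eta / ETA_C) ** 0.25
    high = c * (eta / ETA_C) ** (-1.0 / 3.0)
    return np.where(eta < ETA_C, low, high)

# Placeholder data standing in for the measured confinement times.
eta = np.array([1e-6, 1e-5, 3e-5, 1e-4, 3e-4])
tau_E = np.array([0.41, 0.73, 0.96, 0.70, 0.49])
popt, pcov = curve_fit(broken_power_law, eta, tau_E, p0=[1.0])
print("amplitude c =", popt[0], "+/-", np.sqrt(pcov[0, 0]))
```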
Figure 14: The mass confinement times \(\tau_{M}\) Eq. (82). The fit is given by Eq. (84).
Figure 13: The sum of electron force balance and the parallel ion momentum equation (Fig. 12) neglecting small terms. The summed electric force is close to zero and drops out as does the resistivity. The error bars in the \(T_{i}=0\) (left) plot become visible for \(\eta\leq 3\cdot 10^{-5}\) while staying invisible for \(T_{i}=T_{e}\) (right).
The existence of a critical value of the plasma resistivity \(\eta_{\mathrm{crit}}\approx 5\cdot 10^{-5}\) for both mass and energy confinement points towards two different turbulent regimes above and below the critical value. Various candidates are discussed in the literature, with the most likely ones being drift-wave turbulence for small \(\eta\) and resistive-ballooning-type turbulence for high \(\eta\) [80, 81, 79]. According to Reference [81] the transition between the two regimes happens at the resistive ballooning threshold \(\alpha_{t,\mathrm{crit}}=1\) with turbulence parameter \(\alpha_{t}\coloneqq\eta q^{2}R_{0}/\rho_{s}\approx 5\cdot 10^{3}\,\eta\). With \(\eta_{\mathrm{crit}}=5\cdot 10^{-5}\) we obtain \(\alpha_{t,\mathrm{crit,num}}\approx 0.25\), which is only a factor 4 away from the theoretical prediction. The difference may be explained by geometrical factors like the presence of the X-point.
There is, however, an apparent discrepancy in this explanation, insofar as the transport in drift-wave turbulence is reduced for small \(\eta\) (converging to the adiabatic case), so that the confinement time should increase for decreasing \(\eta\) instead of remaining constant. An explanation for the observed plateau in the mass confinement time could be so-called reactive instabilities, which are independent of \(\eta\) and are due to a finite electron inertia [2]. Reactive instabilities are unphysical insofar as they are an artefact of an isothermal gyro-fluid model and have no gyro-kinetic counterpart, where Landau damping counteracts the effect of electron inertia. Note that this does not contradict Fig. 12, where the electron inertia effect vanishes under volume integration. Locally, the electron inertia may still be important.
## 6 Conclusion
We present a new version of the three-dimensional gyro-fluid turbulence code FELTOR. 12 simulations covering several milliseconds with different values of plasma resistivity and ion temperature and fixed values of plasma density and gyro-radius are set up, analysed and discussed. An efficient implementation on GPUs allows for simulation runtimes of about 1 week per simulation. FELTOR is verified using volume- and time-integrated conservation laws: mass, energy, momentum and force balance. Relative errors are generally below 1% for energy conservation and force balance, while for mass and momentum conservation the errors climb to about 3%, as seen in Fig. 8. Only in the ion momentum balance, for vanishing ion temperature and small resistivity, do we see relative errors of about 10%; this is explained by the smallness of the parallel acceleration compared to the \(T_{i}=T_{e}\) simulations while the absolute errors remain the same.
We systematically investigate the importance of the terms in the parallel momentum generation, where we find that the direction of acceleration reverses with increasing resistivity. This is caused mainly by an interplay of decreasing \(\mathbf{E}\times\mathbf{B}\) momentum transport and curvature drifts across the separatrix. The analysis of the momentum density \(m_{i}N_{i}U_{\parallel,i}\) is related to intrinsic toroidal rotation in tokamaks and the angular momentum density \(m_{i}N_{i}U_{\parallel,i}R\) [77, 82]. A detailed analysis of rotation profiles and the angular momentum balance is postponed to future work.
Similar transitions from a low resistivity regime to a high resistivity regime happen for the mass and energy confinement times. Beyond the critical resistivity the mass and energy confinement decrease with increasing resistivity. Below it, the mass confinement remains roughly constant, while the energy confinement decreases with decreasing resistivity. This behaviour could be explained by so-called reactive instabilities, which are an artefact of electron inertia in isothermal gyro-fluid models and have no gyro-kinetic counterpart. A dynamic electron temperature should help counteract this effect in future works. The transition from drift-wave turbulence to resistive ballooning roughly coincides with the value predicted by the literature. Further parameter studies in \(\rho_{s}\) and \(n_{0}\) need to clarify the unknown dependence factors \(c_{M}(n_{0},\rho_{s})\) and \(c_{E}(n_{0},\rho_{s})\) in the observed scaling laws for \(\tau_{M}(\eta)\) (84) and \(\tau_{E}(\eta)\) (85).
The capability of running numerically stable simulations for a set of different parameters with FELTOR is an important milestone. We offer a first high level analysis of the run simulations and quantify numerical errors, leaving many questions open for future work as outlined above. Furthermore, various physical model improvements can be added fairly straightforwardly within the FELTOR framework. These include for example, dynamic temperature equations [15], plasma-neutral collisions [25], arbitrary order polarisation terms [19, 16] and more.
Figure 15: The energy confinement times \(\tau_{E}\) Eq. (83). The fit is given by Eq. (85).
## Acknowledgements
We thank A. Kendl for fruitful discussions. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was supported by the UiT Aurora Centre Program, UiT The Arctic University of Norway (2020). This research was funded in whole or in part by the Austrian Science Fund (FWF) [P 34241-N]. For the purpose of Open Access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. This work was supported by a research grant (15483) from VILLUM Fonden, Denmark.
## Appendix A General magnetic field expressions
We assume a three-dimensional flat space with arbitrary coordinate system \(\mathbf{x}:=\{x_{0},x_{1},x_{2}\}\), metric tensor \(g\) and volume element \(\sqrt{g}:=\sqrt{\det g}\). Given a vector field \(\mathbf{B}(\mathbf{x})\) with unit vector \(\mathbf{\hat{b}}(\mathbf{x}):=(\mathbf{B}/B)(\mathbf{x})\) we can define various differential operations in Table 11.

Explicit expressions for these operations depend on the choice of the magnetic field and the underlying coordinate system. Note that we have
\[h^{2}=h, \tag{11}\]
\[\nabla\cdot\mathbf{K}_{\nabla\times\hat{b}} =-\nabla\cdot\mathbf{K}_{\nabla B}=-\mathbf{K}_{\nabla\times\hat{b}}\cdot\nabla\ln B, \tag{12}\] \[\nabla\cdot\mathbf{K} =0,\] (13) \[\mathbf{K} =\mathbf{K}_{\nabla B}+\mathbf{K}_{\nabla\times\hat{b}},\] (14) \[\mathbf{K}_{\nabla\times\hat{b}}-\mathbf{K}_{\nabla B} =\frac{1}{B^{2}}(\nabla\times\mathbf{B}),\] (15) \[\nabla\cdot\left(\frac{\mathbf{\hat{b}}\times\nabla f}{B}\right) =\mathbf{K}\cdot\nabla f,\] (16) \[\Delta_{\perp}f =-\nabla_{\perp}^{\dagger}\cdot\nabla_{\perp}f,\] (17) \[\nabla_{\parallel}\ln B =-\nabla\cdot\mathbf{\hat{b}}. \tag{18}\]
The last equality holds with \(\nabla\cdot\mathbf{B}=0\). Furthermore, we have
\[\mathbf{\hat{b}}\cdot(\nabla f\times\nabla g)=b_{i}\varepsilon^{ijk}\partial_{j}f \partial_{k}g/\sqrt{g}, \tag{19}\]
In any arbitrary coordinate system we have
\[(\nabla f)^{i}=g^{ij}\partial_{j}f, \nabla\cdot\mathbf{v}=\frac{1}{\sqrt{g}}\partial_{i}\left(\sqrt{g}v^ {i}\right),\] \[(\mathbf{v}\times\mathbf{w})^{i}=\frac{1}{\sqrt{g}}\varepsilon^{ijk}v_{j }w_{k}. \tag{20}\]
with \(b^{i}\) the contra- and \(b_{i}\) the co-variant components of \(\mathbf{\hat{b}}\), \(\varepsilon^{ijk}\) the Levi-Civita symbols and \(g^{ij}\) the contra-variant elements of the metric tensor.
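As a concrete illustration of the coordinate-free expressions in Eq. (20), the following sketch verifies the gradient and divergence formulas symbolically in cylindrical coordinates \((R,Z,\varphi)\) with metric \(g=\mathrm{diag}(1,1,R^{2})\); the choice of coordinates and test function is ours, for illustration only.

```python
import sympy as sp

# Cylindrical coordinates (R, Z, phi) with metric g = diag(1, 1, R^2),
# a common choice for tokamak geometry; sqrt(g) = R.
R, Z, phi = sp.symbols('R Z phi', positive=True)
x = (R, Z, phi)
g = sp.diag(1, 1, R**2)
g_inv = g.inv()
sqrtg = sp.sqrt(g.det())  # = R

def grad(f):
    """(nabla f)^i = g^{ij} d_j f, cf. Eq. (20)."""
    return [sum(g_inv[i, j] * sp.diff(f, x[j]) for j in range(3))
            for i in range(3)]

def div(v):
    """nabla . v = (1/sqrt(g)) d_i (sqrt(g) v^i), cf. Eq. (20)."""
    return sp.simplify(sum(sp.diff(sqrtg * v[i], x[i]) for i in range(3)) / sqrtg)

f = R**2 * sp.cos(phi)
print(grad(f))        # [2*R*cos(phi), 0, -sin(phi)]
print(div(grad(f)))   # 3*cos(phi), the standard cylindrical Laplacian of f
```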
## Appendix B Data access
The FELTOR code is available freely on GitHub at [https://github.com/feltor-dev/feltor](https://github.com/feltor-dev/feltor) with the latest release tracked on Zenodo [47]. It includes the dg library and the three-dimensional code used for this paper. The magnetic field equilibrium, wall and sheath domains and the simulation box are set up using the previously mentioned [https://github.com/feltor-dev/magneticfielddb](https://github.com/feltor-dev/magneticfielddb) Python repository. The parameter scan is set up using [https://github.com/mwiesenberger/feltorutilities](https://github.com/mwiesenberger/feltorutilities), which in turn is based on the simplesimdb Python package developed at [https://github.com/mwiesenberger/simplesimdb](https://github.com/mwiesenberger/simplesimdb). Simplesimdb is a free simulation database manager in Python that allows one to run/submit, access and manage simulations using a unified Python interface. In order to help analyse the simulation data in Python we use xFeltor [https://github.com/feltor-dev/xFELTOR](https://github.com/feltor-dev/xFELTOR), an interface to the xarray Python package, and pyFeltor [https://github.com/feltor-dev/pyFeltor](https://github.com/feltor-dev/pyFeltor), an implementation of basic dG numerical methods in Python. All three-dimensional renderings were set up in ParaView [69]; the remaining analysis is available as Jupyter Notebooks at [https://github.com/mwiesenberger/data-analysis-3d](https://github.com/mwiesenberger/data-analysis-3d).
|
2309.04461 | Measuring and Improving Chain-of-Thought Reasoning in Vision-Language
Models | Vision-language models (VLMs) have recently demonstrated strong efficacy as
visual assistants that can parse natural queries about the visual content and
generate human-like outputs. In this work, we explore the ability of these
models to demonstrate human-like reasoning based on the perceived information.
To address a crucial concern regarding the extent to which their reasoning
capabilities are fully consistent and grounded, we also measure the reasoning
consistency of these models. We achieve this by proposing a chain-of-thought
(CoT) based consistency measure. However, such an evaluation requires a
benchmark that encompasses both high-level inference and detailed reasoning
chains, which is costly. We tackle this challenge by proposing a
LLM-Human-in-the-Loop pipeline, which notably reduces cost while simultaneously
ensuring the generation of a high-quality dataset. Based on this pipeline and
the existing coarse-grained annotated dataset, we build the CURE benchmark to
measure both the zero-shot reasoning performance and consistency of VLMs. We
evaluate existing state-of-the-art VLMs, and find that even the best-performing
model is unable to demonstrate strong visual reasoning capabilities and
consistency, indicating that substantial efforts are required to enable VLMs to
perform visual reasoning as systematically and consistently as humans. As an
early step, we propose a two-stage training framework aimed at improving both
the reasoning performance and consistency of VLMs. The first stage involves
employing supervised fine-tuning of VLMs using step-by-step reasoning samples
automatically generated by LLMs. In the second stage, we further augment the
training process by incorporating feedback provided by LLMs to produce
reasoning chains that are highly consistent and grounded. We empirically
highlight the effectiveness of our framework in both reasoning performance and
consistency. | Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, Ajay Divakaran | 2023-09-08T17:49:44Z | http://arxiv.org/abs/2309.04461v2 | # Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
###### Abstract
Vision-language models (VLMs) have recently demonstrated strong efficacy as visual assistants that can parse natural queries about the visual content and generate human-like outputs. In this work, we explore the ability of these models to demonstrate human-like reasoning based on the perceived information. To address a crucial concern regarding the extent to which their reasoning capabilities are fully consistent and grounded, we also measure the reasoning consistency of these models. We achieve this by proposing a chain-of-thought (CoT) based consistency measure. However, such an evaluation requires a benchmark that encompasses both high-level inference and detailed reasoning chains, which is costly. We tackle this challenge by proposing an LLM-Human-in-the-Loop pipeline, which notably reduces cost while simultaneously ensuring the generation of a high-quality dataset. Based on this pipeline and the existing coarse-grained annotated dataset, we build the **CURE** benchmark to measure both the zero-shot reasoning performance and consistency of VLMs. We evaluate existing state-of-the-art VLMs, and find that even the best-performing model (BLIP-2) is unable to demonstrate strong visual reasoning capabilities and consistency, indicating that substantial efforts are required to enable VLMs to perform visual reasoning as systematically and consistently as humans. As an early step, we propose a two-stage training framework aimed at improving both the reasoning performance and consistency of VLMs. The first stage involves employing supervised fine-tuning of VLMs using step-by-step reasoning samples automatically generated by LLMs. In the second stage, we further augment the training process by incorporating feedback provided by LLMs to produce reasoning chains that are highly consistent and grounded. We empirically highlight the effectiveness of our framework and show a 4% relative improvement in both reasoning performance and consistency.3.
Footnote 3: The data is released at [https://github.com/Yangyi-Chen/CoTConsistency](https://github.com/Yangyi-Chen/CoTConsistency).
## 1 Introduction
Vision-language models (VLMs) have exhibited competence at generating human-like responses by leveraging multimodal instructional data and large language models (LLMs) [25; 33; 74]. A key direction in improving such VLMs is to enable reasonable visual reasoning that extends beyond the immediately perceived information. We thus take a critical look at the reasoning capability of existing VLMs, measuring and improving both their performance and consistency in reasoning. For reasoning performance, we aim to measure whether VLMs can derive high-level inference correctly. For reasoning consistency, we seek to determine the extent to which VLMs can identify the underlying reasoning chains that lead to the high-level inference.
Previous work simplifies the evaluation of reasoning consistency by only considering coarse-grained rationales [72] and relying on human evaluation [35] and similarity measures [64], which lack scalability and precision. This motivates us to establish a new benchmark dataset that provides annotations of the fine-grained reasoning steps so that reasoning consistency can be measured automatically. However, collecting such a dataset is challenging due to the high underlying human effort, and it may contain inconsistencies among annotators regarding the reasoning chains [11, 22, 51].
To address this challenge, we propose an LLM-Human-in-the-Loop pipeline for dataset construction. Several recent works have shown that LLMs can effectively follow human instructions to generate high-quality datasets [2, 38, 53, 60]. This pipeline incorporates limited human assistance for providing instructions and filtering rules, enabling LLMs to efficiently generate high-quality datasets in a semi-automatic manner and substantially reducing annotation cost. Based on an existing coarse-grained visual inference dataset, Sherlock [14], we establish a benchmark CURE for **C**hain-of-Thought **V**is**U**al **R**easoning **E**valuation. It contains 1,622 human-verified samples of high-level visual inferences and corresponding CoT reasoning chains, intended for zero-shot evaluation. Two examples are presented in Figure 1. In particular, the CoT reasoning chains consist of progressive subquestions, ranging from recognition (e.g., _What is on the cake?_) to cognition (e.g., _What does each candle represent?_), with the purpose of measuring the reasoning consistency of VLMs. Due to the notorious difficulty of natural language generation evaluation [46, 13], we formulate CURE as a multiple-choice task for ease of automatic evaluation. Specifically, for each visual input, we assess the reasoning in VLMs by evaluating their overall inference capabilities for a designated area (the bounding box in Figure 1) and their ability to correctly address the intermediate reasoning chain leading to the final inference.
We evaluate state-of-the-art (SOTA) VLMs (e.g., BLIP-2 [25], miniGPT-4 [74]) on CURE. The key conclusions from these evaluations are: (1) A model's success in complex visual inference depends on its LLM component, visual inputs, and instruction finetuning; (2) Even the SOTA VLM (BLIP-2) falls short of human performance regarding overall visual reasoning. In addition, our findings indicate a lack of reasoning consistency. Specifically, the reliability of intermediate reasoning steps cannot be assured, irrespective of the accuracy of the final inference (and vice versa). This suggests VLMs are not always consistent in their reasoning.
To enhance VLMs' reasoning performance and consistency, we propose a two-stage framework for training rationale-augmented VLMs. In the initial stage, VLMs are trained on vision-language reasoning samples that encompass step-by-step reasoning chains, which are automatically generated by LLMs. However, after this stage VLMs may still produce rationales that are inconsistent with the high-level reasoning or not grounded in the visual content (hallucination). Moreover, the scalability of producing helpful rationales is constrained due to reliance on high-quality human-annotated dense captions. Thus, we introduce a subsequent stage that integrates feedback from LLMs to further (1) train VLMs in generating rationales that are sophisticated, consistent, and firmly grounded in the images, and (2) effectively leverage the image-caption pairs sourced from the wild. Experimental results substantiate the effectiveness of the proposed training framework. The relative improvement in both reasoning performance and consistency is approximately 4% compared to the SOTA. We summarize our contributions as follows:
* We frame high-level abductive visual reasoning in a CoT manner that allows us to automatically measure both reasoning performance and consistency with precision.
* We leverage the proposed LLM-Human-in-the-Loop pipeline to curate the CURE benchmark, which reveals the limitations in reasoning performance and consistency exhibited by SOTA VLMs.
Figure 1: Two examples from CURE. Besides the high-level inference about the images (e.g., _The girl is turning two years old today._), it also contains CoT reasoning chains to evaluate VLMs’ reasoning performance and consistency. We only show 2 candidate options (of 6 in total) for presentation. More examples are shown in Figure 9.
* We propose a two-stage training framework, including supervised fine-tuning and learning from LLMs feedback, to improve reasoning performance and CoT consistency.
## 2 Related Work
**Vision-Language Pretraining.** VLMs have demonstrated remarkable performance across various downstream tasks, primarily due to their extensive pre-training on large-scale datasets [9; 54; 58]. Initially, VLMs heavily relied on object detectors for image comprehension [28; 52; 34; 24; 30; 29; 73]. Subsequent developments in VLM research have aimed to bypass the need for resource-intensive object detectors [7; 16; 20], streamline the inference process [15; 65], incorporate more extensive visual data [67; 70; 26; 43], and introduce additional tasks for object grounding during pre-training [17; 70]. As research progresses, efforts are made to design a unified architecture for VLMs, enabling them to handle multiple tasks without requiring task-specific adjustments [62; 57; 25]. In this context, LLMs play a crucial role in the functioning of VLMs. VLMs typically include LLMs as a component to generate coherent language outputs, leveraging large-scale multimodal instruction tuning data for effective alignment of the two modalities [25; 33; 74].
**CoT Reasoning Consistency.** The CoT reasoning approach was initially introduced to enhance the reasoning capabilities of LLMs by prompting them to generate rationales and then answers [63]. This approach is extended to various domains, models, and more complex problems [42; 27; 4; 18; 69; 68; 48]. In addition, the CoT reasoning consistency is effectively utilized to improve the reasoning performance [59]. However, it is still not clear how consistent LLMs reasoning is, given the mixed results in previous work [56; 21; 37; 47; 45].
**Vision-Language Reasoning.** There exists a paucity of comprehensive diagnostic studies concerning VLMs with the aim of quantifying their reasoning consistency, although efforts have been spent on measuring the visual reasoning performance (e.g., Sherlock) [14] and coarse-grained rationale evaluation, including multiple-choice question answering (e.g., VCR) [72], human evaluation of generated rationales [35], and similarity measure between the generated and the ground-truth rationales [64]. Some work has identified the failure of VLMs to accurately answer subquestions that are components of the main problems [44; 19; 49; 61; 35; 64]. For instance, VLMs may correctly determine the significant size of a mountain in an image but erroneously classify it as small when responding to a query such as "Are the mountains small?" [44]. In contrast to the aforementioned studies that focus on coarse-grained rationale evaluation and individual subquestions, we create reasoning chains that consist of coherent subquestions capable of supporting high-level inference. This approach allows us to precisely measure the extent to which reasoning in VLMs is consistent and grounded.
## 3 CURE Benchmark
We present the CURE dataset for measuring visual reasoning performance and consistency in VLMs, together with the LLM-Human-in-the-Loop pipeline adopted to construct it semi-automatically.
Our dataset builds on the Sherlock dataset [14], which measures abductive reasoning by annotating visual clues (text and bounding boxes for perceptual elements) and high-level inference. However, our aim is not only to measure the capacity of VLMs to accurately perform high-level visual inference but also to subsequently ascertain the extent to which the resulting inference is thoroughly substantiated. We thus add two new annotations to enable this: **(1) Reasoning Chains:** We provide fine-grained and precise CoT reasoning containing coherent subquestions that can be chained together to derive the high-level inference provided by Sherlock. **(2) Candidate Answers:** To avoid the long-standing issues in the evaluation of natural language generation [46], we transform the generation task of high-level inference and CoT subquestions into a multiple-choice question answering task by generating plausible but incorrect alternative candidates for each ground truth, as shown in Figure 1.
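For concreteness, a single CURE-style instance can be pictured as follows; the field names and layout are our own illustrative assumptions for exposition, not the released data schema, while the inference and question strings are taken from the example in Figure 1.

```python
# Illustrative layout of one CURE-style instance. Field names are
# assumptions for exposition only, not the released data format.
sample = {
    "image": "example.jpg",
    "bbox": [120, 40, 310, 220],       # region the inference refers to
    "inference_candidates": [          # six options, one of them correct
        "The girl is turning two years old today.",
        # ... five plausible but incorrect distractors ...
    ],
    "inference_answer": 0,             # index of the ground truth
    "reasoning_chain": [               # CoT subquestions, progressing
        {"question": "What is on the cake?",          # from recognition
         "candidates": ["Two candles"],               # plus distractors
         "answer": 0},
        {"question": "What does each candle represent?",  # to cognition
         "candidates": ["One year of age"],
         "answer": 0},
    ],
}
```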
In this section, we outline the procedure to semi-automatically create CURE with LLMs and then describe the evaluation metrics adopted to measure reasoning performance and consistency.
### LLM-Human-in-the-Loop Data Generation Pipeline
Our dataset construction pipeline consists of two stages, as illustrated in Figure 2. The first stage aims to generate a preliminary dataset that potentially contains instances of failure, while the second
stage filters out the error cases, similar to crowdsourced dataset collection approaches [31]. In both stages, LLMs carry out the majority of tasks, while human practitioners (the researchers in this case) iteratively correct errors made by LLMs [1, 3, 5]. While we apply this process to collect both **CoT reasoning chains** and **candidate answers**, we illustrate the process using reasoning chain creation and then describe how we apply it to candidate answers. Since we have generalized the process to collect multiple annotation types for CURE, the adopted pipeline can also be useful to others trying to adopt LLMs for dataset creation.
#### 3.1.1 Stage-1: Generation of a Preliminary Dataset.
We randomly select 10,000 examples from the Sherlock evaluation set to serve as the raw coarse-grained examples. In this stage, the practitioner engineers an initial prompt that basically describes the data LLMs should generate based on each raw example. The dataset description is then fed along with necessary context - the visual clues describing the image and the high-level inference from Sherlock - to generate a small initial dataset of reasoning chains (e.g., for 50 examples). These examples are usually inadequate and look different than intended. Next, the practitioner should carefully examine the generated examples and revise the dataset description accordingly. Through multiple iterations, a curated instruction that contains dataset descriptions and specific requirements can be produced to guide LLMs to generate the full-sized preliminary dataset.
Reasoning ChainsWe use GPT-4 [39] in all dataset generation steps. Our stage-1 prompt for generating reasoning steps is shown in Figure 15. This prompt starts by describing the overall goal, inputs, and outputs we expect from LLMs. It then outlines five principles to ensure LLMs generate meaningful and reasonable subquestions. We also find that the inclusion of an in-context example for a step-by-step demonstration of sample generation significantly enhances the ability of LLMs to generate samples that conform to the specified principles. The resulting preliminary dataset contains fairly uniform reasoning chains for 1.6k examples. Typically the generated subquestions support the high-level inference when chained together, following a progression from perception problems to more complex visual inference, thus adhering to the "from recognition to cognition" practice [72].
Candidate AnswersWe can potentially evaluate whether the outputs from VLMs match or closely resemble ground truth inference or reasoning steps, similar to the practice in previous work [35, 64]. However, this approach has two notable shortcomings: (1) The evaluation of natural language generation has been a persistent challenge, lacking a universally accepted approach [46]; (2) Although we provide ground truth answers for each image, some alternative predictions may also be correct, regarding the nature of abductive reasoning [55]. To address the above issues, we formulate CURE\(\cancel{P}\) as a multiple-choice question answering task, requiring VLMs to select the most likely inference/answer from the six candidates provided. We prompt LLMs using the same stage-1 procedure, but to generate potential candidate inference/answers instead of reasoning steps. These candidate answers maintain relevance to the provided image while incorporating factual inaccuracies when compared to the ground truth. The prompts adopted are shown in Figure 11. After several rounds of our stage-1
Figure 2: The LLM-Human-in-the-Loop dataset construction pipeline consists of the generation and filtering stages. With a small amount of human assistance, LLMs can efficiently create high-quality datasets in a semi-automatic manner.
process, we identify that a simple prompt generates the most appealing examples instead of the complex one used for reasoning chain generation.
#### 3.1.2 Stage-2: Filtering of Inadequate Samples.
Although samples in the preliminary dataset generally adhere to the desired criteria, failures still arise due to inherent limitations in LLMs [1]. However, by drawing explicit attention to common failure modes, we can instruct LLMs to correctly filter out bad example groups. In each round, the practitioner selects a small number of samples and conducts a thorough inspection to extract predominant failure modes. A distinct prompt is then created for each failure mode that asks LLMs to determine whether a reasoning chain or set of candidate answers exhibits that failure case. This prompt is applied to all remaining preliminary data, removing all examples that LLMs identify as falling into the failure modes. The practitioner then repeats this procedure through multiple iterations until the randomly selected sample of examples no longer exhibits any instances of error. We conduct a total of six iterations to systematically remove groups of samples that display common failure modes. The identified failure modes are listed in Table 1, and the prompts are described in Appendix A.
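A minimal sketch of this filtering loop is given below; `query_llm` is a placeholder for whatever chat-completion client is used (GPT-4 in our pipeline), and the yes/no protocol is an assumption for illustration, since the actual prompts are given in Appendix A.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g., to GPT-4);
    expected to return 'YES' or 'NO' for a failure-mode check."""
    raise NotImplementedError

def filter_failures(samples, failure_mode_prompts):
    """Stage-2 filtering: drop every sample that the LLM flags as
    exhibiting any of the identified failure modes (Table 1)."""
    kept = []
    for sample in samples:
        flagged = False
        for mode_prompt in failure_mode_prompts:
            verdict = query_llm(mode_prompt.format(sample=sample))
            if verdict.strip().upper().startswith("YES"):
                flagged = True
                break
        if not flagged:
            kept.append(sample)
    return kept
```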
Human Verification. While the filtering stage yields a substantial labor reduction compared to the initial unfiltered dataset (an estimated 50% reduction), some failure cases still exist. For example, our analysis finds that a certain number of examples in the Sherlock dataset share the same reasoning problem, relying on simplistic visual cues such as sky and lighting conditions to infer weather patterns and differentiate between day and night. This kind of shortcut annotation is documented in previous studies [12; 10; 71]. We are motivated to address these concerns since CURE is intended for evaluation purposes. We hire human annotators to meticulously review the entire created dataset to ensure two primary objectives: (1) each sample's validity for measuring reasoning performance and consistency; (2) the inclusion of diverse samples in the evaluation dataset. The details of human verification are described in Appendix B.
### Evaluation Metrics
As described in the previous section, we frame CURE as a multiple-choice problem with six candidate inferences per image and six plausible candidates for every subquestion (reasoning step). Specifically, each image \(I_{i}\) is paired with a high-level question \(Q_{h}^{i}\) associated with six candidate inferences \(O_{h}^{i}=\{o_{h1}^{i},o_{h2}^{i},...,o_{h6}^{i}\}\). Additionally, reasoning chains are made up of several questions \(Q_{c}^{i}\). Each question \(q\in Q_{c}^{i}\) is associated with a set of six candidate answers \(O_{q}^{i}=\{o_{q1}^{i},o_{q2}^{i},...,o_{q6}^{i}\}\). We propose a series of metrics that evaluate not only the reasoning ability of VLMs but also the consistency of their reasoning.
#### 3.2.1 Metrics for Reasoning Performance
**Performance in High-Level Reasoning.** The metric \(R_{h}\) is designed to measure the VLMs' ability in accurately choosing the most probable inference from the candidate pool for each image:
\[R_{h}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}(\hat{a}_{h}^{i}=a_{h}^{i}),\quad\hat {a}_{h}^{i}\in\{o_{h1}^{i},o_{h2}^{i},...,o_{h6}^{i}\}, \tag{1}\]
where \(N\) signifies the total number of images, \(\mathbb{I}(x)\) is the indication function that returns 1 if x is true and 0 otherwise, \(\hat{a}_{h}^{i}\) and \(a_{h}^{i}\) are model's chosen answer and ground truth answer respectively.
| Iteration | Common Failure Modes |
| --- | --- |
| 1 | The CoT reasoning chains lack consistent subquestions that are capable of deriving the high-level inference. |
| 2 | The candidate inference about the image exhibits similarity in meaning with the ground truth inference. |
| 3 | The ground truth answers for the subquestions are incorrect due to the occurrence of hallucination in LLMs. |
| 4 | The candidate answers for the subquestions are also correct. |
| 5 | The problems can be solved directly without relying on visual inputs. |
| 6 | The subquestions can contain some words that are irrelevant to the visual inputs. |

Table 1: The identified common failure modes at each iteration.
**Performance in CoT Reasoning.** The metric \(R_{cot}\) is used to evaluate the VLMs' ability to correctly answer all subquestions contained in the reasoning chain for each image:
\[R_{cot}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}(\sum_{j=1}^{M}\mathbb{I}(\hat{a}_{j} ^{i}=a_{j}^{i})=M),\quad\hat{a}_{j}^{i}\in\{o_{j1}^{i},o_{j2}^{i},...,o_{j6}^{i}\}, \tag{2}\]
where \(M\) stands for the count of subquestions within the CoT reasoning chain per image, \(\hat{a}_{j}^{i}\) refers to the model's prediction, and \(a_{j}^{i}\) is the ground truth answer.
**Overall Performance in Reasoning.** We propose \(R_{o}\) to measure if VLMs can successfully perform both high-level reasoning and CoT reasoning for every image:
\[R_{o}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}(\hat{a}_{h}^{i}=a_{h}^{i})*\mathbb{I }(\sum_{j=1}^{M}\mathbb{I}(\hat{a}_{j}^{i}=a_{j}^{i})=M) \tag{3}\]
where the notations adhere to their previous definitions.
#### 3.2.2 Metrics for Reasoning Consistency
**Consistency in Forward Reasoning.** We define \(C_{f}\) to evaluate the VLMs' capacity to correctly answer the high-level inference question once all subquestions have been correctly addressed:
\[C_{f}=\frac{1}{\sum_{i=1}^{N}s_{i}}\sum_{i=1}^{N}s_{i}\cdot\mathbb{I}(\hat{a}_ {h}^{i}=a_{h}^{i}),\quad\hat{a}_{h}^{i}\in\{o_{h1}^{i},o_{h2}^{i},...,o_{h6}^{ i}\}, \tag{4}\]
where \(s_{i}\) equals 1 if all subquestions for the \(i\)th image have been correctly answered by the VLM, and 0 otherwise, and other notations adhere to their previous definitions.
**Consistency in Backward Reasoning.** We define \(C_{b}\) to evaluate the VLMs' proficiency in correctly answering all subquestions given the successful answering of the high-level inference question:
\[C_{b}=\frac{1}{\sum_{i=1}^{N}h_{i}}\sum_{i=1}^{N}h_{i}\cdot\mathbb{I}(\sum_{j= 1}^{M}\mathbb{I}(\hat{a}_{j}^{i}=a_{j}^{i})=M),\quad\hat{a}_{j}^{i}\in\{o_{j1 }^{i},o_{j2}^{i},...,o_{j6}^{i}\}, \tag{5}\]
where \(h_{i}\) equals 1 if the VLM correctly answers the high-level inference question for the \(i\)th image, and 0 otherwise, and other notations adhere to their previous definitions.
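For reference, the five metrics above reduce to simple counting over per-image predictions; a minimal Python sketch with an assumed record layout (not the released evaluation code) is given below.

```python
def cure_metrics(records):
    """Compute R_h, R_cot, R_o, C_f, C_b (Sec. 3.2). Each record holds
    the model's picks and the ground truth for one image:
      {"h_pred": int, "h_true": int,
       "sub_pred": [int, ...], "sub_true": [int, ...]}
    """
    N = len(records)
    h_ok = [r["h_pred"] == r["h_true"] for r in records]
    cot_ok = [all(p == t for p, t in zip(r["sub_pred"], r["sub_true"]))
              for r in records]
    both = [h and c for h, c in zip(h_ok, cot_ok)]
    R_h = sum(h_ok) / N
    R_cot = sum(cot_ok) / N
    R_o = sum(both) / N
    # Forward: high-level correct, given all subquestions correct.
    C_f = sum(both) / sum(cot_ok) if sum(cot_ok) else float("nan")
    # Backward: all subquestions correct, given high-level correct.
    C_b = sum(both) / sum(h_ok) if sum(h_ok) else float("nan")
    return dict(R_h=R_h, R_cot=R_cot, R_o=R_o, C_f=C_f, C_b=C_b)
```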
## 4 Dataset Analysis
### Dataset Statistics
CURE contains 1,622 evaluation instances, wherein each instance encompasses an average of 2.91 reasoning steps (also known as subquestions), reflecting a commitment to providing rich, complex data for effective analysis. On average, the lengths of the candidate inferences, subquestions, and candidate answers in the dataset are 7.05, 9.97, and 2.96, respectively. Note that these elements are products of LLMs, generated based on the visual clues provided by human annotators. We thus present the word cloud of the visual clues for the evaluation samples in Figure 3. Upon examination, it becomes apparent that these visual clues primarily center around human-oriented concepts. They incorporate information about entities, activities, and occurrences that are directly associated with individuals. This observation provides a partial representation of the data distribution within our dataset, particularly in relation to the target inference, subquestions, and their corresponding answers.

Figure 3: The word cloud of the visual clues.

Figure 4: Question distribution.
In addition, we delineate the distribution of question types within CURE as presented in Figure 4. We find that CURE comprises various kinds of questions, with "What"-type questions dominating the distribution. This dominance is primarily due to the extensive use of such questions in Sherlock for cultivating a holistic comprehension of any given context or subject matter. Indeed, these types of queries are employed both to obtain a detailed narrative of the scenario and to facilitate visual inference based on the perceived information.
### Human Evaluation
We employ human annotators to conduct human evaluation with emphasis on two aspects: (1) What is the level of human performance observed on CURE? (2) Do the samples within CURE hold validity and can they be effectively used for evaluation? We select a sample of 200 instances from CURE. The annotation details are described in Appendix B. We engage three human annotators to conduct the task of answering multiple-choice questions and provide annotations indicating the presence of any failure mode mentioned in Table 1 or any other unidentified failure modes. The human performance is listed in Table 2. The detailed discussion of the human performance compared with the model performance is in Sec. 6. In the assessment of sample validity, merely 3% of the evaluation samples within the benchmark are found to demonstrate specific issues. Of this subset, 2% of the samples exhibit inconsistent reasoning chains, while 1% provide incorrect answers for the subquestions. It is worth noting that apart from the issues outlined in Table 1, no other problems have been reported. These findings serve as a validation of the high quality of CURE, and also demonstrate the effectiveness of our pipeline at identifying unqualified samples.
## 5 Approach
In our preliminary experiments, we identified that VLMs can effectively derive high-level visual inference when provided with complete reasoning chains. We thus propose to train a model capable of generating rationales that can potentially enhance visual reasoning performance and consistency. To this end, we propose a two-stage training framework that trains a VLM to efficiently produce rationales that facilitate high-level visual inference (see Figure 5). In the initial stage of the framework, we aim to train CoTBLIP to generate rationales that contain enough visual detail and reasonable inference. We employ Supervised Fine-Tuning (**SFT**) to train CoTBLIP, utilizing visual inference data that comprises CoT reasoning chains and high-level inference automatically generated by LLMs (i.e., the refined LLaVA dataset [33]). To further mitigate certain issues in the generated rationales (e.g., hallucination) and scale up the training process, we introduce a second Reinforcement Learning from LLM (AI) Feedback (**RLAIF**) stage, where we employ image-caption pairs sourced from the wild to facilitate the training. We select BLIP-2-T5\({}_{xl}\) as our backbone model due to its strong performance on basic vision-language tasks [66; 8]. Consequently, we refer to our rationale-generation model as **CoTBLIP**.
Stage-1: SFT. We utilize the complex reasoning samples from the LLaVA dataset [33]. The original 77K samples are produced by instructing GPT-4 to generate visual inference using a carefully curated set of five human-annotated captions and bounding boxes associated with images from the COCO Dataset [31]. However, the generated samples consist of repetitive, dialogic expressions that might not be entirely grounded in the images. We thus perform a further post-processing step that prompts LLMs to generate CoT reasoning chains based on the original samples, placing an emphasis on ensuring that these chains are logical, consistent, and succinct. The detailed prompt is shown in Figure 16. We train CoTBLIP on these refined samples using SFT.
Following the SFT training stage, CoTBLIP is competent at generating plausible rationales based on the provided image that could contribute to high-level inference. However, the rationales produced at inference time might contain inconsistent reasoning chains or content that is not grounded in the images (hallucination). In addition, the scalability of the SFT training stage is limited due to its
dependence on high-quality human-annotated dense captions, which makes it difficult for this stage to leverage image-caption pairs in the wild. This can lead to lower generalizability on a broad range of visual concepts. Therefore, we extend the training with a second stage, optimizing the generation of rationales using feedback from LLMs.
Stage-2: RLAIF. In this stage, we use image-caption pairs sourced from the wild (e.g., SBU Captions [40], CC3M [50]). For each image, CoTBLIP is initially prompted to generate three CoT reasoning chains, each leading to a high-level visual inference about the image. We also note that there is a noticeable variation in the quality of these generated reasoning chains, which necessitates external feedback. Therefore, we use LLMs (GPT-3.5-Turbo in our implementation) to provide feedback on the reasoning chains based on the provided caption, considering three aspects:
* **Sophisticatedness:** The CoT reasoning chains should derive interesting high-level visual inference, instead of trivial visual information (e.g., The image might have been captured during the day.)
* **Consistency:** The reasoning chains should be logically consistent to derive the high-level inference without unsupported assertions or gaps in reasoning.
* **Groundedness:** The extracted visual details in the reasoning chains should be fully grounded in the images, instead of hallucination by CoTBLIP.
The prompt we use is described in Figure 17. We adapt the methods proposed by [41] to facilitate pairwise comparisons between two reasoning chains and thereby establish a ranking of the three generated reasoning chains. In addition, we leverage a consistency check to exclude instances in which LLMs exhibit conflicting rankings. We use the SBU Captions to generate around 27K LLM preference samples, given the constraints of our available resources. We also demonstrate in Section 6.4 that increasing the sample size during this stage results in consistent performance improvements.
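A minimal sketch of this ranking procedure is shown below; `compare` stands in for the LLM judge call (whose actual prompt is in Figure 17), and presenting each pair in both orders implements the consistency check, discarding instances with conflicting verdicts.

```python
from itertools import combinations

def compare(caption, chain_a, chain_b) -> str:
    """Placeholder for the LLM judge: returns 'A' or 'B' indicating which
    chain is more sophisticated, consistent, and grounded."""
    raise NotImplementedError

def rank_chains(caption, chains):
    """Order-swapped pairwise comparison of the generated chains; returns
    indices ranked best-to-worst, or None on any inconsistent verdict."""
    wins = [0] * len(chains)
    for i, j in combinations(range(len(chains)), 2):
        v1 = compare(caption, chains[i], chains[j])  # chain i shown as 'A'
        v2 = compare(caption, chains[j], chains[i])  # order swapped
        if v1 == v2:            # same letter both times => conflicting winner
            return None         # drop the inconsistent instance
        wins[i if v1 == "A" else j] += 1
    return sorted(range(len(chains)), key=lambda k: -wins[k])
```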
Given the LLM preference data, we employ Conditional Reinforcement Learning to train CoTBLIP due to its stability, as observed in previous work [36; 32; 23]. Specifically, we introduce two control tokens, namely <Good> and <Bad>. For each sample containing a set of three ranked reasoning chains, we add the <Good> control token to the highest-ranked chain and the <Bad> control token to the remaining two chains. At training time, given an appended control token, we optimize CoTBLIP to maximize the likelihood of the associated reasoning chain. Through this approach, CoTBLIP learns to associate the control tokens with their respective outputs [32]. We note that no separate reward model needs to be trained, since the LLM fulfills that role effectively.
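The following sketch illustrates how the control tokens turn the ranked chains into conditional training targets; the exact interface of CoTBLIP is abstracted away, so this is an assumption-laden illustration rather than the actual training code.

```python
def build_conditional_examples(ranked_chains):
    """Conditional reinforcement learning targets: the best-ranked chain
    is conditioned on <Good>, the remaining chains on <Bad>; training
    maximises the likelihood of each chain given its control token."""
    best, *rest = ranked_chains                  # ranked best-to-worst
    return [("<Good>", best)] + [("<Bad>", c) for c in rest]

# Each (token, chain) pair becomes one training example: the image plus
# the control token form the input, and the reasoning chain is the label
# for standard cross-entropy (likelihood) training. Conditioning on
# <Good> at generation time would then steer the model towards the kind
# of rationale the LLM judge preferred.
```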
Inference. During inference, we initially prompt CoTBLIP to generate rationales. However, it is important to acknowledge that for CoT subquestions, which primarily involve basic visual perceptual problems and text-only inference based on provided visual details, the generated rationales may have limited effectiveness. Thus, the rationales are used exclusively for high-level visual inference. Specifically, these rationales are incorporated before the top-tier question to prompt the downstream VLMs to generate the prediction. In our implementation, we opt for utilizing the original BLIP-2-T5\({}_{xl}\) model to conduct predictions based on the rationales generated by CoTBLIP.
## 6 Experiment
### Model
We evaluate the reasoning performance and consistency of the following models on CURE by employing the metrics introduced in Section 3.2:
Figure 5: The two-stage training framework consisting of SFT and RLAIF.
**Text-only Models**: We evaluate the performance of LLMs without the use of visual inputs. Specifically, we consider GPT-3.5-Turbo-0613 (**Turbo**) for evaluation.
**SOTA VLMs in the Previous Era**: We include the **OFA-Large/Huge**[57], which are the leading VLMs in the previous era that do not incorporate the LLMs component.
**BLIP Family**: We consider **BLIP-2-OPT\({}_{6.7b}\)/T5\({}_{xl}\)** [25], the first open-source model that effectively utilizes LLMs for vision-language modeling. Additionally, we incorporate **InstructBLIP-T5\({}_{xl}\)** [6], which performs instruction tuning on a mixture of vision-language datasets.
**Chat-based VLMs**: We include chat-based VLMs that have undergone extensive training on vision-language instruction tuning data. These models include **LLaVA\({}_{13b}\)[33]** and **miniGPT-4\({}_{13b}\)[74]**.
**Rationale-augmented BLIP-2 (ours)**: As outlined in Sec. 5, we append the generated CoT reasoning chain from CoTBLIP to the frozen BLIP-2-T5\({}_{xl}\) model and prompt it to predict the answer. Note that this pertains exclusively to high-level visual inference.
### Implementation
Given that none of the VLMs under consideration have been trained on grounded data, it is not feasible to directly incorporate bounding box information into these models. We adopt a compromise solution that preprocesses the evaluation samples by automatically drawing the annotated bounding boxes onto the images. In addition, we instruct VLMs in the provided prompts to focus on the specific region delineated by the bounding boxes. We describe the prompts for evaluation in Appendix A. For each top-tier question or subquestion in the reasoning chain, VLMs only need to select one option from the candidate answers. We therefore compare the probabilities associated with the six answer tokens (i.e., "A", "B", "C", "D", "E", "F") to decide VLMs' predictions.
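A sketch of this scoring step is given below, assuming a HuggingFace-style interface; the exact preprocessing and tokenizer details (e.g., leading spaces in letter tokens, encoder-decoder vs. decoder-only scoring) are glossed over and would need adapting per model.

```python
import torch

OPTION_LETTERS = ["A", "B", "C", "D", "E", "F"]

@torch.no_grad()
def pick_option(model, tokenizer, inputs):
    """Choose the option whose answer letter gets the highest probability
    at the next decoding step. `inputs` is the preprocessed image+prompt
    batch expected by the VLM (placeholder; interface assumed)."""
    option_ids = [tokenizer.convert_tokens_to_ids(l) for l in OPTION_LETTERS]
    logits = model(**inputs).logits      # (batch, seq_len, vocab_size)
    next_token = logits[:, -1, :]        # distribution over the next token
    scores = next_token[:, option_ids]   # restrict to the letters A..F
    return scores.argmax(dim=-1)         # index into OPTION_LETTERS
```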
### Experimental Results
We consider the evaluation metrics defined in Sec. 3.2. The experimental results regarding the reasoning performance and consistency are listed in Table 2. We summarize the findings as follows: (1) The model's ability to perform complex visual inference and produce reasonable outputs relies on three crucial elements: LLMs, visual inputs, and instruction fine-tuning. Models solely reliant on text-based information (Turbo), VLMs lacking LLM components (OFA), and VLMs incorporating LLMs that have not undergone instruction fine-tuning (BLIP-2-OPT) exhibit inadequate performance; (2) The chat-based VLMs (LLaVA, miniGPT-4) that have undergone explicit supervised fine-tuning on synthetic user-interaction response samples exhibit a lack of visual reasoning ability and reasoning consistency. The underlying cause can be ascribed to the informal nature of the chat-style data, which lacks sufficient supervision to facilitate VLMs in acquiring the ability to integrate visual elements effectively for performing high-level visual inference; (3) The existing best-performing model, BLIP-2-T5, still falls short in reasoning performance and consistency compared to the human evaluation results. This suggests that significant effort is needed to facilitate VLMs in achieving a level of visual reasoning comparable to that of humans in a systematic and consistent manner; (4) Our framework improves VLMs' ability to perform visual reasoning and demonstrates better reasoning consistency to a certain extent. Specifically, we observe
| Model | \(R_{o}\) | \(R_{h}\) | \(R_{cot}\) | \(C_{b}\) | \(C_{f}\) |
| --- | --- | --- | --- | --- | --- |
| Random | 0.14 | 16.67 | 0.82 | 0.82 | 16.67 |
| Turbo | 15.97 | 33.42 | 40.26 | 47.79 | 39.66 |
| OFA-Large | 0.12 | 17.63 | 0.62 | 0.70 | 20.0 |
| OFA-Huge | 0.06 | 16.40 | 0.68 | 0.38 | 9.09 |
| BLIP-2-OPT | 0.06 | 14.61 | 0.62 | 0.42 | 10.0 |
| BLIP-2-T5 | 54.56 | 76.82 | **65.66** | 71.03 | 83.10 |
| InstructBLIP-T5 | 54.01 | 76.14 | 65.35 | 70.93 | 82.64 |
| LLaVA | 0.12 | 14.67 | 17.82 | 17.65 | 14.29 |
| miniGPT-4 | 2.10 | 23.12 | 38.75 | 41.80 | 28.81 |
| CoTBLIP (ours) | **56.91** | **80.05** | **65.66** | **71.09** | **86.67** |
| Human | 85.0 | 93.0 | 89.0 | 91.40 | 95.51 |

Table 2: The results of the reasoning performance (\(R_{o}\), \(R_{h}\), \(R_{cot}\)) and consistency (\(C_{b}\), \(C_{f}\)). The human performance is averaged among 3 human annotators. We have defined the metrics in Sec. 3.2.
a 4% improvement in both the high-level visual inference and the forward reasoning consistency. CoTBLIP offers a distinct advantage by providing CoT rationales that contain both extracted visual details and potential inference, thereby improving the visual reasoning pertaining to a specific image.
### Further Analysis
Ablation Study. We conduct an ablation study to understand the contributions of the SFT and RLAIF stages. The results are presented in Table 3. We observe that both stages contribute to the improvement in reasoning performance and consistency. In particular, we observe further improvements when employing RLAIF after the SFT stage. For example, the overall reasoning performance (\(R_{o}\)) for the combined stages is 56.91, compared to 54.93 for the baseline and 55.06 after the SFT stage alone. This can be attributed to the ability of RLAIF to better calibrate the generated rationales, making them more cohesive and substantiated. However, using only RLAIF without the SFT stage negatively impacts performance compared to directly prompting BLIP-2, without training, for rationale generation followed by answer prediction. The presence of the SFT stage enables VLMs to generate reasonable rationales. In its absence, CoTBLIP (BLIP-2) is restricted to producing caption-style outputs or trivial rationales that do not contribute significantly to high-level inference. Thus, without the SFT stage for initialization, the training of CoTBLIP with RLAIF is not feasible.
Training Data of the RLAIF Stage. We investigate the impact of varying the amount of training data during the RLAIF stage (see Figure 6). We omit \(R_{cot}\) as its values are identical across settings. Our findings reveal that continuously expanding the training samples positively impacts the RLAIF training stage of CoTBLIP, regarding both reasoning performance and consistency. These results demonstrate the potential of utilizing web-scale image-caption data to further improve the training, owing to the scalability of the RLAIF stage.
Backward Reasoning Consistency. We conduct a comprehensive study on the CoT reasoning performance (\(R_{cot}\)) of VLMs, evaluating the extent of performance degradation in answering sub-questions (see Figure 7). We select examples that contain three sub-questions for presentation purposes. We observe that existing VLMs often struggle with the initial visual perceptual problem, which involves the basic visual details needed for high-level visual inference. However, these models
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Metric & \multicolumn{2}{c}{Performance} & \multicolumn{2}{c}{Consistency} \\ \hline Model & \(R_{o}\) & \(R_{h}\) & \(C_{b}\) & \(C_{f}\) \\ \hline BLIP-2-T5 & 54.93 & 77.68 & 70.71 & 83.66 \\ CoTBLIP & 56.91 & 80.05 & 71.09 & 86.67 \\ - w/o RLAIF & 55.06 & 78.67 & 69.98 & 83.85 \\ - w/o SFT & 54.75 & 77.32 & 70.81 & 83.38 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study of the SFT and RLAIF stages. BLIP-2-T5 refers to prompting BLIP-2 without training to generate rationales. The \(R_{cot}\) metric (omitted here) holds the same across all methods because the generated rationales are only used for high-level visual inference.
can partially derive the high-level inference to some degree when provided with the extracted visual details, as evidenced by the relatively small performance drop when answering the second and third questions. This demonstrates that the high-level visual inference derived by VLMs is not entirely grounded in the visual details, leading to a low \(C_{b}\).
Forward Reasoning Consistency. We choose the highest-performing models, specifically BLIP-2 and CoTBLIP, for conducting a qualitative analysis of their forward reasoning consistency. We select these models since they exhibit significant performance improvements compared to text-only models. We select two examples, shown in Figure 8, to highlight cases where BLIP-2 demonstrates a lack of forward reasoning consistency and where CoTBLIP can potentially offer assistance. We observe that CoTBLIP demonstrates the ability to generate coherent rationales, starting with visual elements that are highly relevant to the image, and subsequently advancing towards more sophisticated visual inference that significantly impacts the prediction. For example, the reasoning chain in the second example in Figure 8 first identifies some motorcyclists parked on a street in some kind of gathering and then provides the high-level inference that these people might be part of a community interested in such vehicles. Notably, incorporating the rationales explicitly within the context enhances the reasoning consistency of VLMs.
## 7 Conclusion
This paper is motivated by the need to evaluate the reasoning performance and consistency of VLMs. We create the CURE benchmark using an LLM-Human-in-the-Loop pipeline and identify the deficiencies in existing VLMs. To tackle these challenges, we introduce a two-stage training framework that consists of supervised fine-tuning and learning from LLM feedback. Our method demonstrates promising improvement in VLMs' reasoning performance and consistency.
## Limitation
As shown in Table 2, our proposed CoTBLIP still exhibits a significant gap in reasoning performance and consistency compared to the human annotators. This indicates that substantial efforts are necessary to enable existing VLMs to perform robust visual inference like humans. CoTBLIP can currently only generate general visual inference about the given images, without considering the instructions. Future work is needed to enable CoTBLIP to perform instruction-guided reasoning chain generation that can more effectively facilitate high-level inference.
|
2303.18013 | LaCViT: A Label-aware Contrastive Fine-tuning Framework for Vision
Transformers | Vision Transformers (ViTs) have emerged as popular models in computer vision,
demonstrating state-of-the-art performance across various tasks. This success
typically follows a two-stage strategy involving pre-training on large-scale
datasets using self-supervised signals, such as masked random patches, followed
by fine-tuning on task-specific labeled datasets with cross-entropy loss.
However, this reliance on cross-entropy loss has been identified as a limiting
factor in ViTs, affecting their generalization and transferability to
downstream tasks. Addressing this critical challenge, we introduce a novel
Label-aware Contrastive Training framework, LaCViT, which significantly
enhances the quality of embeddings in ViTs. LaCViT not only addresses the
limitations of cross-entropy loss but also facilitates more effective transfer
learning across diverse image classification tasks. Our comprehensive
experiments on eight standard image classification datasets reveal that LaCViT
statistically significantly enhances the performance of three evaluated ViTs by
up to 10.78% under Top-1 Accuracy. | Zijun Long, Zaiqiao Meng, Gerardo Aragon Camarasa, Richard McCreadie | 2023-03-31T12:38:08Z | http://arxiv.org/abs/2303.18013v3 | # LaCViT: A Label-aware Contrastive Training Framework for Vision Transformers
###### Abstract
Vision Transformers have been incredibly effective when tackling computer vision tasks due to their ability to model long feature dependencies. By using large-scale training data and various self-supervised signals (e.g., masked random patches), vision transformers provide state-of-the-art performance on several benchmarking datasets, such as ImageNet and CIFAR-10. However, these vision transformers pre-trained over general large-scale image corpora could only produce an anisotropic representation space, limiting their generalizability and transferability to the target downstream tasks. In this paper, we propose a simple and effective Label-aware Contrastive Training framework _LaCViT_, which improves the isotropy of the pretrained representation space for vision transformers, thereby enabling more effective transfer learning amongst a wide range of image classification tasks. Through experimentation over five standard image classification datasets, we demonstrate that _LaCViT_-trained models outperform the original pretrained baselines by around 9% absolute Accuracy@1, and consistent improvements can be observed when applying _LaCViT_ to our three evaluated vision transformers1.
Footnote 1: Codes of the proposed framework will be publicly available upon acceptance
## Introduction
Transformers Vaswani et al. (2017) have achieved much success in the field of computer vision, with well-known models such as ViT Dosovitskiy et al. (2020) and Masked Autoencoders (MAE) He et al. (2021) having been central to advancing the state-of-the-art for many vision tasks, such as image classification and object detection. However, these models share one common limitation: they have few discernible learned inductive biases Dosovitskiy et al. (2020), essential properties that could help them better handle unseen examples and improve effectiveness.
This limitation is not necessarily problematic when training a single-task model with large-scale datasets. However, increasingly, researchers and practitioners are focusing on the transfer of models across tasks, as a means to counteract small training sample sizes in a target domain or as a general method to improve effectiveness. Particularly, recent studies have shown that some vision transformers still heavily rely on whole-network fine-tuning to achieve performance gains Zhou et al. (2021), showing their lack of transferability from the pretrained vision representation to the target tasks. We believe that the discriminative discrepancy of the representation space between the general pretraining corpora and the target tasks leads to the lack of transferability of these vision transformers Dosovitskiy et al. (2020); Peng et al. (2021).
Circa 2021, researchers started to propose methods to improve the inductive bias in vision transformers Li et al. (2021); Graham et al. (2021); Wu et al. (2021). Two classes of solutions were identified, namely: 1) injection of knowledge from pre-trained convolutional neural networks Touvron et al. (2021); Xu et al. (2021); or 2) directly adding convolutional layers into the transformer Graham et al. (2021); Wu et al. (2021). The convolutional approaches are leveraged due to their translation-invariance property. However, while promising, these workarounds remove some of the native advantages of transformer models that make them attractive for vision tasks, specifically superior training efficiency and scalability. These native advantages of transformer models bring markedly reduced training times and cost Dosovitskiy et al. (2020); He et al. (2021). Moreover, the task label information, the most discriminative information for the target tasks, is normally ignored in their pretrained representations He et al. (2021); Xie et al. (2022). Hence, it would be advantageous to have an alternative approach that improves the transfer effectiveness of vision transformers without relying on convolutional models/layers, while utilising the task labels in the fine-tuning (transfer learning) stage.
This paper proposes a simple but effective label-aware contrastive training framework (named _LaCViT_) to improve the transfer learning capability of vision transformers. In particular, our _LaCViT_ uses a label-aware contrastive learning loss with two training stages to transfer the general pretrained representation space into the discriminative space of the target task. This label-aware approach enables _LaCViT_-trained models to refine embeddings of samples belonging to the same class. To the best of our knowledge, _LaCViT_ is the first framework that leverages contrastive learning within a vision transformer to improve the transfer learning performance, while mitigating the issues with model transfer without introducing CNN layers into the transformer (hence
avoiding the associated large increase in training cost).
Note that our _LaCViT_ is a general framework that can be deployed on a series of vision transformer base models for contrastive training. To evaluate our contrastive training framework, we deploy _LaCViT_ on several popular pre-trained vision transformer models (i.e., ViT [14], Data2Vec [1], SimMIM [21] and MAE [15]) over five standard image classification datasets. Our results demonstrate that _LaCViT_-trained models are significantly more effective than the underlying base models (e.g., the _LaCViT_-trained MAE, _LaCViT_-MAE, achieves a 9% absolute Accuracy@1 gain compared with the original MAE on the CUB-200-2011 dataset), particularly in the few-shot scenario when smaller numbers of training examples are available. The consistent improvements indicate that _LaCViT_ is a stable training framework that can be applied to enhance a wide range of other vision transformer models. Moreover, our analytical experiments on the MAE base model demonstrate that our _LaCViT_ is able to reshape the pretrained embedding space of MAE into a more isotropic space (i.e., uniform in all orientations), which enhances the discriminative capability for the target downstream tasks.
The primary contributions of this work are as follows:
* We propose a new contrastive training framework, _LaCViT_, that uses task labels to train pretrained vision transformer models to achieve better effectiveness.
* We implement a range of vision transformer models using the _LaCViT_ training framework, most notably one based on the state-of-the-art MAE model, denoted _LaCViT_-MAE.
* Experimental results over five image classification datasets demonstrate that _LaCViT_-MAE significantly outperforms other traditionally fine-tuned models, including MAE.
* We analyse and compare the _LaCViT_-MAE model with MAE in terms of isotropy, class similarity and visualization of embeddings, illustrating the impact of _LaCViT_ training on the resultant model.
## Related Work
### Vision Transformers
Several pioneers tried to apply transformers [13] to images. For instance, [12] applied self-attention to only the local neighbourhood of each query pixel, while [11] applied attention to only small parts of the image instead. The ViT model [14], introduced in 2020, was the first vision transformer model to apply attention globally with minimal modifications to the transformer architecture. In 2021, Masked Autoencoders (MAE) [15] were proposed, which addressed the cost of training via a high image-masking ratio combined with an encoder-decoder self-supervised pre-training schema, enabling MAE to learn how to reconstruct the original image based on only partial observations of that image. This approach reduces the number of pixels that need to be fed into the transformer, resulting in faster training; it is the best current solution for reducing training time. Similarly, SimMIM [21] proposed to use masked image modeling to pretrain vision transformers, but without a decoder. Data2vec [1] introduces a teacher-student mode to pretrain vision transformers by representation learning. These models are normally pretrained on large-scale image datasets like ImageNet [11], and they can then be 'fine-tuned' with new examples to transfer the pretrained knowledge into the target downstream tasks, a process referred to as transfer learning [16]. Although vision transformers dominate in the computer vision domain, their transferability remains unclear. As pointed out in [14], the lack of discernible learned inductive biases limits the ability of vision transformers to handle downstream tasks (unseen samples). [13] explores whether the learned representations of vision transformers are more transferable than ConvNets' features, and finds that ViT lacks the ability to provide transferable representations in linear evaluation as it does in whole-network fine-tuning, which means more training is needed to fit the target task. Based on this motivation, our proposed _LaCViT_ enhances the transferability of vision transformers by reducing the discrepancy of discriminability between the general pretrained representation space and the representation space of the target tasks.
### Contrastive Learning
The fundamental idea of contrastive learning is comparing input samples, and it was first proposed by [10]. The goal is to pull together the embeddings of positive pairs (e.g., examples belonging to a single class) while simultaneously pushing apart the embeddings of negative pairs (e.g., examples from different classes). In the self-supervised setting, contrastive learning methods train a discriminative model on positive and negative pairs, according to some definition of similarity. Furthermore, additional labels can be integrated to determine the similarity and dissimilarity of samples (e.g., same class or different classes). Thus, contrastive learning methods provide a simple yet effective approach to learning representations in a discriminative manner in both supervised and self-supervised setups. SimCLR [10] was proposed as a simple framework utilising instance-level comparisons for image classification tasks (not using transformers). Notably, SimCLR demonstrated that contrastive learning appears to be particularly effective at training models when only small numbers of training examples are available. In the same year, [15] utilised the label information based on the SimCLR framework (again, not using transformers). This implementation combined the idea of using a soft-nearest-neighbours-based loss as introduced in [21, 13], normalising the embeddings and replacing the Euclidean distance with the inner product. Therefore, we propose in this paper to adopt a label-aware contrastive learning framework (_LaCViT_) to enhance the transfer learning capability of vision transformers by producing better representations, and we compare _LaCViT_ to N-pair-loss and SimCLR.
## The Proposed Approach
In order to obtain a more discriminative representation space and improve the transfer effectiveness of vision transformers for the target tasks, we propose _LaCViT_, a label-aware contrastive training framework for vision transformers. As shown in Figure 1, our _LaCViT_ consists of two training stages, namely _the label-aware contrastive training stage_ and _the task head fine-tuning stage_.
* **Label-aware contrastive training stage (Stage 1)**: We load pretrained weights and train them with a contrastive learning loss based on the labels of the target task. This stage involves four main processes in sequence, namely data augmentation, encoding patches, projecting the representations, and computing the contrastive loss.
* **Task head fine-tuning stage (Stage 2)**: This stage trains the task head (e.g., a simple linear layer for classification), which is added on top of the trained vision transformer for the downstream task, while freezing the weights trained in Stage 1. As shown in Figure 1, during the target task head fine-tuning we only train the task head. We use the trained encoder without the projection head to produce transformed images2 and their associated input embeddings for training. For efficiency reasons, we only generate a single view for each training image. As this is a standard classification head, cross-entropy is applied as the loss function (a minimal sketch of this stage is given after this list).
Footnote 2: Note that for this stage, we are not aiming to significantly distort the training images, hence, we only apply random resizing, crop and rotation image transforms, not colour distortion or Gaussian blur.
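The division of labour between the two stages can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration of Stage 2 and is ours rather than code from a released implementation: it assumes a Stage-1-trained `encoder` module (with the projection head already removed), a labelled data `loader`, and a 768-dimensional encoder output.

```python
import torch
import torch.nn as nn

def finetune_task_head(encoder, loader, num_classes, feat_dim=768, lr=0.01):
    """Stage 2: train a linear task head on top of the frozen Stage-1 encoder."""
    for p in encoder.parameters():
        p.requires_grad = False           # keep all contrastively trained weights fixed
    encoder.eval()

    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr, weight_decay=1e-4)
    ce = nn.CrossEntropyLoss()            # standard classification head => cross-entropy

    for images, labels in loader:
        with torch.no_grad():
            h = encoder(images)           # representation h, taken before any projection head
        loss = ce(head(h), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head
```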
### Label-aware Contrastive Training Stage
As shown in Figure 1, the vanilla fine-tuning approach (which is applied by most of the existing vision transformers [1, 13]) directly fine-tunes the original pretrained parameters through a task head (normally with a cross-entropy (CE) loss). However, we argue that the original pretrained vision representation space lacks discriminability over the target task, resulting in the reduced transferability and effectiveness of these models. The label-aware contrastive training stage of our _LaCViT_ framework aims to reshape the pretrained embedding space into a more isotropic space and eliminate the discrepancy in discriminability between the general pretrained representation space and the representation space of the target tasks. We later show that the improved isotropy of the resultant embedding space leads to better transfer learning capability (see Section Discussion).
In general, our label-aware contrastive training stage works based on a mini-batch of training images and conducts the contrastive prediction task on pairs of augmented examples generated within the mini-batch, which is the same contrastive learning setting as [1]. Each input image in a mini-batch is transformed into two different views (i.e., images) to form a positive pair according to a data augmentation module while other images within the batch are regarded as negatives; then, these positive and negative examples are fed into the encoder to train it with a label-aware contrastive loss, as shown in Stage 1 in Figure 1. As we focus on transfer learning, the encoder can be any state-of-the-art general vision transformer model (e.g., ViT, Data2vec, SimMIM, or MAE) with their weights being pretrained from some general image corpora (e.g., ImageNet). In particular, there are four main components in this stage: (1) data augmentation module; (2) encoder for producing representation vector; (3) projection head for improving representation quality; (4) contrastive training objective, which are detailed below:
Figure 1: **The overview of _LaCViT_, which consists of two training stages: 1) label-aware contrastive training and 2) task head fine-tuning**, compared to the vanilla fine-tuning, which directly fine-tunes the task head. The first contrastive training stage trains the vision transformers based on the labels of the target tasks with a contrastive loss, and in the second stage, _LaCViT_ fine-tunes the task head while keeping the trained encoder parameters from the first stage frozen.
**Data augmentation (generating positive pairs for contrastive learning)**: Data augmentation is widely used in image pre-processing for a wide range of tasks, and it is especially useful for contrastive learning. Chen et al. (2020); Khosla et al. (2020) suggest that a composition of data augmentation operations is critical to produce good representations. Therefore, each augmentation involves the generation of stochastic image transforms with different rotations, random crop (with flip and resize), colour distortion, and Gaussian blur. To ensure there is at least one positive pair in the mini-batch with which to calculate the contrastive loss, as shown in Figure 1, we augment each image in a mini-batch into two transformed images (views).
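As an illustration, the composition described above can be written with standard torchvision transforms. The sketch below is ours: the text does not fix the transform magnitudes, so the specific parameter values are illustrative assumptions, and only the set of operations (rotation, random resized crop with flip, colour distortion, Gaussian blur) follows the description.

```python
import torchvision.transforms as T

# One stochastic policy; sampling it twice per image yields the two views of a positive pair.
augment = T.Compose([
    T.RandomRotation(degrees=30),
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomApply([T.GaussianBlur(kernel_size=23)], p=0.5),
    T.ToTensor(),
])

def two_views(pil_image):
    return augment(pil_image), augment(pil_image)
```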
**Encoding**: Following ViT (Dosovitskiy et al., 2020), to enable transformer-based models to process the images, as shown in Stage 1 in Figure 1, after augmentation, we divide each view into regular non-overlapping patches. Those patches are then concatenated into a 1D vector representing the view (ordered top-to-bottom and left-to-right). An encoder (e.g., ViT, MAE, SimMIM) is used to generate feature embeddings for each of the two views of the image, using the 1D vectors as input.
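The patching step itself is mechanical; the following minimal PyTorch sketch (ours, not taken from the paper) shows the \(16\times 16\) non-overlapping split with the stated top-to-bottom, left-to-right ordering.

```python
import torch

def patchify(images, patch=16):
    """(B, C, H, W) -> (B, N, C*patch*patch), with N = (H/patch) * (W/patch).

    Patches are ordered top-to-bottom, then left-to-right within each row of patches.
    """
    b, c, h, w = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).contiguous()
    return x.view(b, -1, c * patch * patch)

tokens = patchify(torch.randn(8, 3, 224, 224))                  # -> shape (8, 196, 768)
```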
**Nonlinear projection head**: Inspired by Khosla et al. (2020); Wu et al. (2018), to improve representation quality a projection head \(g(\mathbf{h})\) is added on top of the encoder to map the representation to the space where the contrastive loss is applied. Thus, to implement \(z_{i}=g(\mathbf{h_{i}})=W^{(2)}\sigma(W^{(1)}\mathbf{h_{i}})\), we use two dense layers, where \(\sigma\) is a ReLU function. Khosla et al. (2020) reported that using a nonlinear projection improves the representation quality of the layer before it (\(\mathbf{h}\) is better than \(z\), with \(z=g(\mathbf{h})\)), due to the loss of information induced by the contrastive loss. The output \(z=g(\mathbf{h})\) is trained to be invariant to data transformation, which means \(g\) removes information that could be useful for the downstream task (e.g., the colour of objects). By using the nonlinear projection, more information can be maintained in \(\mathbf{h}\). Thus, the projection head is only used in Stage 1, to improve the representation quality. These embeddings are then grouped into distinct sets by the training class label of the source image.
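Concretely, \(g\) can be sketched as below. The hidden width is not stated in the text, so setting it equal to the encoder width is our assumption; the 128-dimensional output and the ReLU follow the training details reported later, and the final normalisation reflects the statement that embeddings are normalised before the inner-product-based loss.

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """z = W2 . ReLU(W1 . h); used only during Stage 1 and discarded afterwards."""
    def __init__(self, in_dim=768, out_dim=128):
        super().__init__()
        self.w1 = nn.Linear(in_dim, in_dim)   # hidden width = encoder width (our choice)
        self.w2 = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        z = self.w2(F.relu(self.w1(h)))
        return F.normalize(z, dim=-1)         # so that z_i . z_p is a cosine similarity
```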
**Contrastive Training Objective**: To obtain a more discriminative representation space for the target task, we train the pretrained encoder using a contrastive loss that leverages label information (i.e. the label-aware contrastive loss). The label-aware contrastive loss enables stronger geometric clustering of samples belonging to the same class in the embedding space, while simultaneously pushing apart clusters of samples from different classes. The advantage of the label-aware contrastive loss is that we compute the contrastive loss based on many positive pairs per anchor in addition to many negative samples, compared to self-supervised contrastive learning that uses only a single positive. This setting allows us to achieve state-of-the-art performance without the need for hard negative mining, which can be challenging to tune properly Li et al. (2019); Bulat, Sanchez-Lozano, and Tzimiropoulos (2021).
The initial embeddings produced for each training image view above provide the training data that we use to reduce the discrepancy of discriminability between the general pretrained representation space and the representation space of the target tasks. In particular, we train the pretrained model for each mini-batch with a label-aware contrastive loss function adapted from Khosla et al. (2020). Formally, the contrastive loss is defined as follows:
\[\mathcal{L}(\mathcal{D}^{*})=\sum_{z_{i}\in\mathcal{D}^{*}}\frac{-1}{|\mathcal{D}^{*+}_{-z_{i}}|}\sum_{z_{p}\in\mathcal{D}^{*+}_{-z_{i}}}\log\frac{\exp(z_{i}\cdot z_{p}/\tau)}{\sum_{z_{a}\in\mathcal{D}^{*}_{-z_{i}}}\exp(z_{i}\cdot z_{a}/\tau)}\,,\]
where \(\mathcal{D}^{*}\) represents the mini-batch as a whole, comprising an embedding \(z\) for each image view \(i\), i.e. \(z_{i}\in\mathcal{D}^{*}\) is a view embedding within the mini-batch. The superscript \(+\), e.g. \(\mathcal{D}^{*+}\), denotes the set of embeddings comprising only the positive examples for the current class in the mini-batch. The subscript \(-z_{i}\) indicates that this set does not include the embedding \(z_{i}\) (which is used to denote the current image view being evaluated). \(\tau\) is a temperature parameter, which controls the degree of loss applied when two images have the same class but the embeddings are different. A higher value pushes the model to more strongly separate the positive and negative examples.
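This loss is the supervised-contrastive form of Khosla et al. (2020), and a compact transcription is given below. The function is our sketch: it assumes a batch of L2-normalised view embeddings with one class label per view (two views per image), and it averages over anchors rather than summing, which only rescales the loss.

```python
import torch

def label_aware_contrastive_loss(z, labels, tau=0.1):
    """z: (2B, d) L2-normalised view embeddings; labels: (2B,) class of each view's image."""
    n = z.size(0)
    sim = z @ z.t() / tau                                  # pairwise z_i . z_a / tau
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)

    # denominator of the equation above: sum over a != i of exp(z_i . z_a / tau)
    exp_sim = torch.exp(sim) * not_self                    # (no max-subtraction, for clarity)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    pos = (labels[:, None] == labels[None, :]) & not_self  # positives of each anchor, minus itself
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor.mean()
```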
## Experiments
To evaluate the effectiveness of our proposed _LaCViT_ framework, we conduct experiments under four state-of-the-art pretrained vision transformer models over five image classification datasets. In particular, our research questions are:
* **Q1**: How effective is our proposed _LaCViT_ for the existing vision transformers on the target downstream tasks?
* **Q2**: How do different contrastive learning losses affect the performance of our _LaCViT_ on downstream tasks?
**Datasets**: Five publicly available standard image classification datasets are used in our experiments, with their statistics shown in Table 1. As the volume of available training examples is a confounding variable that impacts performance, we choose datasets of varying sizes to aid in analysis. All train/validation splits of our five evaluated datasets are the same as the splits used in previous works Krizhevsky and Hinton (2009); Wah et al. (2011); Nilsback and Zisserman (2008); Parkhi et al. (2012). In the contrastive training stage, we initialise the encoder of our _LaCViT_ using released pretrained weights from either the ImageNet-1k or ImageNet-21k datasets, while in the task head fine-tuning stage we fine-tune the task head using the datasets of the target tasks (excluding ImageNet-1k and ImageNet-21k).
**Model details:** We evaluate our _LaCViT_ under four base models, i.e., ViT Dosovitskiy et al. (2020), Data2vec Baevski et al. (2022), SimMIM Xie et al. (2022) and MAE He et al. (2021), which are state-of-the-art models in image classification. For the MAE model, we use one of its smallest pretrained variants, i.e. MAE-Base He et al. (2021), since it is the fastest model to train with our resources. In order to compare across those models, we also choose Data2vec-Base, SimMIM-Base and ViT-B in our experiments, as they are of the same size. Since our _LaCViT_ approach is additive to the
existing vision transformers, we also compare with these models under their vanilla fine-tuning approach (i.e. fully fine-tune all the model parameters with a cross-entropy loss) to demonstrate the value-add of our _LaCViT_. For brevity, we denote MAE trained with _LaCViT_ as _LaCViT_-MAE and the vanilla fine-tuned one as MAE. Our inputs for all models are images of dimensions \(224\times 224\) pixels3, with patches of size \(16\times 16\) pixels, following [16].
Footnote 3: For images that are not \(224\times 224\), we will resize them to \(224\times 224\).
**Pre-training details:** Where possible we use pre-training weights generated from the ImageNet-1k dataset by the baseline model authors. However, some prior works only provide weights generated from the larger ImageNet-21k dataset. We expect that ImageNet-21k-derived models will have an advantage over ImageNet-1k-derived models, hence we note the pre-training model used in the 'Seen-dataset' column during our later experiments. Specifically, the SimMIM and MAE models are fully consistent, with pre-training on ImageNet-1k. For ViT, the authors report performances on ImageNet-1k, but only release comparable pre-trained weights for ImageNet-21k. Hence, we report their ImageNet-1k baseline performances, but _LaCViT_-ViT is derived from the provided ImageNet-21k weights, and hence may have a slight advantage in contrast to the other ImageNet-1k-derived models. Similarly, Data2vec only provides pre-trained weights for ImageNet-21k, so the same caveat applies.
**Training details:** For our experiments, we need to train Data2vec, SimMIM, ViT4, MAE and _LaCViT_-MAE, _LaCViT_-SimMIM, _LaCViT_-ViT. When training _LaCViT_-trained models on an image classification downstream task, we use a batch size of 4096 in the contrastive training stage and 128 for the task head fine-tuning (Stages 1 & 2). In the contrastive training stage (Stage 1), the number of training epochs is 500. As for the task head fine-tuning (Stage 2), it is 100 epochs. The initial learning rate is set to 0.01 with a weight decay of 1e-4 for these two stages. We use a \(\tau\) of 0.1 as the temperature value for the contrastive loss. A nonlinear projection with one additional hidden layer (two dense layers) and ReLU activation [15] is used to project the representation to a 128-dimensional latent space. We choose these hyperparameters as they were shown to be more effective (higher Top-1 accuracy) during evaluation over the training set in our experiments. For MAE, Data2vec and SimMIM, after loading the pre-trained weights, we fine-tune them for the downstream classification task with a linear layer by using stochastic gradient descent (SGD) and cross-entropy loss in the same way as Stage 2 of the _LaCViT_-trained models. In this way, we avoid introducing a confounding variable in our comparison.5 All the reported experimental performances are the average of three runs with different random seeds. Our experiments are conducted on an Ubuntu 20.04 server with three NVIDIA A6000 GPUs (each with 48 GB of memory) and 128 GB of main memory 6.
Footnote 5: Note that there is one difference, specifically, we do not freeze the non-head layers when fine-tuning MAE, Data2vec and SimMIM, as it is the common approach in fine-tuning.
Footnote 6: More details are available in the technical appendix.
## Results and Analysis
### Effectiveness of _LaCViT_ on image classification
In this section, we investigate the impact that the introduction of the proposed label-aware contrastive training framework has on image classification performance. In particular, we compare the _LaCViT_-trained models and baseline models that do not include contrastive learning, including _LaCViT_-MAE versus MAE [11], _LaCViT_-SimMIM versus SimMIM [16] and _LaCViT_-ViT versus ViT [16]. The difference between the baseline models and the _LaCViT_ models is the inclusion of Stage 1 (the contrastive training stage). If contrastive learning is helpful, we expect the addition of Stage 1 to result in increased performance.
Table 2 presents the performance of _LaCViT_ models and baseline models over five open standard image classification datasets of varying sizes. All model effectiveness reported in this paper is measured using validation Top-1 accuracy (higher is better). Our primary comparison is between the SimMIM and MAE baselines and their _LaCViT_ counterparts, as the only variable that changes for these is the addition of _LaCViT_.
To evaluate whether the proposed label-aware contrastive training framework improves a vision transformer's transfer learning capability, we explore whether consistent improvements can be obtained by combining _LaCViT_ with different state-of-the-art vision transformers (ViT, SimMIM and MAE). From Table 2, we can observe that the _LaCViT_-ViT-B model increases accuracy across all datasets we tested relative to ViT-B and ViT-L. For instance, on the CIFAR-100 and Oxford 102 Flowers datasets, _LaCViT_-ViT-B performs 5.68\(\%\) and 10.6\(\%\) better than ViT-B. The improvement for
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\# Classes** & **\# Images** & **\# Training** & **\# Validation** \\ \hline CIFAR-100 [14] (Medium) & 100 & 60,000 & 50,000 & 10,000 \\ CIFAR-10 [15] (Medium) & 10 & 60,000 & 50,000 & 10,000 \\ CUB-200-2011 [16] (Small) & 200 & 11,788 & 5,994 & 5,794 \\ Oxford 102 Flower [14] (Small) & 102 & 2040 & 1020 & 1020 \\ Oxford-IIIT pet [17] (Small) & 37 & 7349 & 3680 & 3669 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset Statistics.
the _LaCViT_-SimMIM over SimMIM is also obvious, especially for the CUB-200-2011 dataset and the Oxford 102 Flower dataset (on average a 10.95\(\%\) improvement under Top-1 accuracy). _LaCViT_-MAE is the best model tested, with the highest Top-1 accuracy across nearly all datasets7. The increase in performance between MAE and _LaCViT_-MAE is especially notable for the CUB-200-2011 dataset, where performance increased by 11.65\(\%\). When comparing _LaCViT_-MAE to other _LaCViT_-trained models, _LaCViT_-MAE also has a large performance lead.
Footnote 7: The exception is the Oxford-Flowers dataset, where _LaCViT_-ViT-B outperforms _LaCViT_-MAE; however, note that _LaCViT_-ViT-B may have an advantage since it is based on pretrained weights derived from ImageNet-21k rather than ImageNet-1k.
As we can see from these experiments, the addition of _LaCViT_ leads to consistent performance improvements over the baseline models tested and across five datasets of varying sizes. As such, we conclude that the label-aware contrastive training is an effective enhancement for vision transformers.
### Label-aware vs. unsupervised contrastive learning
In the literature, a number of unsupervised contrastive learning approaches have previously been proposed. Hence, it is important to evaluate whether our label-aware contrastive learning formulation _LaCViT_ is better than these alternatives. To evaluate this, using MAE as a base model, we contrastively train variants of this model using the unsupervised SimCLR [1] and N-pair-loss [13] approaches.
The lower block of Table 2 reports the performance of MAE and its contrastive variants (SimCLR, N-pair-loss and _LaCViT_). As we can see in Table 2, MAE fine-tuned with SimCLR performs slightly worse than _LaCViT_-MAE across datasets, except for CUB-200-2011 and CIFAR-100, where the addition of SimCLR dramatically harms performance (maximum of -51.26\(\%\)). We hypothesise that unsupervised contrastive learning (SimCLR is essentially an unsupervised version of the contrastive loss of _LaCViT_) lacks disentanglement between classes, which results in this performance gap. The performance of MAE fine-tuned with N-pair-loss is around 2-4 points lower than MAE trained with SimCLR in terms of accuracy, except for the Oxford 102 Flower dataset, on which MAE trained with N-pair-loss gains 0.65\(\%\) in terms of accuracy. Comparing the experimental results in Table 2, MAE fine-tuned with cross-entropy is overall better than MAE fine-tuned with the SimCLR method and MAE fine-tuned with N-pair-loss. This indicates that we need to use a label-aware contrastive training framework like _LaCViT_ instead of purely unsupervised approaches.
## Discussion
We argued earlier that the discriminative discrepancy of the representation space between the general pretraining corpora and the target tasks leads to the lack of transferability of these vision transformers. In this paper, we verify this by analysing geometrical features of the learned representation spaces using two metrics, i.e., isotropy [1] and cosine similarity. Indeed, isotropy has previously been used as a metric to evaluate the quality of representations [1], under the assumption that the more widely distributed the representations of items from different classes are in the embedding space, the more effective the model will be at distinguishing them. Previous research has indicated that self-supervised learning leads to more anisotropic representations [11, 13]. We hypothesised that the proposed label-aware contrastive training might be able to re-shape the embedding space geometry to 'push apart' the representations of classes, resulting in enhanced transfer learning performance.
To explore this, we analyse three aspects of the MAE and _LaCViT_-MAE models to assess their isotropy: (a) distributions of cosine similarities between pairs of images; (b) the isotropy score defined by [14]; and (c) visualization of the embedding space using the t-SNE tool [22].
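For analysis (a), the positive/negative similarity split can be obtained with a few lines of numpy; this sketch is ours, with `emb` and `labels` standing for hypothetical arrays of embeddings and class labels restricted to the two selected classes.

```python
import numpy as np

def pair_similarities(emb, labels):
    """Cosine similarities of same-class (positive) and cross-class (negative) pairs."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T
    i, j = np.triu_indices(len(e), k=1)        # every unordered pair exactly once
    same = labels[i] == labels[j]
    return sim[i, j][same], sim[i, j][~same]   # to be histogrammed as in Figure 2
```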
In Figure 2, we plot the distribution of the pairwise cosine similarities of the MAE (Fig. 2(a)) and _LaCViT_-MAE (Fig. 2(b)) embeddings. Specifically, we randomly select two classes from the CIFAR-100 dataset and compute the cosine similarity for positive pairs (same class) and negative pairs (different classes). As we can see from these figures, the positive and negative pairs of the _LaCViT_-MAE
\begin{table}
\begin{tabular}{c c c c c c c c} & & **CIFAR-10** & **CIFAR-100** & **Cub-200-2011** & **Oxford-Flowers** & **Oxford-Pets** \\ \hline
**Model** & **Seen dataset** & **FT method** & Acc@1 & Acc@1 & Acc@1 & Acc@1 & Acc@1 \\ \hline Data2vec & ImageNet-21k & CE & 98.25 & 89.21 & 85.16 & 91.57 & 94.52 \\ \hline ViT-B & ImageNet-1k & CE & 98.13 & 87.13 & N/A & 89.49 & 93.81 \\ _LaCViT_-ViT-B & ImageNet-21k & _LaCViT_ & 98.95 & 92.08 & 85.45 & **98.98** & 94.22 \\ ViT-L & ImageNet-1k & CE & 97.86 & 86.36 & N/A & 89.66 & 93.64 \\ \hline SimMIM & ImageNet-1k & CE & 98.78 & 90.26 & 76.47 & 83.46 & 94.22 \\ _LaCViT_-SimMIM & ImageNet-1k & _LaCViT_ & 99.02 & 90.67 & 85.56 & 91.86 & 94.76 \\ \hline MAE & ImageNet-1k & CE & 98.28 & 87.67 & 78.46 & 91.67 & 94.05 \\ MAE & ImageNet-1k & SimCLR & 97.53 & 76.01 & 57.91 & 89.22 & 91.15 \\ MAE & ImageNet-1k & N-pair-loss & 95.23 & 73.76 & 52.56 & 89.87 & 87.12 \\ _LaCViT_-MAE & ImageNet-1k & _LaCViT_ & **99.12** & **90.86** & **87.60** & 92.64 & **95.12** \\ \hline \end{tabular}
\end{table}
Table 2: Image classification performance benchmarks over five datasets. For the fine-tuning method, CE refers to using cross-entropy as the loss function with stochastic gradient descent, while _LaCViT_ refers to our proposed label-aware contrastive training framework.
embeddings have better inter-class separation.
Furthermore, we compute the quantitative isotropy score (IS) [13], which is defined as follows:
\[IS(V)=\frac{\min_{c\in C}\sum_{v\in V}\exp(c^{T}v)}{\max_{c\in C}\sum_{v\in V}\exp(c^{T}v)}\,,\]
where \(V\) is a set of vectors and \(C\) is the set of all possible unit vectors (i.e., any \(c\) such that \(||c||=1\)) in the embedding space. In practice, \(C\) is approximated by the set of eigenvectors of \(\mathbf{V}^{T}\mathbf{V}\), where \(\mathbf{V}\) is the matrix of stacked embeddings of \(V\). The larger the IS value, the more isotropic an embedding space is (i.e., a perfectly isotropic space obtains an IS score of 1).
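A direct numerical transcription of this definition, approximating \(C\) by the eigenvectors of \(\mathbf{V}^{T}\mathbf{V}\) exactly as stated above, is given below; the sketch is ours and omits any guard against overflow of the exponentials.

```python
import numpy as np

def isotropy_score(V):
    """IS(V) for V of shape (n_samples, dim); returns a value in (0, 1], 1 = isotropic."""
    _, eigvecs = np.linalg.eigh(V.T @ V)          # columns approximate the unit vectors c
    partition = np.exp(V @ eigvecs).sum(axis=0)   # Z(c) = sum_v exp(c^T v) for each c
    return partition.min() / partition.max()
```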
As the IS scores in Table 3 show, the IS score of the _LaCViT_-MAE embeddings is significantly higher than that of the MAE embeddings. This confirms that _LaCViT_-MAE shapes the semantic space to be more isotropic, while the embeddings of MAE are anisotropic.
In Figure 3, we visualise the embedding spaces of MAE and _LaCViT_-MAE over ten classes of the CIFAR-10 dataset using t-SNE. As we can see from Figure 3(a), although most of the nodes with the same colour (i.e. examples of the same class) are close together in the embedding space, there are still many outliers distributed over the wrong clusters, which indicates a lack of discriminability and makes it difficult to distinguish examples across classes. On the contrary, as shown in Figure 3(b), the classes represented by _LaCViT_-MAE are better separated and clustered (with fewer outliers) compared to the vanilla MAE, which we believe is the main reason for the classification performance improvement in our experiments.
Based on this analysis, we demonstrate that MAE indeed lacks isotropy, i.e. the learned embeddings of images from different classes share the same regions of the embedding space. Through the proposed label-aware contrastive training, the embedding space geometry is re-shaped to 'push apart' the representations of classes, reducing the discrepancy of discriminability between the general pretrained representation space and the representation space of the target task. These results explain the effectiveness of our _LaCViT_ in enhancing the transfer learning capability of vision transformers and verify the observed improvement in our experiments.
## Conclusions
In this paper, we proposed a label-aware contrastive training framework (_LaCViT_) to make transformer-based vision models more effective in the image classification task. In particular, _LaCViT_ transfers the general pretrained representation space into the discriminative space of the target task, significantly enhancing performance in the image classification task. Indeed, through experimentation over five open standard datasets of different sizes, we showed that _LaCViT_-MAE outperforms the recent state-of-the-art MAE model by up to 9.14% (Top-1 accuracy), with the largest gains in performance being observed when few training examples are available. Furthermore, we apply _LaCViT_ to other vision transformers such as ViT and SimMIM, and we observe consistent improvements among these models. The results of the cosine similarity analysis, the isotropy scores and the visualisations of the embedding space confirm that our improvements come from better representations obtained by reshaping the embedding space geometry. Overall, our proposed _LaCViT_ is simple to use and can be applied to a wide range of vision transformers, yet still achieves state-of-the-art performance, demonstrating how transformer architectures can be better leveraged for the classification task.
Figure 3: Visualization of the embedding spaces of MAE and _LaCViT_-MAE over ten classes of the CIFAR-10 dataset using t-SNE. Each dot refers to a sample, and different colors denote different label classes.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Model** & **CIFAR-10** & **CIFAR-100** & **Cub-200-2011** & **Oxford-Flowers** & **Oxford-Pets** \\ \hline MAE & 0.1094 & 0.1504 & 0.0841 & 0.0591 & 0.0155 \\ _LaCViT_-MAE & 0.7746 & 0.9043 & 0.9774 & 0.9578 & 0.9123 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Isotropy score over five datasets for MAE and _LaCViT_-MAE. Higher values are better: a higher isotropy score indicates better isotropy and generalisability.
Figure 2: Plot of cosine similarity distributions over two random classes on the CIFAR-10 dataset. Blue and orange denote positive- and negative-pair similarities, respectively.
2309.15587 | Correcting for cutoff dependence in backward evolution of QCD parton
showers | Monte Carlo event generators for hard hadronic collisions depend on the
evolution of parton showers backwards from a high-scale subprocess to the
hadronization scale. The evolution is treated as a branching process with a
sequence of resolvable parton emissions. The criterion of resolvability
involves cutoffs that determine the no-emission probability (NEP) for a given
range of the evolution scale. Existing event generators neglect
cutoff-dependent terms in the NEP that, although formally power-suppressed, can
have significant phenomenological effects. We compute such terms and study
their consequences. One important result is that it is not possible for the
backward shower to faithfully reproduce the cutoff-independent parton
distribution functions (PDFs) used to generate it. We show that the computed
NEP corrections mitigate but do not eliminate this problem. An alternative
approach is to use cutoff-dependent PDFs that are consistent with the
uncorrected NEP. Then one must apply cutoff-dependent corrections to hard
subprocess matrix elements. We compute those corrections to the first
nontrivial order for the Drell-Yan process and for Higgs production by gluon
fusion. | Stefano Frixione, Bryan R. Webber | 2023-09-27T11:44:00Z | http://arxiv.org/abs/2309.15587v2 | # Correcting for cutoff dependence in backward evolution of QCD parton showers
###### Abstract
Monte Carlo event generators for hard hadronic collisions depend on the evolution of parton showers backwards from a high-scale subprocess to the hadronization scale. The evolution is treated as a branching process with a sequence of resolvable parton emissions. The criterion of resolvability involves cutoffs that determine the no-emission probability (NEP) for a given range of the evolution scale. Existing event generators neglect cutoff-dependent terms in the NEP that, although formally power-suppressed, can have significant phenomenological effects. We compute such terms and study their consequences. One important result is that it is not possible for the backward shower to faithfully reproduce the cutoff-independent parton distribution functions (PDFs) used to generate it. We show that the computed NEP corrections mitigate but do not eliminate this problem. An alternative approach is to use cutoff-dependent PDFs that are consistent with the uncorrected NEP. Then one must apply cutoff-dependent corrections to hard subprocess matrix elements. We compute those corrections to the first nontrivial order for the Drell-Yan process and for Higgs production by gluon fusion.
## 1 Introduction
Although the precision of predictions of short-distance cross sections using QCD perturbation theory has greatly increased in recent years, it remains true that their comparison with experimental data relies to a large extent on parton-shower based Monte Carlo event generators1 (MCEGs) for the estimation of non-perturbative and approximate higher-order perturbative effects. The connection between a measured cross section at a hadron collider and that stemming from the short-distance subprocess can be described in an inclusive sense using perturbation theory, factorization theorems and parton distribution functions (PDFs). However, for the more exclusive description required for the estimation of experimental effects a reliable MCEG is essential.
Footnote 1: For a review see [1].
A key component of any MCEG for hadronic collisions is a _backward parton shower_ that links the short-distance subprocess to the incoming hadrons via an iterative parton branching procedure. For reasons of Monte Carlo efficiency, the shower starts at the high virtuality scale of the subprocess, with appropriate parton flavours and momentum fractions, and ends at the lower scale of hadron formation. For example, in the production of a \(Z^{0}\) boson at leading order, the parton showers should be initiated by a quark-antiquark pair of equal flavour with invariant mass within the \(Z^{0}\) line width. If the showers were generated forwards from the hadron scale, as is normally done in PDF evolution, then the efficiency for finding a pair of the same flavour with an appropriate invariant mass would be unacceptably low.
A special feature of the backward parton shower [2; 3] is that it must be "guided" by input PDFs, which are supposed to ensure that the ensemble of parton flavours and momentum fractions in the shower at any intermediate scale remains consistent with those PDFs. Compared to forward evolution, this implies modifications to both the probability of branching as a function of scale, and the distribution of momentum fractions within each branching. However, the branching process necessarily involves a sequence of _resolvable_ parton emissions, defined by some cutoffs, while the PDFs are normally taken from global fits that satisfy evolution equations [4; 5; 6] that contain no such cutoffs. This could give rise to systematic biases that, as far as we are aware, have not been studied so far and are the focus of the present paper.
In sect. 2 we present a general analysis of PDF evolution equations, not limited to QCD or any particular perturbative order but suited to the discussion of issues related to the resolvability of emissions. We pay particular attention to the ambiguities in the treatment of unresolved and virtual contributions, and the choices inherent in their resolution. Section 3 examines the backward MC showering process in this framework, in particular the key concept of the non-emission probability (NEP), which governs the evolution of the shower in a way supposedly consistent with a given set of guiding PDFs. We show that neither of the NEP expressions in current use is formally correct in the presence of cutoffs. However, we find that there is no fully satisfactory formulation of the NEP as long as the guiding PDFs satisfy the normal cutoff-independent evolution equations.
Section 4 applies the general approach of the preceding sections to the case relevant to the most widely-used MCEGs (before any matching or merging), namely that of leading-order QCD. We show results on the NEP formulations in current use, the improved expression derived in sect. 3, and their effects on MC backward evolution. The general conclusion is that, while the improved expression performs best, all formulations fail to achieve satisfactory consistency with the guiding cutoff-independent PDFs.
We therefore turn in sect. 5 to an alternative approach, in which the guiding PDFs obey evolution equations that incorporate the same cutoffs as the backward parton shower. We show that such PDFs can be made exactly consistent with the constraints of flavour and momentum conservation, and verify that the corresponding NEP ensures consistency between the guiding PDFs and the MC results. Of course, if this approach were implemented, the cutoff-dependent PDFs would be specific to the cutoffs employed in a particular MCEG, and would need to be extracted from dedicated global fits. Furthermore, in those
fits the factorization of PDFs and short-distance cross sections implies that the latter will also be modified by cutoff-dependent terms. In sect. 6 we derive a general expression for these cutoff corrections to the first nontrivial order in QCD, and illustrate its application to the processes of lepton pair and Higgs boson production. Finally in sect. 7 we summarize our main results and conclusions.
Appendix A contains a more detailed discussion of the relation between the evolution equations and the backward MC process. Appendix B presents a toy model in which all emissions are unresolvable, designed to illuminate the ambiguities and difficulties in defining the NEP.
## 2 PDF evolution equations
In this section, we recast the evolution equations for the PDFs in a form which is suited to a parton-shower Monte Carlo (MC)-like approach. The evolution variable \(\mu^{2}\) has canonical dimension of mass squared; its specific nature is not relevant here.
The starting point is the evolution equations [4; 5; 6], which we write as follows:
\[\frac{\partial F(x)}{\partial\log\mu^{2}}=\mathbb{O}\otimes_{x}F\,, \tag{1}\]
where
\[\mathbb{O}\otimes_{x}F=\int_{0}^{1}\frac{dz}{z}\,\mathbb{O}(z)\,F(x/z)\,, \tag{2}\]
with the understanding that \(F(x/z)=0\) for \(z<x\). We assume to be working in a \(d\equiv 1+2(N_{u}+N_{d})\)-dimensional flavour space, where \(F\) is a column vector whose \(d\) individual components \((F)_{i}\) are the PDFs \(f_{i}\) of the various partons, and \(\mathbb{O}\) is a \(d\times d\) matrix, whose elements in the \(\overline{\text{MS}}\) factorisation scheme are the splitting kernels; additional terms are present in a non-\(\overline{\text{MS}}\) factorisation scheme. Both \(F\) and \(\mathbb{O}\) are \(x\)-space objects, that depend on \(\mu^{2}\) as well; in the notation, either or both dependences may be included explicitly or understood. It is safe to assume, at least up to the NLO and in any factorisation scheme, that the most general form of \(\mathbb{O}\) is:
\[\mathbb{O}(z)=\left[\mathbb{A}(z)\right]_{+}+\mathbb{B}\,\delta(1-z)+\mathbb{ C}(z)\,, \tag{3}\]
where
\[\left(\mathbb{A}(z)\right)_{ij}=\delta_{ij}A_{i}(z)\,,\qquad\left(\mathbb{B} \right)_{ij}=\delta_{ij}B_{i}\,,\qquad\left(\mathbb{C}(z)\right)_{ij}=C_{ij} (z)\,, \tag{4}\]
with \(1\leq i,j\leq d\) the parton indices; note that \(\mathbb{C}\) is in general non-diagonal. \(A_{i}(z)\) and \(C_{ij}(z)\) are regular functions of \(z\), and \(B_{i}\) are constants in \(z\); all of them depend on \(\mu^{2}\). Typically, \(A_{i}(z)\) diverges when \(z\to 1\), and the plus prescription in eq. (3) regularises that divergence; any divergence at \(z\to 0\) is not regularised. Equation (3) encompasses one of the forms in which the NLO splitting kernels are usually written, namely:
\[\sum_{k=0}^{1}\left(\frac{\alpha(\mu^{2})}{2\pi}\right)^{k+1}\mathbb{P}^{[k]} (z)=\widetilde{\mathbb{A}}(z)\left[\frac{1}{1-z}\right]_{+}+\widetilde{ \mathbb{B}}\,\delta(1-z)+\mathbb{C}(z)\,, \tag{5}\]
with \(\widetilde{\mathbb{A}}(z)\) finite at \(z=1\). Indeed, it is a matter of applying the definition of the plus distribution to show that, _when_ the following relationships
\[\mathbb{A}(z)=\frac{\widetilde{\mathbb{A}}(z)}{1-z}\,,\qquad\mathbb{B}= \widetilde{\mathbb{B}}+\int_{0}^{1}dz\,\frac{\widetilde{\mathbb{A}}(z)- \widetilde{\mathbb{A}}(1)}{1-z} \tag{6}\]
hold, then the r.h.s.'s of eqs. (3) and (5) are identical to one another.
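As an aside, the action of \(\otimes_{x}\) in eq. (2) is straightforward to transcribe numerically once the kernel is a regular function. The following toy Python sketch is ours, with an illustrative unregularised \(q\to qg\)-type kernel and a valence-like test function; the hard upper cut in \(z\) anticipates the role of the resolution cutoffs introduced below.

```python
from scipy.integrate import quad

def convolve(O, F, x, zmax=1.0):
    """(O otimes_x F)(x) = int dz/z O(z) F(x/z); F(y) = 0 for y > 1 sets the lower limit to z = x."""
    return quad(lambda z: O(z) * F(x / z) / z, x, zmax)[0]

# Toy example (all choices ours): unregularised splitting kernel against a valence-like shape.
P = lambda z: (4.0 / 3.0) * (1.0 + z * z) / (1.0 - z)   # diverges at z -> 1
f = lambda y: y ** 0.5 * (1.0 - y) ** 3
print(convolve(P, f, 0.1, zmax=0.99))                   # finite only because z < 1 - epsilon
```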
In order to proceed, we introduce the following symbols:
\[\Theta^{\text{\tiny IN}}_{ij,z}=\Theta(\epsilon^{\text{\tiny L}}_{ij}<z<1- \epsilon^{\text{\tiny U}}_{ij})\,,\qquad\Theta^{\text{\tiny OUT}}_{ij,z}\equiv 1- \Theta^{\text{\tiny IN}}_{ij,z}=\Theta(z<\epsilon^{\text{\tiny L}}_{ij})+ \Theta(z>1-\epsilon^{\text{\tiny U}}_{ij})\,, \tag{7}\]
with \(1-\epsilon^{\text{\tiny L}}_{ij}-\epsilon^{\text{\tiny U}}_{ij}>0\), \(\epsilon^{\text{\tiny L}}_{ij}>0\), and \(\epsilon^{\text{\tiny U}}_{ij}>0\). The parameters \(\epsilon^{\text{\tiny L}}_{ij}\) and \(\epsilon^{\text{\tiny U}}_{ij}\) are flavour (and possibly scale) dependent cutoffs, which help to define an inner (\(\Theta^{\text{\tiny IN}}_{ij,z}\)) and an outer (\(\Theta^{\text{\tiny OUT}}_{ij,z}\)) region; the former (latter) will be associated with resolved (unresolved) emissions in the \(z\) space for the branching:
\[j(1)\;\longrightarrow\;i(z)+k(1-z)\quad\Longleftrightarrow\quad P_{ij}(z)\,. \tag{8}\]
Equation (7) then implies that \(\epsilon^{\text{\tiny L}}_{ij}\) and \(\epsilon^{\text{\tiny U}}_{ij}\) limit from below the fractional energy of parton \(i\) and recoil system \(k\), respectively. At the LO, the recoil system is a parton itself, unambiguously determined by \(i\) and \(j\), so that it may be denoted by \(k=j\ominus i\). This suggests introducing the \(1+N_{u}+N_{d}\) parameters:
\[\epsilon_{g}\,,\;\epsilon_{u}\,,\;\epsilon_{d}\,,\ldots\,, \tag{9}\]
and setting:
\[\epsilon^{\text{\tiny L}}_{ij}=\epsilon_{i}\,,\qquad\epsilon^{\text{\tiny U}} _{ij}=\epsilon_{j\ominus i}\,. \tag{10}\]
This implies that the lower bounds on the fractional energies depend solely on the individual parton identities, rather than on the splitting types. In keeping with what has been done so far, the quantities defined in eq. (7) can be arranged compactly in two matrices, \(\mathbb{T}^{\text{\tiny IN}}_{z}\) and \(\mathbb{T}^{\text{\tiny OUT}}_{z}\), whose elements are:
\[\left(\mathbb{T}^{\text{\tiny IN}}_{z}\right)_{ij}=\Theta^{\text{\tiny IN}}_{ ij,z}\,,\qquad\left(\mathbb{T}^{\text{\tiny OUT}}_{z}\right)_{ij}=\Theta^{ \text{\tiny OUT}}_{ij,z}\,. \tag{11}\]
For any function \(g(z)\) and pair of parton indices \((i,j)\), we can exploit the following identity:
\[\left[g(z)\right]_{+} = \left[g(z)\,\Theta^{\text{\tiny OUT}}_{ij,z}\right]_{+}+\left[g(z )\,\Theta^{\text{\tiny IN}}_{ij,z}\right]_{+} \tag{12}\] \[= \left[g(z)\,\Theta^{\text{\tiny OUT}}_{ij,z}\right]_{+}+\left(g(z )\,\Theta^{\text{\tiny IN}}_{ij,z}\right)+\left(-\int_{0}^{1}d\omega\,g(\omega )\Theta^{\text{\tiny IN}}_{ij,\omega}\right)\delta(1-z)\,,\]
and rewrite eq. (3) as follows:
\[\mathbb{O}(z)=\left[\mathbb{A}(z)\circ\mathbb{T}^{\text{\tiny OUT}}_{z}\right] _{+}+\mathbb{A}(z)\circ\mathbb{T}^{\text{\tiny IN}}_{z}+\overline{\mathbb{B}} \,\delta(1-z)+\mathbb{C}(z)\circ\mathbb{T}^{\text{\tiny OUT}}_{z}+\mathbb{C}( z)\circ\mathbb{T}^{\text{\tiny IN}}_{z}\,, \tag{13}\]
where by \(\circ\) we have denoted the element-by-element matrix multiplication, e.g.:
\[\left(\mathbb{A}\circ\mathbb{T}\right)_{ij}=\left(\mathbb{A}\right)_{ij}\left( \mathbb{T}\right)_{ij} \tag{14}\]
and:
\[\overline{\mathbb{B}}=\mathbb{B}-\int_{0}^{1}dz\,\mathbb{A}(z)\circ\mathbb{T}_{z }^{\text{\tiny IN}}\,. \tag{15}\]
For those operators for which eq. (6) holds, eq. (15) can be written in the equivalent form2:
Footnote 2: We point out that eq. (13) is unchanged. This implies, in particular, that \(\widetilde{\mathbb{A}}(z)/(1-z)\) is inside the plus prescription in the first term on the r.h.s..
\[\overline{\mathbb{B}}=\widetilde{\mathbb{B}}+\int_{0}^{1}dz\,\frac{ \widetilde{\mathbb{A}}(z)-\widetilde{\mathbb{A}}(1)}{1-z}\circ\mathbb{T}_{z }^{\text{\tiny OUT}}-\int_{0}^{1}dz\,\frac{\widetilde{\mathbb{A}}(1)\circ \mathbb{T}_{z}^{\text{\tiny IN}}}{1-z}\,. \tag{16}\]
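Identity (12), and the finiteness of each piece on its r.h.s., can be checked numerically against a smooth test function. The snippet below is a toy verification with illustrative choices of \(g\), the test function, and the cutoffs, none of which are taken from the text.

```python
from scipy.integrate import quad

g   = lambda z: (1.0 + z * z) / (1.0 - z)     # mimics A_i(z), divergent at z -> 1
phi = lambda z: z ** 2                        # smooth test function
eL, eU = 0.05, 0.05                           # toy epsilon^L_ij, epsilon^U_ij
IN  = lambda z: float(eL < z < 1.0 - eU)      # Theta^IN; Theta^OUT = 1 - IN
brk = [eL, 1.0 - eU]                          # integration break points at the cutoffs

# l.h.s.: [g]_+ tested on phi, i.e. the integral of g(z) * (phi(z) - phi(1))
lhs = quad(lambda z: g(z) * (phi(z) - phi(1.0)), 0.0, 1.0)[0]

# r.h.s. of eq. (12): the [g Theta^OUT]_+ piece, the plain Theta^IN piece, the delta piece
out_plus = quad(lambda z: g(z) * (1 - IN(z)) * (phi(z) - phi(1.0)), 0.0, 1.0, points=brk)[0]
in_plain = quad(lambda z: g(z) * IN(z) * phi(z), 0.0, 1.0, points=brk)[0]
delta    = -quad(lambda z: g(z) * IN(z), 0.0, 1.0, points=brk)[0] * phi(1.0)

print(lhs, out_plus + in_plain + delta)       # the two numbers agree
```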
With eq. (13), we can write the evolution equations as follows:
\[\frac{\partial F(x)}{\partial\log\mu^{2}}=\mathbb{O}\otimes_{x}F=\mathbb{W} \left[F\right](x)+\mathbb{Z}\left[F\right](x)+\overline{\mathbb{B}}F(x)\,, \tag{17}\]
where:
\[\mathbb{W}\left[F\right](x) = \left(\left[\mathbb{A}\circ\mathbb{T}^{\text{\tiny OUT}}\right] _{+}+\mathbb{C}\circ\mathbb{T}^{\text{\tiny OUT}}\right)\otimes_{x}F\,, \tag{18}\] \[\mathbb{Z}\left[F\right](x) = \left(\left[\mathbb{A}+\mathbb{C}\right]\circ\mathbb{T}^{\text{ \tiny IN}}\right)\otimes_{x}F\,. \tag{19}\]
With the scalar functions introduced in eq. (4), these are \(((F)_{i}=f_{i})\):
\[\left(\left[\mathbb{A}\circ\mathbb{T}^{\text{\tiny OUT}}\right]_{+}\otimes_{x}F\right)_{i} = \!\!\int_{0}^{1}dz\,A_{i}(z)\Theta_{ii,z}^{\text{\tiny OUT}}\left[\frac{\Theta(z\geq x)}{z}f_{i}\left(\frac{x}{z}\right)-f_{i}(x)\right], \tag{20}\] \[\left(\left(\mathbb{A}\circ\mathbb{T}^{\text{\tiny IN}}\right)\otimes_{x}F\right)_{i} = \!\!\int_{0}^{1}dz\,A_{i}(z)\Theta_{ii,z}^{\text{\tiny IN}}\,\frac{\Theta(z\geq x)}{z}f_{i}\left(\frac{x}{z}\right),\] (21) \[\left(\left(\mathbb{C}\circ\mathbb{T}^{\text{\tiny OUT}}\right)\otimes_{x}F\right)_{i} = \!\!\sum_{j}\int_{0}^{1}dz\,C_{ij}(z)\Theta_{ij,z}^{\text{\tiny OUT}}\,\frac{\Theta(z\geq x)}{z}f_{j}\left(\frac{x}{z}\right), \tag{22}\]

with the expression for \(\left(\mathbb{C}\circ\mathbb{T}^{\text{\tiny IN}}\right)\otimes_{x}F\) obtained from eq. (22) by replacing \(\Theta^{\text{\tiny OUT}}_{ij,z}\) with \(\Theta^{\text{\tiny IN}}_{ij,z}\).
By construction, the r.h.s. of eq. (17) is cutoff-independent. One can show that the contribution to \((\mathbb{W}\left[F\right](x))_{i}\) from the splitting \(j\to ik\) is power-suppressed when \(x<1-\epsilon_{j\ominus i}\). Overall, \((\mathbb{W}\left[F\right](x))_{i}\) cannot be power-suppressed when \(x>1-\epsilon\), with \(\epsilon=\min_{j}\epsilon_{j\ominus i}\), because in that region \((\mathbb{Z}\left[F\right](x))_{i}=0\), since for such \(x\) values one has \(\Theta_{ij,z}^{\text{\tiny IN}}\Theta(z\geq x)=0\) for any \(z\) and \(j\). Therefore, the cutoff dependence of \(\mathbb{W}\left[F\right](x)\) must cancel that of the \(\overline{\mathbb{B}}F(x)\) term, which is in general logarithmic (see e.g. eq. (15)). From a physical viewpoint, \(\mathbb{Z}\left[F\right]\) describes resolved (owing to \(\mathbb{T}^{\text{\tiny IN}}\)) real emissions with \(\max(x,\epsilon)\leq z\leq 1-\epsilon\), while \(\overline{\mathbb{B}}F(x)\) describes virtual emissions (being proportional to \(F(x)\)). The term \(\mathbb{W}\left[F\right]\) is a remainder3 that arises from the fact that the kernels of the evolution equations are not ordinary functions, but distributions that involve subtractions; from a physics viewpoint, it may be associated with branchings resolvable in the scale but not in Bjorken \(x\).
Footnote 3: This distinction between \(\mathbb{Z}\) and \(\mathbb{W}\) is not entirely precise, owing to the possible flavour dependence of the cutoffs, which implies that certain kinematical configurations are resolvable only for certain types of branchings. The underpinning physical picture is nevertheless correct.
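To see eq. (12) at work, the following minimal numerical sketch (our own illustration: the kernel \(g\), the test function \(h\), and the cutoff values are arbitrary choices, not quantities from the text) verifies that the OUT/IN split plus the endpoint term leaves the action of the plus distribution unchanged:

```python
import numpy as np
from scipy.integrate import quad

# Toy check of eq. (12): acting with the original plus distribution on a
# test function gives the same result as the sum of the OUT-region plus
# distribution, the plain IN-region term, and the endpoint subtraction.
CF = 4.0 / 3.0
g = lambda z: CF * (1.0 + z * z) / (1.0 - z)   # LO-like kernel, singular at z = 1
h = lambda z: np.sqrt(z) * (2.0 - z)           # smooth test function, h(1) = 1
epsL, epsU = 0.05, 0.10                        # arbitrary lower/upper cutoffs

# l.h.s.: int dz [g(z)]_+ h(z) = int dz g(z) (h(z) - h(1))
lhs = quad(lambda z: g(z) * (h(z) - h(1.0)), 0.0, 1.0)[0]

# r.h.s., term by term, with OUT = [0, epsL) U (1-epsU, 1]
out_plus = (quad(lambda z: g(z) * (h(z) - h(1.0)), 0.0, epsL)[0]
            + quad(lambda z: g(z) * (h(z) - h(1.0)), 1.0 - epsU, 1.0)[0])
in_plain = quad(lambda z: g(z) * h(z), epsL, 1.0 - epsU)[0]
endpoint = -h(1.0) * quad(g, epsL, 1.0 - epsU)[0]

print(lhs, out_plus + in_plain + endpoint)     # the two numbers agree
```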
With MC applications in mind, eq. (17) can be further manipulated by writing:
\[\overline{\mathbb{B}}(\mu^{2})=\frac{\mu^{2}}{\mathbb{S}(\mu^{2})}\,\frac{ \partial\mathbb{S}(\mu^{2})}{\partial\mu^{2}}\,, \tag{23}\]
with
\[\left(\mathbb{S}(\mu^{2})\right)_{ij}=\delta_{ij}S_{i}(\mu^{2})\,,\qquad\frac{1}{ \mathbb{S}}\equiv(\mathbb{S})^{-1}\quad\Longrightarrow\quad\left(\frac{1}{ \mathbb{S}}\right)_{ij}=\delta_{ij}\frac{1}{S_{i}}\,, \tag{24}\]
and
\[S_{i}(\mu^{2})=\exp\left[\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\overline{B}_{i}(\kappa^{2})\right]\quad\Longleftrightarrow\quad\mathbb{S}(\mu^{2})=\exp\left[\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\overline{\mathbb{B}}(\kappa^{2})\right]. \tag{25}\]
In other words, \(\mathbb{S}\) is a diagonal matrix that collects the Sudakov form factors. As such, it may seem that the sign in the exponent in eq. (25) is the opposite w.r.t. the standard one, but in fact this is not the case, as can be understood from eq. (15). With eq. (23), eq. (17) can be cast as follows:
\[\frac{\partial}{\partial\mu^{2}}\left(\frac{1}{\mathbb{S}(\mu^{2})}F(\mu^{2}) \right)=\frac{1}{\mu^{2}\,\mathbb{S}(\mu^{2})}\left(\mathbb{W}\left[F\right] (\mu^{2})+\mathbb{Z}\left[F\right](\mu^{2})\right), \tag{26}\]
which can be put in an integrated form, thus:
\[F(\mu^{2})=\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F(\mu_{0}^{2} )+\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{\mathbb{S}( \mu^{2})}{\mathbb{S}(\kappa^{2})}\left(\mathbb{W}\left[F\right](\kappa^{2})+ \mathbb{Z}\left[F\right](\kappa^{2})\right), \tag{27}\]
or, alternatively, thus4:
Footnote 4: The r.h.s. of eq. (28) features the multiplication of two column vectors, which is meant as an element-by-element multiplication. Since no confusion is possible with the multiplications that feature the transpose of a column vector, no special symbol has been introduced here.
\[\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,\frac{F(\mu_{0}^{2})}{F( \mu^{2})}=\exp\left[-\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2 }}\frac{1}{F(\kappa^{2})}\left(\mathbb{W}\left[F\right](\kappa^{2})+\mathbb{Z }\left[F\right](\kappa^{2})\right)\right]\,. \tag{28}\]
We stress again that eqs. (27) and (28) are fully equivalent to eq. (17) but, being in an integrated form, they also include the information on the initial conditions (\(F(\mu_{0}^{2})\)). In turn, they are all equivalent to the original evolution equation, eq. (1). Thus, in spite of the fact that they feature cutoff-dependent kernels (\(\mathbb{B}\), \(\mathbb{Z}\), and \(\mathbb{W}\)), the PDFs that solve them are cutoff-independent. In fact, if one were interested only in determining the PDFs, the solution of eq. (1) (best obtained in Mellin space) would be much more straightforward than that of eqs. (27) or (28). The primary interest of the latter equations is in the fact that they are expressed in terms of the same quantities that are used in initial-state parton showers; as such, they can be regarded as giving consistency conditions among these quantities that initial-state parton showers (which assume knowledge of the PDFs) must respect. We shall show later that, in the context of the current approaches used in MCs, this is not quite the case.
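For orientation, eq. (25) translates into code in a few lines; the sketch below (with a toy, hypothetical \(\overline{B}_{i}\) mimicking a running coupling, not a QCD result) evaluates the Sudakov factor normalised to unity at \(\mu_{0}^{2}\):

```python
from math import exp, log
from scipy.integrate import quad

# Minimal sketch of eq. (25): the Sudakov form factor as the exponentiated
# scale integral of the virtual term Bbar_i(kappa^2).
def sudakov(mu2, mu02, Bbar):
    # d kappa^2 / kappa^2 = d log kappa^2, so integrate in t = log kappa^2
    val, _ = quad(lambda t: Bbar(exp(t)), log(mu02), log(mu2))
    return exp(val)

# Toy virtual term: negative, shrinking with the scale like alpha_s(k2);
# the constants are illustrative (Lambda = 0.2 GeV).
Bbar_toy = lambda k2: -0.1 / log(k2 / 0.04)
print(sudakov(100.0**2, 10.0**2, Bbar_toy))    # S(mu^2) with S(mu0^2) = 1
```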
### Ambiguities and choices
One of the ingredients of the manipulation of the PDF evolution equations is the definition of the Sudakov form factors. We point out that, by means of eq. (23), we have defined them by exponentiating the entire virtual term that appears in eq. (17). This is not
mandatory, and in fact it may lead to problems (see e.g. sect. 5). In the context of a more flexible approach, we start by writing the rightmost term on the r.h.s. of eq. (17) as follows:
\[\overline{\mathbb{B}}^{\textsc{OUT}}F(x)+\overline{\mathbb{B}}^{\textsc{IN}}F (x)\,, \tag{29}\]
for any two quantities \(\overline{\mathbb{B}}^{\textsc{OUT}}\) and \(\overline{\mathbb{B}}^{\textsc{IN}}\) such that:
\[\overline{\mathbb{B}}=\overline{\mathbb{B}}^{\textsc{OUT}}+\overline{ \mathbb{B}}^{\textsc{IN}}\,. \tag{30}\]
Then, we define the Sudakov factors thus
\[\mathbb{S}(\mu^{2})=\exp\left[\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{ \kappa^{2}}\overline{\mathbb{B}}^{\textsc{IN}}(\kappa^{2})\right], \tag{31}\]
rather than with eq. (25), and we include the contribution from \(\overline{\mathbb{B}}^{\textsc{OUT}}\) in the \(\mathbb{W}[F]\) functional. In order to do that, and also in view of future use (see also appendix A), it turns out to be convenient to introduce the two evolution operators:
\[\mathbb{O}^{\textsc{OUT}}(z) = \left[\mathbb{A}(z)\circ\mathbb{T}_{z}^{\textsc{OUT}}\right]_{ +}+\overline{\mathbb{B}}^{\textsc{OUT}}\,\delta(1-z)+\mathbb{C}(z)\circ \mathbb{T}_{z}^{\textsc{OUT}}\,, \tag{32}\] \[\mathbb{O}^{\textsc{IN}}(z) = \mathbb{A}(z)\circ\mathbb{T}_{z}^{\textsc{IN}}+\overline{ \mathbb{B}}^{\textsc{IN}}\,\delta(1-z)+\mathbb{C}(z)\circ\mathbb{T}_{z}^{ \textsc{IN}}\,, \tag{33}\]
which, loosely speaking, account for emissions in the outer (unresolved) and inner (resolved) regions, respectively. By construction (see eq. (13)):
\[\mathbb{O}(z)=\mathbb{O}^{\textsc{OUT}}(z)+\mathbb{O}^{\textsc{IN}}(z)\quad \Longrightarrow\quad\frac{\partial F(x)}{\partial\log\mu^{2}}=\mathbb{O}^{ \textsc{OUT}}\otimes_{x}F+\mathbb{O}^{\textsc{IN}}\otimes_{x}F\,, \tag{34}\]
and
\[\mathbb{W}\left[F\right](x) = \mathbb{O}^{\textsc{OUT}}\otimes_{x}F \tag{35}\] \[= \left(\left[\mathbb{A}\circ\mathbb{T}^{\textsc{OUT}}\right]_{+}+ \mathbb{C}\circ\mathbb{T}^{\textsc{OUT}}\right)\otimes_{x}F+\overline{ \mathbb{B}}^{\textsc{OUT}}F(x)\,,\] (36) \[\mathbb{Z}\left[F\right](x)+\overline{\mathbb{B}}^{\textsc{IN}}F (x) = \mathbb{O}^{\textsc{IN}}\otimes_{x}F\,. \tag{37}\]
As was anticipated, owing to eq. (30) the expression of \(\mathbb{W}[F]\) in eq. (36) is in general not the same as that in eq. (18), while that of \(\mathbb{Z}[F]\) is still given by eq. (19). The crucial thing is that, by taking into account the redefinition of the Sudakov factor and of the \(\mathbb{W}[F]\) functional, the integrated form of the evolution equation is still given by eq. (27) or eq. (28).
While eq. (29) is so far largely arbitrary, given the interpretation of \(\mathbb{W}\) it is wise to require that:
\[\lim_{\epsilon\to 0}\overline{\mathbb{B}}^{\textsc{OUT}}=0\,,\qquad\epsilon=\{ \epsilon_{ij}^{\textsc{L}},\epsilon_{ij}^{\textsc{U}}\}_{ij}\,. \tag{38}\]
This constraint implies that the Sudakov form factors that one would obtain by choosing different \(\overline{\mathbb{B}}^{\textsc{OUT}}\) would differ from one another by terms suppressed by powers of the cutoffs. In terms of the quantities that appear in eq. (3), the above can be rewritten by exploiting eq. (15), thus:
\[\overline{\mathbb{B}}^{\textsc{OUT}}=\mathbb{B}^{\textsc{OUT}}\,,\qquad \overline{\mathbb{B}}^{\textsc{IN}}=\mathbb{B}^{\textsc{IN}}-\int_{0}^{1}dz \,\mathbb{A}(z)\circ\mathbb{T}_{z}^{\textsc{IN}}\,, \tag{39}\]
with
\[\mathbb{B}=\mathbb{B}^{\textsc{out}}+\mathbb{B}^{\textsc{in}}\,,\qquad\lim_{ \epsilon\to 0}\mathbb{B}^{\textsc{out}}=0\,. \tag{40}\]
We stress that associating the _entire_ second term on the r.h.s. of eq. (15) with \(\overline{\mathbb{B}}^{\textsc{in}}\) is merely a sensible choice, but a choice nevertheless. For example, we could have associated a \((1-\epsilon)\) fraction of it with \(\overline{\mathbb{B}}^{\textsc{in}}\), and the remaining \(\epsilon\) fraction with \(\overline{\mathbb{B}}^{\textsc{out}}\). In the following, we shall not exploit this option, and always employ eqs. (39) and (40), so that the flexibility in choosing \(\overline{\mathbb{B}}^{\textsc{out}}\) will be entirely controlled by the choice of \(\mathbb{B}^{\textsc{out}}\).
In the cases where eq. (6) holds, eqs. (32) and (33) become:
\[\mathbb{O}^{\textsc{out}}(z) = \left[\frac{\widetilde{\mathbb{A}}(z)}{1-z}\circ\mathbb{T}^{ \textsc{out}}_{z}\right]_{+}+\overline{\mathbb{B}}^{\textsc{out}}\,\delta(1-z )+\mathbb{C}(z)\circ\mathbb{T}^{\textsc{out}}_{z}\,, \tag{41}\] \[\mathbb{O}^{\textsc{in}}(z) = \frac{\widetilde{\mathbb{A}}(z)}{1-z}\circ\mathbb{T}^{\textsc{ in}}_{z}+\overline{\mathbb{B}}^{\textsc{in}}\,\delta(1-z)+\mathbb{C}(z) \circ\mathbb{T}^{\textsc{in}}_{z}\,, \tag{42}\]
where, by taking eq. (16) into account:
\[\overline{\mathbb{B}}^{\textsc{out}}=\widetilde{\mathbb{B}}^{\textsc{out}}+ \int_{0}^{1}dz\,\frac{\widetilde{\mathbb{A}}(z)-\widetilde{\mathbb{A}}(1)}{1 -z}\circ\mathbb{T}^{\textsc{out}}_{z}\,,\qquad\overline{\mathbb{B}}^{\textsc{ in}}=\widetilde{\mathbb{B}}^{\textsc{in}}-\int_{0}^{1}dz\,\frac{ \widetilde{\mathbb{A}}(1)\circ\mathbb{T}^{\textsc{in}}_{z}}{1-z}\,, \tag{43}\]
with
\[\widetilde{\mathbb{B}}=\widetilde{\mathbb{B}}^{\textsc{out}}+\widetilde{ \mathbb{B}}^{\textsc{in}}\,,\qquad\lim_{\epsilon\to 0}\widetilde{\mathbb{B}}^{ \textsc{out}}=0\,. \tag{44}\]
Here, the same remark made after eq. (40) applies: namely, the association of the two rightmost terms of eq. (16) with \(\overline{\mathbb{B}}^{\textsc{out}}\) and \(\overline{\mathbb{B}}^{\textsc{in}}\), respectively, as is done in eq. (43) is a choice we shall always adhere to, and for the operators of this form the flexibility in choosing \(\overline{\mathbb{B}}^{\textsc{out}}\) will be controlled by the choice of \(\widetilde{\mathbb{B}}^{\textsc{out}}\).
There is an easy way to enforce the conditions in eqs. (40) and (44) that is, once again, quite arbitrary, but that allows an easy interpretation from a physical viewpoint, and leads to the Sudakov form factors which are typically adopted at the LO in QCD (see sect. 4). Namely, one finds functions \(b_{ij}(\omega)\) and \(\tilde{b}_{ij}(\omega)\) which are bounded from above and below, and are such that:
\[B_{j}=\int_{0}^{1}d\omega\sum_{i}b_{ij}(\omega)\,,\qquad\widetilde{B}_{j}= \int_{0}^{1}d\omega\sum_{i}\tilde{b}_{ij}(\omega)\,, \tag{45}\]
and defines:
\[B_{j}^{\textsc{out}}=\int_{0}^{1}d\omega\sum_{i}b_{ij}(\omega)\,\Theta^{\textsc{out}}_{ij,\omega}\,,\qquad\widetilde{B}_{j}^{\textsc{out}}=\int_{0}^{1}d\omega\sum_{i}\tilde{b}_{ij}(\omega)\,\Theta^{\textsc{out}}_{ij,\omega}\,. \tag{46}\]
Note that in the case where eq. (6) holds, \(\mathbb{B}\) (and therefore the functions \(b_{ij}(\omega)\)) need not necessarily be introduced. If one still finds it convenient to do so (e.g. to use both the form of eq. (3) and that of eq. (5)), eqs. (6) and (45) imply:
\[\int_{0}^{1}d\omega\,b_{ii}(\omega)=\int_{0}^{1}d\omega\,\tilde{b}_{ii}(\omega )+\int_{0}^{1}dz\,\frac{\widetilde{A}_{i}(z)-\widetilde{A}_{i}(1)}{1-z}\,. \tag{47}\]
Clearly, the easiest way to achieve this and be consistent with eqs. (39)-(44) is to work with a local version of eq. (47), namely:
\[b_{ii}(z)=\tilde{b}_{ii}(z)+\frac{\widetilde{A}_{i}(z)-\widetilde{A}_{i}(1)}{1-z}\,. \tag{48}\]
We anticipate that at the LO in QCD the choice of the \(b_{ij}(\omega)\) and \(\tilde{b}_{ij}(\omega)\) functions along the lines presented above leads to Sudakov form factors expressed as integrals of the LO splitting kernels (see sect. 4 for more details). However, this discussion should render it clear that, even at the LO in QCD, this is a choice that is not dictated by any fundamental principle, but by convenience and ease of interpretation.
The separation of the virtual terms in eq. (29) stemming from eq. (30) encompasses the case where such a separation is not considered. Even after making a definite choice for the cutoffs, one can continuously pass from one scenario to the other by means of the replacements5
Footnote 5: Although in general the parameter \(\lambda\) can be flavour-dependent, for our purposes such a dependence can be neglected.
\[\overline{\mathbb{B}}^{\textsc{\tiny{OUT}}}\;\longrightarrow\;\lambda \overline{\mathbb{B}}^{\textsc{\tiny{OUT}}}\,,\qquad\overline{\mathbb{B}}^{ \textsc{\tiny{IN}}}\;\longrightarrow\;\overline{\mathbb{B}}-\lambda\overline {\mathbb{B}}^{\textsc{\tiny{OUT}}}\,, \tag{49}\]
with \(0\leq\lambda\leq 1\) in all quantities that feature a dependence on \(\overline{\mathbb{B}}^{\textsc{\tiny{OUT}}}\) and/or \(\overline{\mathbb{B}}^{\textsc{\tiny{IN}}}\).
## 3 Monte Carlo backward evolution
When an MC generates initial-state parton showers, the PDFs are thought to be given. They are employed to "guide" the backward evolution, and thus increase the efficiency of the latter. One usually assumes that consistency demands that the longitudinal momentum left after all branchings have occurred be distributed according to the given PDFs (this identification holds in a statistical sense; it is exact only after an infinite number of showers have been carried out). However, since only _resolved_ branchings (i.e., those with \(\epsilon<z<1-\epsilon\)) can be generated, the identification above can be true only in the resolved region. In fact, as we shall show, even in the resolved region MCs are generally not able to reconstruct the PDFs. Ultimately, this arises from the fact that the PDF evolution equations are expressed as convolution integrals, and thus the derivative w.r.t. the scale of the PDF at a given \(x\) receives contributions from all \(z\)'s, with \(x\leq z\leq 1\). In other words, the unresolved region feeds into the resolved region as well as itself. This is unavoidable: for PDF evolution, the separation between the resolved and unresolved regions is totally arbitrary, and has no bearing on the final form of the PDFs.
Conversely, MCs cannot function without a clear separation between resolved and unresolved regions, i.e. without the introduction of cutoffs. As is well known, this leads to the possibility of generating showers by means of an iterative Markovian process, one of whose key ingredients is the inversion of the so-called non-emission probability (NEP henceforth), which gives one the scale at which the next parton branching occurs. The usual argument adopted for deriving the NEP associated with initial-state emissions exploits a partonic picture of the PDFs, whereby these "count" the number of partons at any
given values of the Bjorken \(x\) and scale. As was said before, one identifies the \(\mathbb{Z}\) and \(\mathbb{W}\) contributions to the PDF evolution as associated with resolvable branchings and with branchings resolvable in \(\mu\) but not in \(z\), respectively. Therefore, the number of partons of flavour \(i\) that do not undergo branchings of any type in the range \((\mu_{0}^{2},\mu^{2})\) is equal to:
\[\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\,f_{i}(x,\mu_{0}^{2})\,, \tag{3.1}\]
while that of partons that either do not branch, or branch in a manner unresolvable in \(x\), is equal to:
\[\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\,f_{i}(x,\mu_{0}^{2})+\int_{\mu_{0}^{ 2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{S_{i}(\mu^{2})}{S_{i}(\kappa^ {2})}\left(\mathbb{W}\left[F\right]\right)_{i}(x,\kappa^{2})\,. \tag{3.2}\]
The NEP is defined as the fraction of partons that do not branch in a resolvable manner between any two scales. The difference between forward and backward evolution is simply the reference relative to which that fraction is measured, because the elementary branching mechanism must not be affected by the direction of the evolution. For an evolution in the range \((\mu_{0}^{2},\mu^{2})\), if the evolution is forwards (backwards) the reference is \(f_{i}(x,\mu_{0}^{2})\) (\(f_{i}(x,\mu^{2})\)). Thus, eq. (3.2) leads to the non-emission probabilities for the forward and backward evolution of a parton of type \(i\),
\[\text{Forward:}\qquad\text{NEP}_{i} =\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}+\frac{1}{f_{i}(x,\mu_{0 }^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{S_{i }(\mu^{2})}{S_{i}(\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i}(x, \kappa^{2}), \tag{3.3}\] \[\text{Backward:}\qquad\text{NEP}_{i} =\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\,\frac{f_{i}(x,\mu_{0}^ {2})}{f_{i}(x,\mu^{2})}+\frac{1}{f_{i}(x,\mu^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2 }}\frac{d\kappa^{2}}{\kappa^{2}}\frac{S_{i}(\mu^{2})}{S_{i}(\kappa^{2})}\left( \mathbb{W}\left[F\right]\right)_{i}(x,\kappa^{2}). \tag{3.4}\]
Here we are concerned with the backward case, which will henceforth always be implied. Then from eqs. (3.4) and (2.27) one also obtains:
\[\text{NEP}_{i}=1-\frac{1}{f_{i}(x,\mu^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2}} \frac{d\kappa^{2}}{\kappa^{2}}\frac{S_{i}(\mu^{2})}{S_{i}(\kappa^{2})}\left( \mathbb{Z}\left[F\right]\right)_{i}(x,\kappa^{2})\,, \tag{3.5}\]
consistently with the meaning of the \(\mathbb{Z}[F]\) functional. In a backward evolution, which proceeds from larger to smaller scales, starting from a given \(\mu^{2}\) one obtains the "next" scale \(\mu_{0}^{2}<\mu^{2}\) by solving for \(\mu_{0}^{2}\) the equation
\[r=\text{NEP}_{i}\,, \tag{3.6}\]
with \(0<r<1\) a uniform random number, and \(i\) given. After selecting the branching channel and its momentum fraction, the procedure is iterated until a \(\mu_{0}^{2}\) value is obtained that is smaller than some pre-defined threshold (the so-called hadronization scale). However, MC event generators (MCEGs) do not literally solve eq. (3.6), but either [2]
\[r=\text{NEP}_{i}^{\text{(R)}}\equiv\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})} \,\frac{f_{i}(x,\mu_{0}^{2})}{f_{i}(x,\mu^{2})}\,, \tag{3.7}\]
or [3]
\[r=\text{NEP}_{i}^{\text{(E)}}\equiv\exp\left[-\int_{\mu_{0}^{2}}^{\mu^{2}} \frac{d\kappa^{2}}{\kappa^{2}}\frac{1}{f_{i}(x,\kappa^{2})}\left(\mathbb{Z} \left[F\right]\right)_{i}(x,\kappa^{2})\right]\,, \tag{3.8}\]
where the superscript R or E indicates that a ratio or an exponential approximation for the NEP has been used, respectively. The crucial thing, which follows directly from the evolution equations as given in eq. (27) or eq. (28), is that:
\[\text{if}\qquad\mathbb{W}\left[F\right]=0\qquad\text{then}\qquad\text{NEP}_{i}=\text{NEP}_{i}^{\text{(R)}}=\text{NEP}_{i}^{\text{(E)}}\,. \tag{3.9}\]
Thus, in the resolved region the solution of eq. (3.6) coincides with that of eqs. (3.7) or (3.8) up to terms suppressed by powers of the cutoffs (since in that region \(\mathbb{W}\) vanishes with the cutoffs). This is the reason why, in standard current practice, eq. (3.7) and eq. (3.8) are considered equivalent to one another: effects suppressed by powers of the cutoffs are systematically neglected. This is in fact a dangerous position to take, given that MC cutoffs are often not particularly small, their effects can accumulate over the course of evolution, and there is no other mechanism that forces \(\mathbb{W}\) to vanish bar the vanishing of the cutoffs.
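For illustration, the next-scale selection of eq. (3.7) can be sketched as a one-dimensional root finding; in the following, `sud` and `pdf` are hypothetical placeholder callables for the Sudakov factor and the guiding PDF, and monotonicity of \(\text{NEP}^{(\text{R})}\) in \(\mu_{0}^{2}\) is assumed (the text notes below that this may fail at large \(x\)):

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the next-scale selection based on eq. (3.7): draw a uniform
# random number r and solve r = NEP^(R)(mu0^2) for the next branching scale.
def next_scale(x, mu2, mu2_min, sud, pdf, rng):
    r = rng.uniform()
    nep = lambda mu02: sud(mu2) / sud(mu02) * pdf(x, mu02) / pdf(x, mu2)
    if r < nep(mu2_min):       # next branching would lie below the IR cutoff:
        return None            # the backward shower terminates
    # nep(mu2) = 1 >= r and nep(mu2_min) <= r, so a root exists in between
    return brentq(lambda mu02: nep(mu02) - r, mu2_min, mu2)
```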
It is therefore instructive to see how the three NEP expressions considered above differ from each other when power-suppressed cutoff effects are not neglected. We start by considering the probability distribution of the scale of the next branching which, according to eqs. (3.6), (3.7), and (3.8), is given by the derivative w.r.t. \(\mu_{0}^{2}\) of the respective NEP. By direct computation, and by employing the evolution equations (27) and (28), we obtain:
\[\frac{\partial}{\partial\log\mu_{0}^{2}}\,\text{NEP}_{i}=\frac{1}{f_{i}(x,\mu^{2})}\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\left(\mathbb{Z}\left[F\right]\right)_{i}(x,\mu_{0}^{2})\,, \tag{3.10}\] \[\frac{\partial}{\partial\log\mu_{0}^{2}}\,\text{NEP}_{i}^{\text{(R)}}=\frac{1}{f_{i}(x,\mu^{2})}\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\Big{[}\big{(}\mathbb{W}\left[F\right]\big{)}_{i}(x,\mu_{0}^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)}_{i}(x,\mu_{0}^{2})\Big{]}, \tag{3.11}\] \[\frac{\partial}{\partial\log\mu_{0}^{2}}\,\text{NEP}_{i}^{\text{(E)}}=\frac{1}{f_{i}(x,\mu^{2})}\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\left(\mathbb{Z}\left[F\right]\right)_{i}(x,\mu_{0}^{2})\,\exp\!\left[\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{1}{f_{i}(x,\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i}(x,\kappa^{2})\right]. \tag{3.12}\]
Equation (3.10), since it factors out \(\mathbb{Z}\), which is non-null only for resolved emissions, shows that \(\text{NEP}_{i}\) is consistent with the requirement that the NEP be associated with the fraction of partons that do not branch in a resolvable manner. This may seem to be the case also for \(\text{NEP}_{i}^{\text{(E)}}\), but in fact the exponentiated \(\mathbb{W}\) term in eq. (3.12) introduces a spurious extra cutoff dependence w.r.t. the evolution generated by means of \(\text{NEP}_{i}\). Finally, in eq. (3.11) the \(\mathbb{W}\) and \(\mathbb{Z}\) contributions are on the same footing: this is because, as the comparison between eqs. (3.1) and (3.2) shows, \(\text{NEP}_{i}^{\text{(R)}}\) is actually the NEP for no branchings, regardless of whether they are resolved or unresolved in \(x\).
In appendix A we discuss in detail the implications of eqs. (3.10)-(3.12) for the requirement that MC backward evolution allows one to reconstruct the PDFs given in input to the parton shower. The bottom line is that, in practice, such a reconstruction always fails. It can be made to _formally_ succeed with \(\text{NEP}_{i}^{\text{(R)}}\), while if \(\text{NEP}_{i}\) is adopted one can reconstruct PDFs where all non-resolved contributions are consistently neglected; the same is true for \(\text{NEP}_{i}^{\text{(E)}}\) if a branching-by-branching reweighting is applied.
The above suggests that \(\text{NEP}_{i}\) and \(\text{NEP}_{i}^{\text{(E)}}\) are closer to each other than either is to \(\text{NEP}_{i}^{\text{(R)}}\). This can also be seen in another way, by considering the differences between any
two of these quantities. From eqs. (3.10) and (3.11) we obtain:
\[{\rm NEP}_{i}^{\rm(R)}-{\rm NEP}_{i}=-\frac{S_{i}(\mu^{2})}{f_{i}(\mu^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{1}{S_{i}(\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i}(\kappa^{2})\equiv\mathcal{O}(\alpha_{S})\,, \tag{3.13}\]
whereas from eqs. (3.10) and (3.12):
\[{\rm NEP}_{i}^{\rm(E)}-{\rm NEP}_{i}\] \[\quad=\frac{S_{i}(\mu^{2})}{f_{i}(\mu^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}S_{i}(\kappa^{2})}\left(\frac{f_{i}(\mu_{0}^{2})}{f_{i}(\kappa^{2})}\frac{S_{i}(\kappa^{2})}{S_{i}(\mu_{0}^{2})}-1\right)\left(\mathbb{W}\left[F\right]\right)_{i}(\kappa^{2})\] \[\quad\quad\quad+\frac{1}{2}\,\frac{f_{i}(\mu_{0}^{2})}{f_{i}(\mu^{2})}\,\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\left(\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{1}{f_{i}(\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i}(\kappa^{2})\right)^{2}+\ldots\] \[\quad=-\frac{S_{i}(\mu^{2})}{f_{i}(\mu^{2})}\,\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}f_{i}(\kappa^{2})}\int_{\mu_{0}^{2}}^{\kappa^{2}}\frac{d\rho^{2}}{\rho^{2}S_{i}(\rho^{2})}\Big{(}\big{(}\mathbb{W}\left[F\right]\big{)}_{i}(\rho^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)}_{i}(\rho^{2})\Big{)}\big{(}\mathbb{W}\left[F\right]\big{)}_{i}(\kappa^{2})\] \[\quad\quad\quad+\frac{1}{2}\,\frac{f_{i}(\mu_{0}^{2})}{f_{i}(\mu^{2})}\,\frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\left(\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{1}{f_{i}(\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i}(\kappa^{2})\right)^{2}+\ldots \tag{3.14}\] \[\equiv\mathcal{O}(\alpha_{S}^{2})\,, \tag{3.15}\]
where the ellipsis represents terms with three or more \(\mathbb{W}\) terms, and we have used eqs. (2.27) and (2.28). The powers of \(\alpha_{S}\) in eqs. (3.13) and (3.15) stem from having regarded both the PDFs and the Sudakovs as quantities of perturbative \(\mathcal{O}(1)\), while both \(\mathbb{W}\) and \(\mathbb{Z}\) are of \(\mathcal{O}(\alpha_{S})\) (see eqs. (2.18) and (2.19)).
We finally note that, when power-suppressed effects are not neglected, the simple probabilistic interpretation upon which MCs rely to perform initial-state backward evolution may lose validity. In all cases, this can be seen to come from the fact that \(\mathbb{W}[F]\) has no definite sign. Thus, from eq. (3.11) one sees that \({\rm NEP}_{i}^{\rm(R)}\) is not necessarily monotonic, and from eq. (3.4) that \({\rm NEP}_{i}\) is not necessarily positive. In both cases, this implies that, in some regions of the phase space (typically, at large Bjorken \(x\)), these NEPs are actually not cumulative probability distributions, and therefore that the solution of eq. (3.6) or eq. (3.7) may not exist, or may not be unique. As far as \({\rm NEP}_{i}^{\rm(E)}\) is concerned, it is positive definite, monotonic, and bounded by one; however, as was discussed in relation to eq. (3.12), its physical interpretation is unclear.
We conclude by remarking that, while \({\rm NEP}_{i}\) may turn out to be negative, it generally is positive. One can in fact turn the requirement that it be positive into a tool to determine the cutoffs in a physically-meaningful manner, in the sense of limiting the impact of non-resolvable emissions to an extent that allows one to recover a probabilistic interpretation.
Another way to approach the problem is to acknowledge the fact that PDFs and initial-state parton showers are inherently incompatible at some level, and to construct MC-specific PDFs by means of which all issues are removed _ab initio_. This option will be discussed in sect. 5.
## 4 The LO QCD case
The general approach of sects. 2 and 3 can be applied to the case which is currently the most relevant to MC simulations, namely that where only the LO evolution kernels are considered. In order to simplify our discussion, we assume all quarks to be massless, and ignore complications due to the presence of mass thresholds; thus, we shall not need to specify the individual flavours.
We denote the LO kernels, i.e. the elements of \(\mathbb{P}^{[0]}\) in eq. (5), as follows6:
Footnote 6: Bearing in mind that at the LO there are no \(q\bar{q}\) kernels, in the notation we need not distinguish quarks and antiquarks.
\[P_{qq}(z) = C_{F}\left(\frac{1+z^{2}}{1-z}\right)_{+}\,, \tag{4.1}\] \[P_{gq}(z) = C_{F}\,\frac{1+(1-z)^{2}}{z}\,,\] (4.2) \[P_{qg}(z) = T_{F}\left(z^{2}+(1-z)^{2}\right)\,,\] (4.3) \[P_{gg}(z) = 2C_{A}\left(\frac{z}{(1-z)_{+}}+\frac{1-z}{z}+z(1-z)\right)+\gamma(g)\delta(1-z)\,, \tag{4.4}\]
with \((N_{ F}=N_{u}+N_{d})\):
\[\gamma(g)=\frac{11C_{A}-4T_{F}N_{F}}{6}\,. \tag{4.5}\]
We also denote by \(\hat{P}_{ij}\) the ordinary function obtained from the kernels \(P_{ij}\) above by discarding the endpoint contributions (i.e. by turning plus distributions into ordinary functions, and by ignoring contributions proportional to \(\delta(1-z)\)). From eqs. (4.1)-(4.4) one can read off the quantities introduced in eqs. (4) and (5), since at this order:
\[\big{(}\mathbb{O}(z)\big{)}_{ij}=\frac{\alpha_{S}}{2\pi}\Big{(}\mathbb{P}^{[0]}\Big{)}_{ij}\equiv\frac{\alpha_{S}}{2\pi}\,P_{ij}(z)\,. \tag{4.6}\]
Thus:
\[\frac{2\pi}{\alpha_{S}}\,A_{q}(z)=C_{F}\,\frac{1+z^{2}}{1-z}\,,\quad B_{q}=0\,,\quad C_{qq}(z)=0\,, \tag{4.7}\] \[\frac{2\pi}{\alpha_{S}}\,C_{gq}(z)=C_{F}\,\frac{1+(1-z)^{2}}{z}\,,\] (4.8) \[\frac{2\pi}{\alpha_{S}}\,C_{qg}(z)=T_{F}\left(z^{2}+(1-z)^{2}\right)\,,\] (4.9) \[\frac{2\pi}{\alpha_{S}}\,\widetilde{A}_{g}(z)=2C_{A}\,z\,,\quad\frac{2\pi}{\alpha_{S}}\,\widetilde{B}_{g}=\gamma(g)\,,\quad\frac{2\pi}{\alpha_{S}}\,C_{gg}(z)=2C_{A}\left(\frac{1-z}{z}+z(1-z)\right). \tag{4.10}\]
The case of a quark is straightforward: in view of eq. (4.7), by making the simplest choice7
\(B_{q}^{\rm OUT}=0\), eq. (2.39) leads to:
\[\overline{B}_{q}^{\rm OUT} = 0\,, \tag{4.11}\] \[\overline{B}_{q}^{\rm IN} = -\frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\,\hat{P}_{qq}(z)\Theta_{qq,z}^{\rm IN}\] (4.12) \[\equiv -\frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\,\frac{1}{2}\left(\hat{P}_{qq}(z)\Theta_{qq,z}^{\rm IN}+\hat{P}_{gq}(z)\Theta_{gq,z}^{\rm IN}\right)\,, \tag{4.13}\]
with the form in eq. (4.13) identical to that in eq. (4.12) thanks to the \(z\leftrightarrow 1-z\) symmetry of both the splitting kernels and their respective integration limits (owing to eq. (2.10)). The corresponding quark Sudakov factor is obtained by inserting \(\overline{B}_{q}^{\rm IN}\) into eq. (2.31). The case of the gluon is slightly more involved. We use:
\[\frac{2\pi}{\alpha_{ S}}\tilde{b}_{gg}(z) = \frac{C_{ A}}{1-z}+\frac{C_{ A}}{z}-\frac{1}{2}\,\hat{P}_{gg}(z) \equiv C_{ A}\Big{(}2-z+z^{2}\Big{)}\,, \tag{4.14}\] \[\frac{2\pi}{\alpha_{ S}}\,\tilde{b}_{qg}(z) = -\frac{1}{2}\hat{P}_{qg}(z)\,, \tag{4.15}\]
which indeed satisfies eq. (2.45) given \(\gamma(g)\) of eq. (4.5) (note that the sum over flavours includes both quarks and antiquarks). With this, eqs. (2.43) and (4.10) lead to:
\[\overline{B}_{g}^{\rm OUT} = -\frac{\alpha_{ S}}{2\pi}\,C_{ A}\int_{0}^{1}dz\,z(1-z)\Theta_{gg,z}^{\rm OUT}-\frac{\alpha_{ S}}{2\pi}\,\frac{1}{2}\sum_{q,\bar{q}}\int_{0}^{1}dz\,\hat{P}_{qg}(z)\Theta_{qg,z}^{ \rm OUT}\,, \tag{4.16}\] \[\overline{B}_{g}^{\rm IN} = -\frac{\alpha_{ S}}{2\pi}\,\frac{1}{2}\int_{0}^{1}dz\left(\hat{P}_{ gg}(z)\Theta_{gg,z}^{\rm IN}+\sum_{q,\bar{q}}\hat{P}_{qg}(z)\Theta_{qg,z}^{ \rm IN}\right)\,. \tag{4.17}\]
The form of eq. (4.17) stems from exploiting:
\[\int_{0}^{1}dz\,\Theta_{gg,z}^{\rm IN}\left(-\frac{2C_{ A}}{1-z}+\frac{C_{ A}}{1-z}+\frac{C_{ A}}{z}\right)=0\,, \tag{4.18}\]
which is due to the fact that (see eq. (2.10)):
\[\epsilon_{gg}^{\rm L}=\epsilon_{g}\,,\quad\epsilon_{gg}^{\rm U}=\epsilon_{g} \quad\Longrightarrow\quad\Theta_{gg,z}^{\rm IN}=\Theta(\epsilon_{g}<z<1- \epsilon_{g})\,. \tag{4.19}\]
If the range in \(z\) defined by \(\Theta_{gg,z}^{\rm IN}\) were not symmetric under \(z\leftrightarrow 1-z\), \(\overline{B}_{g}^{\rm IN}\) would still be well defined, but eqs. (2.43), (2.46), and (4.14) would not lead to a result solely expressed in terms of \(\hat{P}_{gg}\) for the part proportional to \(C_{ A}\). Finally, we point out that eqs. (4.13) and (4.17), which enter the quark and gluon Sudakov form factors (in the latter case, only when \(\lambda=1\)), respectively, have the usual form of the integrals of the splitting kernels over the inner region. The reader is encouraged to bear in mind that this is a consequence of several arbitrary choices, which we have outlined in sect. 2.1.
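For definiteness, the two inner-region integrals are easily evaluated numerically, as in the sketch below; the flavour-independent cutoff and the fixed value of \(\alpha_{S}\) are purely illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch of eqs. (4.12) and (4.17). The unsubtracted kernels
# follow eqs. (4.1)-(4.4) with endpoint contributions dropped.
CF, CA, TF, NF = 4.0 / 3.0, 3.0, 0.5, 5
Pqq_hat = lambda z: CF * (1 + z**2) / (1 - z)
Pqg_hat = lambda z: TF * (z**2 + (1 - z)**2)
Pgg_hat = lambda z: 2 * CA * (z / (1 - z) + (1 - z) / z + z * (1 - z))

def Bbar_in_q(eps, alphas):
    """Eq. (4.12): quark virtual term, inner region [eps, 1-eps]."""
    return -alphas / (2 * np.pi) * quad(Pqq_hat, eps, 1 - eps)[0]

def Bbar_in_g(eps, alphas):
    """Eq. (4.17): gluon virtual term; the sum over q and qbar gives 2*NF."""
    val = quad(Pgg_hat, eps, 1 - eps)[0] + 2 * NF * quad(Pqg_hat, eps, 1 - eps)[0]
    return -alphas / (4 * np.pi) * val

print(Bbar_in_q(0.1, 0.135), Bbar_in_g(0.1, 0.135))
```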
By using the results above, those of eqs. (19) and (36), and the replacement of eq. (49), we obtain the following after some trivial algebra:
\[\left(\mathbb{Z}\left[F\right]\right)_{q}\!\!(x) = \frac{\alpha_{S}}{2\pi}\int_{0}^{1}\frac{dz}{z}\,\Theta(z\geq x)\left[\Theta^{\textsc{\tiny{IN}}}_{qq,z}\hat{P}_{qq}(z)f_{q}\left(\frac{x}{z}\right)+\Theta^{\textsc{\tiny{IN}}}_{qg,z}\hat{P}_{qg}(z)f_{g}\left(\frac{x}{z}\right)\right], \tag{4.20}\] \[\left(\mathbb{W}\left[F\right]\right)_{q}\!\!(x) = \frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\left\{\Theta^{\textsc{\tiny{OUT}}}_{qq,z}\hat{P}_{qq}(z)\left[\frac{1}{z}\,f_{q}\left(\frac{x}{z}\right)\Theta(z\geq x)-f_{q}(x)\right]\right.\] (4.21) \[\qquad\qquad\qquad\qquad\left.+\,\frac{1}{z}\,\Theta^{\textsc{\tiny{OUT}}}_{qg,z}\hat{P}_{qg}(z)f_{g}\left(\frac{x}{z}\right)\Theta(z\geq x)\right\}+\lambda\overline{B}^{\textsc{\tiny{OUT}}}_{q}f_{q}(x)\,,\]
and:
\[\left(\mathbb{Z}\left[F\right]\right)_{g}\!\!(x) = \frac{\alpha_{S}}{2\pi}\int_{0}^{1}\frac{dz}{z}\,\Theta(z\geq x)\left[\Theta^{\textsc{\tiny{IN}}}_{gg,z}\hat{P}_{gg}(z)f_{g}\left(\frac{x}{z}\right)+\sum_{q,\bar{q}}\Theta^{\textsc{\tiny{IN}}}_{gq,z}\hat{P}_{gq}(z)f_{q}\left(\frac{x}{z}\right)\right], \tag{4.22}\] \[\left(\mathbb{W}\left[F\right]\right)_{g}\!\!(x) = \frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\left\{\Theta^{\textsc{\tiny{OUT}}}_{gg,z}\left[\frac{\hat{P}_{gg}(z)}{z}f_{g}\left(\frac{x}{z}\right)\Theta(z\geq x)-\frac{2C_{A}z}{1-z}\,f_{g}(x)\right]\right.\] (4.23) \[\qquad\qquad\left.+\,\frac{\Theta(z\geq x)}{z}\sum_{q,\bar{q}}\Theta^{\textsc{\tiny{OUT}}}_{gq,z}\hat{P}_{gq}(z)f_{q}\left(\frac{x}{z}\right)\,\right\}+\lambda\overline{B}^{\textsc{\tiny{OUT}}}_{g}f_{g}(x)\,.\]
We note that, owing to eq. (4.11), the last term on the r.h.s. of eq. (4.21) is null, independently of the value of \(\lambda\); the reader must bear in mind that this is a choice (see footnote 7). If one chooses \(\lambda=1\), eqs. (4.16) and (4.18) allow one to rewrite eq. (4.23) in the seemingly more familiar form:
\[\left(\mathbb{W}\left[F\right]\right)_{g}\!\!(x) = \frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\left\{\Theta^{\textsc{\tiny{OUT}}}_{gg,z}\hat{P}_{gg}(z)\left[\frac{1}{z}f_{g}\left(\frac{x}{z}\right)\Theta(z\geq x)-\frac{1}{2}f_{g}(x)\right]\right. \tag{4.24}\] \[\qquad+\sum_{q,\bar{q}}\left(\frac{1}{z}\,\Theta^{\textsc{\tiny{OUT}}}_{gq,z}\hat{P}_{gq}(z)f_{q}\left(\frac{x}{z}\right)\Theta(z\geq x)-\frac{1}{2}\,\Theta^{\textsc{\tiny{OUT}}}_{qg,z}\hat{P}_{qg}(z)f_{g}\left(x\right)\right)\right\}\!.\]
Again, here a simplification has been made thanks to the fact that the analogue of eq. (4.18) holds with \(\Theta^{\textsc{\tiny{IN}}}_{gg,z}\to\Theta^{\textsc{\tiny{OUT}}}_{gg,z}\) there, given eq. (4.19). Moreover, we observe that this is also a direct consequence of the fact that the subtraction term in eq. (4.23) is proportional to \(z/(1-z)\), as opposed to \(1/(1-z)\): the definition of \(\overline{\mathbb{B}}\) respects the convention for the plus prescription mentioned in footnote 2.
Equation (4.24) does not offer any specific advantages w.r.t. eq. (4.23). In addition to being valid only when \(\lambda=1\), it may seem to feature uncancelled divergences stemming from the second term in the integrand. In fact, this is not the case, as one can easily see by regularising the integral. However, such a regularisation is not practical in the context of numerical computations. A better alternative is to exploit the \(z\leftrightarrow 1-z\) symmetry of the \(\hat{P}_{gg}(z)\) and \(\hat{P}_{qg}(z)\) kernels and eq. (4.19) (as well as its analogue for the \(g\to q\bar{q}\) branching), and to obtain a manifestly-finite integral by means of either of the formal replacements:
\[\frac{1}{2}f_{g}\left(x\right)\;\longrightarrow\;\Theta\!\left(z\geq\frac{1}{2}\right)f_{g}\left(x\right)\,,\qquad\frac{1}{2}f_{g}\left(x\right)\;\longrightarrow\;z\,f_{g}\left(x\right)\,, \tag{4.25}\]
in the second and fourth terms of the integrand.
### Results on backward evolution
We present here some results obtained within the leading-order framework outlined above. Since our objective is to illustrate issues raised in previous sections, rather than to perform realistic phenomenology, we consider two cases of a universal, flavour-independent cutoff \(\epsilon_{ij}^{\mbox{\tiny L}}=\epsilon_{ij}^{\mbox{\tiny U}}=\epsilon\). The first is relatively large and scale-independent, \(\epsilon=0.1\), while the second is slightly more realistic from a parton-shower MC viewpoint, being scale dependent and defined as \(\epsilon=(2\ \mbox{GeV})/q\), where \(q\) is the mass scale relevant to the current computation (e.g. in the Sudakov factor of eq. (31), \(q=\sqrt{\kappa^{2}}\)). For the leading-order PDFs we adopt the CT18LO set of ref. [7]; the argument of \(\alpha_{S}\) is taken to be a mass-scale squared and, in keeping with ref. [7], we have \(\alpha_{S}(m_{Z}^{2})=0.135\).
Figure 1: Non-emission probability (NEP) for backward evolution of up quarks and gluons with \(\mu=100\) GeV, according to NEP (3.4) (black, solid), \(\mbox{NEP}^{(\mbox{\tiny R})}\) (3.7) (blue, dashed) and \(\mbox{NEP}^{(\mbox{\tiny E})}\) (3.8) (red, dotted). The three sets of curves correspond to \(x=0.01\) (lowest), \(0.1\), and \(0.5\) (highest). The open circles (green) show the NEP computed with cutoff-dependent PDFs, to be discussed in sect. 5.

Figure 1 shows the resulting true NEP (3.4) (black, solid) and the approximations \(\mbox{NEP}^{(\mbox{\tiny R})}\) (3.7) (blue, dashed) and \(\mbox{NEP}^{(\mbox{\tiny E})}\) (3.8) (red, dotted), for up quarks and gluons as a function of \(\mu_{0}\), with \(\mu=100\) GeV. In each panel,
the three sets of curves are for \(x=0.01\), \(0.1\), and \(0.5\), from lowest to highest, respectively. One sees that, for the range of \(\mu_{0}\) shown, \(\mathrm{NEP}^{(\mathrm{E})}\) is closer to the true NEP than \(\mathrm{NEP}^{(\mathrm{R})}\), as could be anticipated from eqs. (3.13) and (3.15). \(\mathrm{NEP}^{(\mathrm{R})}\) becomes a poorer approximation with increasing \(x\) (because there \(\mathbb{W}\left[F\right]\) tends to be large and negative), and may eventually become non-monotonic and/or greater than unity at high \(x\) (the latter e.g. in the case of the down quark, which is not shown in the figure).
Figures 2 and 3 show results of MC backward evolution from 1 TeV to 10 GeV using the NEP (3.4) (black crosses), and the approximation \(\mathrm{NEP}^{(\mathrm{R})}\) (3.7) (blue vertical crosses) or \(\mathrm{NEP}^{(\mathrm{E})}\) (3.8) (red boxes). Here \(10^{7}\) unweighted MC events were generated starting at \(\mu=1\) TeV, with a probability distribution of momentum fraction \(x\) and flavour \(i\)
\[\frac{dP_{i}}{dx}=xf_{i}(x,\mu^{2})\,, \tag{4.26}\]
using the momentum sum rule
\[\sum_{j}\int_{0}^{1}dx\,xf_{j}(x,\mu^{2})=1 \tag{4.27}\]
as normalization. Following the selection of the next branching scale \(\mu_{0}\) according to the relevant NEP, the momentum fraction \(x^{\prime}\) and flavour \(j\) of the branching parent was chosen according to the distribution
\[\frac{dP_{j}}{dx^{\prime}}=\frac{1}{x^{\prime}}\Theta^{\mathrm{ IN}}_{ij,x/x^{\prime}}\hat{P}_{ij}(x/x^{\prime})f_{j}(x^{\prime},\mu_{0}^{2}) \Bigg{/}\sum_{k}\int_{x}^{1}\frac{dz}{z}\,\Theta^{\mathrm{ IN}}_{ik,z}\hat{P}_{ik}(z)f_{k}(x/z,\mu_{0}^{2})\,. \tag{4.28}\]
Note that this implies that only resolvable emissions were generated; although this is the standard practice, it is not necessarily what the various NEPs employed here would dictate. More details on this point are given in app. A (see in particular eqs. (A.29) and (A.30)). For computational speed, evolution was discretized on a \((500,70)\)-node grid in \((x,\mu)\). The procedure was iterated until the next branching scale fell below 10 GeV.
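Schematically, the procedure just described can be summarised as follows; this is a sketch only, in which the callables are placeholders for the ingredients named in the comments, not our actual implementation:

```python
# Schematic form of the backward-evolution loop. sample_xf, next_scale,
# and pick_parent stand in for eq. (4.26), the inversion of the chosen
# NEP, and eq. (4.28); grids and PDF access are intentionally omitted.
def backward_shower(sample_xf, next_scale, pick_parent, mu2_start, mu2_stop, rng):
    x, flav = sample_xf(mu2_start, rng)                  # eq. (4.26)
    mu2, branchings = mu2_start, []
    while True:
        mu2_next = next_scale(x, flav, mu2, mu2_stop, rng)
        if mu2_next is None:                             # scale fell below mu2_stop
            break
        x, flav = pick_parent(x, flav, mu2_next, rng)    # eq. (4.28)
        mu2 = mu2_next
        branchings.append((mu2, x, flav))
    return x, flav, branchings
```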
In the case of \(\mathrm{NEP}^{(\mathrm{R})}\), we have seen that it may be non-monotonic or larger than one, in which case the solution of eq. (3.7) was chosen larger than or equal to the value of \(\mu_{0}\) for which \(\mathrm{NEP}^{(\mathrm{R})}\) has its minimum. This may account for a part of the large discrepancies between the results of using \(\mathrm{NEP}^{(\mathrm{R})}\) and the true NEP or \(\mathrm{NEP}^{(\mathrm{E})}\) at high \(x\).
Generally speaking, all versions of the NEP perform poorly in reproducing the backward evolution of the PDFs, especially outside the intermediate region \(0.01<x<0.1\); the true NEP performs best at higher \(x\). However, the fundamental problem remains that the PDF evolution generated by the MC results from the accumulation of recoils against resolved emissions, whereas the actual evolution results from both resolved and unresolved emissions8.
Footnote 8: For an early prescription to account for unresolved emissions in an average manner, see ref. [8].
One possible approach that avoids this problem, while introducing others, is to guide the backward MC with cutoff-dependent PDFs that are generated by resolved emissions alone. We consider this approach in detail in the following section.
Figure 2: Data points show up-quark and gluon PDFs at 10 GeV after MC backward evolution from 1 TeV, guided by the CT18LO PDFs using the NEP (3.4) (black crosses), and the approximation \(\rm{NEP^{(R)}}\) (3.7) (blue vertical crosses) or \(\rm{NEP^{(E)}}\) (3.8) (red boxes). Solid curves show the cutoff-independent PDFs at 10 GeV. Also shown (dashed) are the cutoff-independent PDFs at the starting scale of 1 TeV, to illustrate the amount of evolution. As in fig. 1, open circles (green) show results obtained with cutoff-dependent PDFs, to be discussed in sect. 5.
## 5 PDF evolution with cutoff
In sect. 3 we have shown that the correct form of the backward initial-state radiation NEP is that of eq. (3.4). However, since the NEP is associated with the resolution-dependent (non-)emission of resolvable partons, there is an inconsistency in using it to generate backward shower evolution guided by PDFs that obey the resolution-independent evolution equations (2.1). Barring some _ad hoc_ reweighting procedure, this results in discrepancies between the guiding PDFs and those generated by backward evolution, which we have illustrated in the
previous section.

Figure 3: As in fig. 2, but with a logarithmic \(x\) scale.
An alternative approach is suggested by eq. (3.9). Namely, one defines a new type of PDFs, which we denote by \(F^{(\epsilon)}\), that obey the following evolution equation9:
Footnote 9: This possibility was pointed out but not explored in ref. [2].
\[\frac{\partial F^{(\epsilon)}(x)}{\partial\log\mu^{2}}=\mathbb{O}^{\textsc{IN}} \otimes_{x}F^{(\epsilon)}\,. \tag{5.1}\]
As the notation suggests, such PDFs depend on the cutoffs \(\epsilon=\{\epsilon^{\textsc{L}}_{ij},\epsilon^{\textsc{U}}_{ij}\}_{ij}\). However, in view of eqs. (2.34) and (2.35), and of the characteristics of \(\mathbb{W}[F]\) (see in particular eqs. (2.36) and (2.38)), we expect that \(F^{(\epsilon)}\) and \(F\) will differ, _in the resolved region_, only by terms suppressed by some powers of the cutoffs; conversely, in the unresolved region the differences between the two are in general logarithmic in the cutoffs.
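To make eq. (5.1) concrete, the sketch below evaluates its r.h.s. at the LO for a single non-singlet quark density, keeping only resolved \(q\to q\) emissions inside a flavour-independent inner region; the PDF shape, cutoff, and fixed \(\alpha_{S}\) are toy inputs of our own choosing:

```python
import numpy as np
from scipy.integrate import quad

# Illustration of eq. (5.1), non-singlet qq channel only: resolved real
# emissions over [max(x, eps), 1-eps] plus the virtual term of eq. (4.12).
CF = 4.0 / 3.0
Pqq_hat = lambda z: CF * (1 + z * z) / (1 - z)

def rhs_nonsinglet(f, x, eps, alphas):
    """R.h.s. of eq. (5.1) at a given x, for a callable PDF f."""
    bbar = -alphas / (2 * np.pi) * quad(Pqq_hat, eps, 1 - eps)[0]   # eq. (4.12)
    lo = max(x, eps)
    real = 0.0
    if lo < 1 - eps:
        real = quad(lambda z: Pqq_hat(z) * f(x / z) / z, lo, 1 - eps)[0]
    return alphas / (2 * np.pi) * real + bbar * f(x)

f_toy = lambda x: x**0.5 * (1 - x)**3    # toy valence-like shape
print(rhs_nonsinglet(f_toy, 0.3, 0.05, 0.135))
```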
### Flavour and momentum conservation
One interesting question that immediately emerges in the case of cutoff-dependent evolution, eq. (5.1), is whether flavour and momentum are conserved. By working again at the LO, we obtain for the integrated non-singlet contribution of quark flavour \(q\):
\[\frac{\partial}{\partial\log\mu^{2}}\int_{0}^{1}dx\Big{(}f_{q}^{(\epsilon)}(x)-f_{\bar{q}}^{(\epsilon)}(x)\Big{)}=\int_{0}^{1}dz\left(\mathbb{O}^{\textsc{IN}}(z)\right)_{qq}\int_{0}^{1}dy\Big{(}f_{q}^{(\epsilon)}(y)-f_{\bar{q}}^{(\epsilon)}(y)\Big{)}\,, \tag{5.2}\]
having used the identity:
\[\int_{0}^{1}dx\,g\otimes_{x}h=\int_{0}^{1}dz\,g(z)\int_{0}^{1}dy\,h(y)\,. \tag{5.3}\]
With eqs. (2.33) and (4.7) we obtain:
\[\int_{0}^{1}dz\left(\mathbb{O}^{\textsc{IN}}(z)\right)_{qq} = \int_{0}^{1}dz\,A_{q}(z)\Theta^{\textsc{IN}}_{qq,z}+\overline{B} ^{\textsc{IN}}_{q} \tag{5.4}\] \[= 0\] (5.5) \[= B^{\textsc{IN}}_{q}\,. \tag{5.6}\]
The result of eq. (5.5) stems from eq. (4.12), whereas that of eq. (5.6) is what we would have obtained if we had not chosen \(B^{\textsc{IN}}_{q}=0\) (see eq. (2.39)).
This gives us the opportunity to discuss a general property of the Sudakov definition in the context of MC-compatible PDF evolution equations. In particular, one observes that this definition is to a certain extent always arbitrary. In the standard case, such an arbitrariness is associated with the choice of the resolved region - in practice, with the choices of the cutoffs and of the functional dependence upon them of the borders of the resolved region. In the case of cutoff-dependent evolution, in addition to the above there is the freedom associated with the choice of the parameters \(\mathbb{B}^{\textsc{OUT}}\) (or \(\widetilde{\mathbb{B}}^{\textsc{OUT}}\), the two being related to each other by relationships such as eq. (2.47)), given \(\mathbb{B}\) and the constraints of eq. (2.40) (or given \(\widetilde{\mathbb{B}}\) and the constraints of eq. (2.44)). Once these choices have been made, what is exponentiated (\(\overline{\mathbb{B}}^{\textsc{IN}}\)) is determined unambiguously by eq. (2.39) or eq. (2.43).
In the case of a quark, \(B_{q}=0\), and eq. (40) implies that \(B_{q}^{\rm IN}\) can be set equal to any function of the cutoffs that vanishes with them. Thus, while eq. (5.5) shows that the cutoff-dependent evolution conserves flavour given the choice of eq. (4.12), eq. (5.6) can be seen as constraining \(B_{q}^{\rm IN}\) by _imposing_ that flavour be conserved. In other words: the additional freedom of the cutoff-dependent evolution w.r.t. the standard one discussed above, namely that associated with terms suppressed by powers of the cutoffs, can be exploited to impose physical conditions (such as flavour conservation) that, at variance with the case of standard evolution, may not necessarily emerge in a natural manner.
Turning to the case of momentum conservation10, we write:
Footnote 10: The issue of momentum conservation in PDF evolution with a cutoff, in that case due to a modified argument of \(\alpha_{S}\), was considered in ref. [9].
\[\frac{\partial}{\partial\log\mu^{2}}\int_{0}^{1}dx\,x\sum_{i}f_{i}^{(\epsilon)}(x)= \tag{5.7}\] \[\qquad\int_{0}^{1}dz\,z\Big{(}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{gg}+\sum_{q,\bar{q}}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{qg}\Big{)}\int_{0}^{1}dy\,y\,f_{g}^{(\epsilon)}(y)\] \[\qquad+\sum_{q}\int_{0}^{1}dz\,z\Big{(}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{qq}+\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{gq}\Big{)}\int_{0}^{1}dy\,y\Big{(}f_{q}^{(\epsilon)}(y)+f_{\bar{q}}^{(\epsilon)}(y)\Big{)},\]
having used the identity:
\[\int_{0}^{1}dx\,x\,g\otimes_{x}h=\int_{0}^{1}dz\,z\,g(z)\int_{0}^{1}dy\,y\,h(y )\,. \tag{48}\]
Let us start by considering the integral over \(z\) in the third line of eq. (5.7). By proceeding as was done for the manipulations of the flavour-conservation case, we obtain:
\[\int_{0}^{1}dz\,z\Big{(}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{qq}+\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{gq}\Big{)}= \tag{5.9}\] \[\qquad\int_{0}^{1}dz\,z\,\big{[}A_{q}(z)\Theta^{\rm IN}_{qq,z}+C_{gq}(z)\Theta^{\rm IN}_{gq,z}\big{]}+B^{\rm IN}_{q}-\int_{0}^{1}dz\,A_{q}(z)\Theta^{\rm IN}_{qq,z}=B^{\rm IN}_{q}\,,\]
where the rightmost side follows from the direct computation of the integrals that appear in the central expression. Since \(B^{\rm IN}_{q}=0\) as we have discussed above, the integral on the l.h.s. of eq. (5.9) is thus equal to zero. Turning to the integral over \(z\) in the second line of eq. (5.7), we have:
\[\int_{0}^{1}dz\,z\Big{(}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{gg}+\sum_{q,\bar{q}}\big{(}\mathbb{O}^{\rm IN}(z)\big{)}_{qg}\Big{)}= \tag{5.10}\] \[\qquad\int_{0}^{1}dz\,z\left[\left(\frac{\widetilde{A}_{g}(z)}{1-z}+C_{gg}(z)\right)\Theta^{\rm IN}_{gg,z}+\sum_{q,\bar{q}}C_{qg}(z)\Theta^{\rm IN}_{qg,z}\right]+\widetilde{B}^{\rm IN}_{g}-\int_{0}^{1}dz\,\frac{\widetilde{A}_{g}(1)}{1-z}\Theta^{\rm IN}_{gg,z}\,.\]
By using eqs. (4.10), (4.14), and (4.15) (the latter two in eq. (2.46)), one sees that the integral on the l.h.s. of eq. (5.10) is equal to zero. Combined with the null result in eq. (5.9), this is the analogue of eq. (5.5), and concludes the proof that the cutoff-dependent
evolution conserves the momentum. Conversely, we can proceed by analogy with eq. (5.6), and impose the l.h.s. of eq. (5.10) to be equal to zero in order to determine \(\widetilde{B}_{g}^{\text{\tiny{IN}}}\). If we then adopt the local form (5.11):
\[\widetilde{B}_{g}^{\text{\tiny{IN}}}=\int_{0}^{1}dz\left[\tilde{b}_{gg}(z)\Theta^{\text{\tiny{IN}}}_{gg,z}+\sum_{q,\bar{q}}\tilde{b}_{qg}(z)\Theta^{\text{\tiny{IN}}}_{qg,z}\right], \tag{5.11}\]
we obtain
\[\frac{2\pi}{\alpha_{S}}\,\tilde{b}_{gg}(z) = 2C_{A}\,z\Big{(}2-z+z^{2}\Big{)}\,, \tag{5.12}\] \[\frac{2\pi}{\alpha_{S}}\,\tilde{b}_{qg}(z) = -z\hat{P}_{qg}(z)\,, \tag{5.13}\]
leading via eq. (2.31) to the following expression for the integrand in the exponent of the gluon Sudakov form factor:
\[\overline{B}_{g}^{\text{\tiny{IN}}}=-\frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\,z\left(\hat{P}_{gg}(z)\Theta^{\text{\tiny{IN}}}_{gg,z}+\sum_{q,\bar{q}}\hat{P}_{qg}(z)\Theta^{\text{\tiny{IN}}}_{qg,z}\right)\,. \tag{5.14}\]
Although different from eq. (4.17), this expression is equally valid, since the two integrands differ by a function that integrates to zero. In an analogous manner, from eq. (5.9) we would obtain for the quark
\[\overline{B}_{q}^{\text{\tiny{IN}}}=-\frac{\alpha_{S}}{2\pi}\int_{0}^{1}dz\,z\left(\hat{P}_{qq}(z)\Theta^{\text{\tiny{IN}}}_{qq,z}+\hat{P}_{gq}(z)\Theta^{\text{\tiny{IN}}}_{gq,z}\right)\,, \tag{5.15}\]
which again coincides with eq. (4.13), in spite of having a different integrand. We point out that the strict equality of the results for \(\overline{B}_{g}^{\text{\tiny{IN}}}\) stemming from eqs. (5.14) and (4.17), and of those for \(\overline{B}_{q}^{\text{\tiny{IN}}}\) from eqs. (5.15) and (4.13), relies among other things on the symmetry properties of the \(\Theta^{\text{\tiny{IN}}}_{ij,z}\) functions. On the other hand, the integrands of eqs. (5.14) and (5.15), at variance with those of eqs. (4.17) and (4.13), do not have a \(z\to 0\) singularity when \(\epsilon\to 0\). This implies that they lead to finite quantities also when completely removing the constraints enforced by the lower cutoffs; such quantities can then be employed to define Sudakov factors that differ from those used thus far by terms suppressed by powers of the cutoff11.
Footnote 11: For examples of Sudakov form factors in a different context, whose definitions do differ from one another by cutoff-suppressed terms, see e.g. app. A of ref. [10].
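The equality of the two gluon integrals is easy to confirm numerically; the check below (with an arbitrary cutoff and \(n_{f}\), and the common overall factor \(-\alpha_{S}/2\pi\) dropped) compares eqs. (5.14) and (4.17) directly:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check that the z-weighted integrand of eq. (5.14) and the
# 1/2-weighted one of eq. (4.17) integrate to the same value over the
# symmetric inner region [eps, 1-eps]; inputs are illustrative.
CA, TF, NF, eps = 3.0, 0.5, 5, 0.05
Pgg_hat = lambda z: 2 * CA * (z / (1 - z) + (1 - z) / z + z * (1 - z))
Pqg_hat = lambda z: TF * (z**2 + (1 - z)**2)

kern = lambda z: Pgg_hat(z) + 2 * NF * Pqg_hat(z)
v_514 = quad(lambda z: z * kern(z), eps, 1 - eps)[0]
v_417 = 0.5 * quad(kern, eps, 1 - eps)[0]
print(v_514, v_417)   # equal up to quadrature error, by z <-> 1-z symmetry
```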
### Results on cutoff-dependent PDFs
Figure 4 shows examples of cutoff-dependent PDFs corresponding to the two cutoff choices discussed in sect. 4.1. Starting from the CT18LO set at scale \(\mu=100\) GeV, the PDFs were evolved forwards to \(1\) TeV and backwards to \(10\) GeV using eq. (5.1) in place of eq. (1), with the flavour- and momentum-conserving formulation described above.
As expected, the cutoff-dependent PDFs generally evolve more slowly with increasing scale than the true PDFs, thus being generally softer below the starting scale and harder
above it. The relative differences grow with increasing \(x\) and are largest in the unresolved region \(x>1-\epsilon\), where the PDFs are however very small. The scale-dependent cutoff naturally leads to PDFs much closer to the cutoff-independent ones at high scales.
Figure 4: Cutoff-dependent PDFs evolved from the CT18LO set at 100 GeV backwards to 10 GeV (red, dashed) and forwards to 1 TeV (blue, dot-dashed), compared to cutoff-independent PDFs (solid) at 10 GeV (red), 100 GeV (black) and 1 TeV (blue). The ratio plots show the cutoff-dependent PDFs relative to the cutoff-independent ones at the same scale.

Non-emission probabilities for backward evolution guided by the cutoff-dependent
PDFs, chosen to coincide with the CT18LO set at 10 GeV, are shown by green circles in fig. 1 as a function of \(\mu_{0}\) with \(\mu=100\) GeV. Since by construction \(\mathbb{W}\left[F^{(\epsilon)}\right]\equiv 0\), all of the expressions for the NEP are now equivalent, by virtue of eq. (3.9). Empirically, the NEP for cutoff-dependent PDFs appears closest to \(\text{NEP}^{(\text{E})}\) computed from cutoff-independent PDFs.
Results of backward MC evolution guided by the cutoff-dependent PDFs are also shown by green circles in figs. 2 and 3. There, in contrast to fig. 4 but analogously to fig. 1, the cutoff-dependent PDFs were chosen to coincide with the CT18LO set at 10 GeV and evolved upwards to 1 TeV, where they were used as the starting distributions for the backward MC. In this way, the backward-generated MC distributions at 10 GeV should agree with the CT18LO set. Compared to the results using cutoff-independent PDFs, agreement is indeed greatly improved at all \(x\) values. The small residual systematic discrepancies are most likely due to accumulated errors from our discretization of the backward evolution.
## 6 Cutoff-dependent cross sections
The cutoff-dependent PDFs emerging from eq. (5.1) imply that short-distance cross sections must be cutoff-dependent too, in order for the l.h.s. of the factorisation formula to be cutoff independent12. We shall assume in what follows that the cutoff dependence of the PDFs is solely due to their evolution. This implies that, at the scale chosen as the starting point for PDF evolution, the initial conditions must be cutoff independent; this is not mandatory, but doing otherwise would require some modeling assumptions for the initial conditions. In order to determine the cutoff-dependent terms of the cross section, we consider the generic factorisation formula for one incoming leg, starting from the cutoff-independent case:
Footnote 12: Up to terms one perturbative order higher than those included in the computation of the short distance cross sections.
\[\sigma=F^{\text{T}}\star\hat{\Sigma}\equiv\sum_{i}\int_{0}^{1}dx\,f_{i}(x)\,\hat{\sigma}_{i}(x)\,. \tag{6.1}\]
Here, we have denoted by \(\hat{\Sigma}\) the column vector that collects all of the (subtracted) short-distance cross sections \(\hat{\sigma}_{i}\equiv(\hat{\Sigma})_{i}\), whereas \(\sigma\) is the hadron-level cross section that results from the sum over all of the partonic processes in eq. (6.1). The RGE invariance of \(\sigma\) under factorisation-scale variation is:
\[0=\frac{\partial\sigma}{\partial\log\mu^{2}}=\frac{\partial F^{\text{T}}}{\partial\log\mu^{2}}\star\hat{\Sigma}+F^{\text{T}}\star\frac{\partial\hat{\Sigma}}{\partial\log\mu^{2}}=(\mathbb{O}\otimes F)^{\text{T}}\star\hat{\Sigma}+F^{\text{T}}\star\frac{\partial\hat{\Sigma}}{\partial\log\mu^{2}}\,. \tag{6.2}\]
It is a matter of algebra to show that, for any functions \(g\), \(h\), and \(l\), the following identity holds:
\[\left(g\otimes h\right)\star l=g\star\left(h\star l\right)=h\star\left(g\star l\right). \tag{6.3}\]
Equation (6.2) then implies:
\[0=F^{\text{T}}\star\frac{\partial\hat{\Sigma}}{\partial\log\mu^{2}}+F^{\text{T}}\star\left(\mathbb{O}^{\text{T}}\star\hat{\Sigma}\right). \tag{6.4}\]
This equation must be true for any PDFs, and therefore:
\[\frac{\partial\hat{\Sigma}}{\partial\log\mu^{2}}=-\mathbb{O}^{\rm T}\star\hat{\Sigma}\quad\Longleftrightarrow\quad\frac{\partial\hat{\sigma}_{i}(x)}{\partial\log\mu^{2}}=-\sum_{j}\int_{0}^{1}dy\left(\mathbb{O}(y)\right)_{ji}\hat{\sigma}_{j}(xy)\,. \tag{6.5}\]
By writing the perturbative expansion of the short-distance cross sections as follows:
\[\hat{\Sigma}=\hat{\Sigma}^{[0]}+\frac{\alpha_{S}}{2\pi}\,\hat{\Sigma}^{[1]}+\ldots \tag{6.6}\]
where all terms \(\hat{\Sigma}^{[i]}\) include a factor \(\alpha_{S}^{b}\), with \(b\) a process-dependent constant (e.g. \(b=0\) and \(b=2\) for dilepton and Higgs production, respectively), and by working with LO kernels, where eq. (4.6) holds, eq. (6.5) implies:
\[\hat{\Sigma}^{[1]}=-\log\frac{\mu^{2}}{q_{0}^{2}}\,\left(\frac{2\pi}{\alpha_{ S}}\mathbb{O}^{\rm T}\star\hat{\Sigma}^{[0]}\right)+C\equiv-\log\frac{\mu^{2}}{q_ {0}^{2}}\,\left(\mathbb{P}^{[0]\rm T}\star\hat{\Sigma}^{[0]}\right)+C\,, \tag{102}\]
with \(q_{0}\) an arbitrary reference scale, and \(C\) a column vector of \(\mu\)-independent integration constants. The determination of \(C\) can be done by means of an explicit cross section calculation. For example, it can be read from the FKS formalism [11; 12], where a term in the same form as the leftmost one on the r.h.s. of eq. (102) is contained in the so-called \((n+1)\)-body degenerate contributions.
The derivation above can be repeated verbatim for the cutoff-dependent PDFs and short-distance cross sections. Denoting the latter by \(\hat{\Sigma}^{(\epsilon)}\), owing to eq. (100) the analogue of eq. (102) reads as follows:
\[\hat{\Sigma}^{(\epsilon)[1]}=-\log\frac{\mu^{2}}{q_{0}^{2}}\,\left(\frac{2\pi }{\alpha_{ S}}\big{(}\mathbb{O}^{\rm IN}\big{)}^{\rm T}\star\hat{\Sigma}^{( \epsilon)[0]}\right)+C^{(\epsilon)}\,. \tag{103}\]
Under our assumptions concerning the cutoff dependence discussed at the beginning of this section, we may now set:
\[\hat{\Sigma}^{(\epsilon)[0]}=\hat{\Sigma}^{[0]}\,. \tag{104}\]
Furthermore, by choosing \(q_{0}\) to coincide with the starting scale of the PDF evolution, at \(\mu=q_{0}\) we must have:
\[\hat{\Sigma}^{(\epsilon)[1]}=\hat{\Sigma}^{[1]}\quad\Longrightarrow\quad C^ {(\epsilon)}=C\,, \tag{105}\]
and therefore, for a generic scale value:
\[\hat{\Sigma}^{(\epsilon)[1]}=\hat{\Sigma}^{[1]}+\log\frac{\mu^{2}}{q_{0}^{2}} \,\left[\frac{2\pi}{\alpha_{ S}}\left(\mathbb{O}-\mathbb{O}^{\rm IN }\right)^{\rm T}\star\hat{\Sigma}^{[0]}\right]\equiv\hat{\Sigma}^{[1]}+\log \frac{\mu^{2}}{q_{0}^{2}}\,\left(\frac{2\pi}{\alpha_{ S}}\big{(}\mathbb{O}^{\rm OUT}\big{)}^{\rm T}\star\hat{\Sigma}^{[0]} \right)\,. \tag{106}\]
Equation (106) allows one to obtain the sought cutoff-dependent short-distance cross sections given the cutoff-independent ones. The rightmost term on the r.h.s. of eq. (106) is, as expected, suppressed by powers of the cutoff; we shall call it the cutoff correction.
We note that by iteration of this procedure one can obtain the cutoff correction to any perturbative order, in terms of contributions of lower orders to the short-distance cross section and the cutoff-dependent and cutoff-independent evolution kernels.
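As an illustration of this bookkeeping, the following minimal Python sketch evaluates the cutoff correction of eq. (106) for a toy one-flavour model; the kernel `P0`, the LO coefficient `sigma0`, and all numerical values are purely hypothetical stand-ins, not the actual process-specific FKS-subtracted expressions.

```python
import numpy as np
from scipy.integrate import quad

# Toy one-flavour stand-ins for the LO kernel and the (subtracted) LO
# short-distance cross section; the real objects are process specific.
def P0(z):                                 # hypothetical LO splitting kernel
    return (1.0 + z**2) / (1.0 - z)

def sigma0(x):                             # hypothetical LO coefficient
    return x * (1.0 - x)**3

def P0_out(z, eps):                        # kernel restricted to the cut-out region
    return P0(z) if z > 1.0 - eps else 0.0

def cutoff_correction(x, mu2, q02, eps):
    """Rightmost term of eq. (106): log(mu^2/q0^2) times the convolution of
    P^OUT with sigma^[0], here in a one-dimensional flavour space."""
    conv, _ = quad(lambda y: P0_out(y, eps) * sigma0(x * y),
                   0.0, 1.0 - 1e-10, points=[1.0 - eps])
    return np.log(mu2 / q02) * conv

# The cutoff-dependent NLO coefficient is then sigma1(x) + cutoff_correction(...)
print(cutoff_correction(x=0.1, mu2=100.0**2, q02=10.0**2, eps=0.1))
```

As expected from eq. (106), the correction vanishes both as the cutoff is removed and as \(\mu^{2}\to q_{0}^{2}\).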
### Results for cross sections
#### 6.1.1 Drell-Yan process
As a first illustration of the use of cutoff-dependent PDFs with a cutoff-corrected short-distance cross section, we consider the photon-induced \(\mathcal{O}(\alpha^{2})\) and \(\mathcal{O}(\alpha^{2}\alpha_{ S})\) contributions to the cross section for lepton pair production as a function of pair invariant mass \(M_{ll}\) at fixed hadronic collision energy \(\sqrt{s}\).
Some results for \(pp\) collisions at \(\sqrt{s}=13\) TeV are shown in fig. 5. As in sect. 4.1, we use the CT18LO leading-order PDFs, and we do so with both LO (\(\mathcal{O}(\alpha^{2})\)) and NLO (\(\mathcal{O}(\alpha^{2}+\alpha^{2}\alpha_{S})\)) short-distance cross sections, for the latter of which we employ the \(\overline{\text{MS}}\) factorisation scheme. We again consider two cases of a universal, flavour-independent cutoff \(\epsilon_{ij}^{\text{L}}=\epsilon_{ij}^{\text{U}}=\epsilon\): one relatively large and scale-independent, \(\epsilon=0.1\), the other scale-dependent, \(\epsilon=(2\ \text{GeV})/q\), with \(q\) the relevant mass scale. The reference scale \(q_{0}\), at which the cutoff-dependent and cutoff-independent PDFs are identical, is set equal to 10 GeV in the upper plots and to 100 GeV in the lower ones. The scale for evaluation of the PDFs, \(\alpha_{S}\), and NLO corrections is taken to be \(\mu^{2}=M_{ll}^{2}\) throughout.
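For orientation, a minimal sketch of how the mass distribution is assembled from the factorisation formula of eq. (12) is given below; the PDF shape `f_toy` is a hypothetical placeholder for CT18LO, overall charge and colour factors are omitted, and only the LO structure (with no cutoff correction) is shown.

```python
import numpy as np
from scipy.integrate import quad

S = 13000.0**2                  # hadronic c.m. energy squared [GeV^2]

def f_toy(x, q2):               # placeholder PDF shape (CT18LO in the text)
    return x**(-0.3) * (1.0 - x)**4

def qqbar_lumi(tau, q2):
    """One-flavour luminosity: L(tau) = int_tau^1 dx/x f(x, q2) f(tau/x, q2)."""
    val, _ = quad(lambda x: f_toy(x, q2) * f_toy(tau / x, q2) / x, tau, 1.0)
    return val

def dsigma_dMll(Mll, alpha=1.0 / 137.0):
    # LO Drell-Yan shape: the point is only where the PDFs and the scale
    # mu^2 = Mll^2 enter; normalisation factors are deliberately dropped.
    tau = Mll**2 / S
    return alpha**2 / Mll**3 * tau * qqbar_lumi(tau, Mll**2)

for M in (20.0, 100.0, 500.0):
    print(M, dsigma_dMll(M))
```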
At leading order there is no cutoff correction to the short-distance cross section and so the discrepancies between the cutoff-dependent (blue, dashed) and cutoff-independent (black, solid) LO results simply reflect those between the corresponding PDFs. Since the cutoff-dependent PDFs evolve more slowly, the resulting hadronic cross section initially falls below the true (i.e. obtained with cutoff-independent PDFs) LO value for \(M_{ll}>q_{0}\) but eventually rises above it at higher values of \(M_{ll}\) (higher \(x\)). Correspondingly, for \(q_{0}=100\) GeV, it lies above the true LO value when \(M_{ll}<q_{0}\).
At next-to-leading order, the cutoff correction comes into play and reduces the discrepancy between the cutoff-dependent and true NLO results. The reduction is strong around the reference scale \(M_{ll}\sim q_{0}\), but only modest above and very rapidly deteriorating below \(q_{0}\). For the scale-dependent cutoff with a low reference scale (the upper right plot), the effect of the cutoff correction vanishes much more rapidly than that of the difference in PDFs at high \(M_{ll}\). This is because the relevant scale in the cutoff correction is the local value \(q=M_{ll}\), whereas the difference in PDFs results from the accumulation of cutoff effects over the whole range from \(q_{0}\) to \(M_{ll}\).
In summary, the comparison of the LO and NLO results shows that the NLO cutoff correction of eq. (106) partly compensates for the differences between the cutoff-independent predictions and those one would have obtained by employing cutoff-dependent PDFs without the inclusion of such a correction in the short-distance cross sections. In general, the use of cutoff-dependent PDFs together with the correction (106) gives results for the Drell-Yan cross section that are relatively close to the cutoff-independent NLO predictions, provided the reference scale \(q_{0}\) is close to, or not too far below, the dilepton mass. We point out that, at this level of accuracy, a more systematic assessment of the compensation mechanism just mentioned would require the definition of a proper NLO cutoff-dependent PDF set.
#### 6.1.2 Higgs boson production
Since the Drell-Yan process is quark dominated, we consider as a second example the gluon fusion contribution to Higgs boson hadroproduction as a function of the hadronic collision energy \(\sqrt{s}\).
Figure 5: Drell-Yan cross section at \(\sqrt{s}=13\) TeV (photon-induced contribution only), calculated using cutoff-dependent PDFs at leading order (blue, dashed) and next-to-leading order (red, dot-dashed), compared to corresponding results using cutoff-independent PDFs (solid).
Figure 6 shows results (in the infinite top mass approximation) for \(pp\) collisions at \(\sqrt{s}=1-100\) TeV. The PDFs, \(\alpha_{S}\), cutoffs and factorisation scheme are as in sect. 6.1.1, but now the scale used in their evaluation is fixed at \(\mu^{2}=m_{h}^{2}\). Thus the differences between the cutoff-dependent (blue, dashed) and cutoff-independent (black, solid) LO results simply reflect the different \(x\) dependences of the corresponding gluon PDFs. Since by construction the cutoff-dependent and -independent PDFs coincide at the reference scale \(q_{0}\), and the former evolve more slowly, the corresponding LO result falls below the true (i.e. obtained with cutoff-independent PDFs) value at high \(\sqrt{s}\) (small \(x\)) as long as \(q_{0}<m_{h}\). This discrepancy is naturally much smaller for \(q_{0}=100\;\mathrm{GeV}\sim m_{h}\) than for \(q_{0}=10\;\mathrm{GeV}\).

Figure 6: Higgs cross section at \(\sqrt{s}=1-100\) TeV (gluon fusion contribution only), calculated using cutoff-dependent PDFs at leading order (blue, dashed) and next-to-leading order (red, dot-dashed), compared to the corresponding results using cutoff-independent PDFs (solid).
The next-to-leading order corrections to Higgs production are very large, comparable to the leading order. The relative differences between the cutoff-dependent (red, dot-dashed) and true cutoff-independent (black, solid) NLO results are reduced compared to leading order. Again they are naturally much smaller for \(q_{0}=100\;\mathrm{GeV}\) than for \(q_{0}=10\;\mathrm{GeV}\).
## 7 Conclusions
Our aim in this paper has been to study the extent to which the use of PDFs to guide backward MC parton showering can be a consistent procedure. We have shown that it cannot be fully so if normal cutoff-independent PDFs are used, even if the non-emission probabilities (NEPs) currently in use in Monte Carlo event generators (MCEGs) are corrected to account for the fact that they generate only resolved parton emissions. The cutoffs inherent in the resolution criteria lead to inconsistencies that are formally power-suppressed in the resolved region. Nevertheless, these can accumulate to have large effects when showers evolve over a wide range of scales, and increase with \(x\).
As an alternative, formally more consistent approach, we have considered the use of cutoff-dependent PDFs, together with short-distance cross sections that include compensating cutoff corrections. We have illustrated the extent to which this compensation works at NLO in lepton pair and Higgs boson production.
Obviously, if the use of cutoff-dependent PDFs for event generation at hadron colliders is to be pursued, global PDF fits tailored to the sets of cutoffs in the widely used MCEGs would need to be performed. In principle this seems a straightforward matter of using the cutoff-dependent PDF evolution kernels and corresponding subprocess cutoff corrections; at leading order, this could be a worthwhile improvement on the current practice. Beyond leading order, however, the whole concept of guided backward parton showering needs further clarification.
## Acknowledgements
We are grateful to Torbjorn Sjostrand and Mike Seymour for valuable comments on the manuscript. This work has been partially supported by UK STFC HEP Theory Consolidated grant ST/T000694/1. SF thanks the CERN TH division for the kind hospitality during the course of this work.
## Appendix A PDF reconstruction with MC backward evolution
In order to see how MC initial-state showers reconstruct PDFs, we first need to find a solution of the evolution equations that renders the comparison with MC-derived results
as easy as possible. To this end, we employ eqs. (32) and (33), and rewrite the latter as follows:
\[\mathbb{O}^{\textsc{IN}}(z)=\mathbb{O}^{\textsc{IN}}_{R}(z)+\overline{\mathbb{B}}^{\textsc{IN}}\delta(1-z)\,, \tag{A.1}\]
so that:
\[\mathbb{Z}\left[F\right](x)=\mathbb{O}^{\textsc{IN}}_{R}\otimes_{x}F\,. \tag{A.2}\]
In other words, \(\mathbb{O}^{\textsc{IN}}_{R}\) is the contribution to the inner-region evolution operator \(\mathbb{O}^{\textsc{IN}}\) due to real (as opposed to virtual) emissions. With eqs. (32) and (A.2) one obtains:
\[M\Big{[}\mathbb{W}\left[F\right]\Big{]} = M\Big{[}\mathbb{O}^{\textsc{OUT}}\Big{]}M\Big{[}F\Big{]}\equiv \mathbb{O}^{\textsc{OUT}}_{N}\,F_{N}\,, \tag{A.3}\] \[M\Big{[}\mathbb{Z}\left[F\right]\Big{]} = M\Big{[}\mathbb{O}^{\textsc{IN}}_{R}\Big{]}M\Big{[}F\Big{]} \equiv\mathbb{O}^{\textsc{IN}}_{R,N}\,F_{N}\,, \tag{A.4}\]
where by \(M[g]\equiv g_{N}\) we have denoted the Mellin transform of a function \(g(x)\). Thus, the Mellin transform of eq. (27) reads as follows:
\[F_{N}(\mu^{2})=\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F_{N}(\mu_{0}^{2})+\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\kappa^{2})}\left(\mathbb{O}^{\textsc{OUT}}_{N}(\kappa^{2})+\mathbb{O}^{\textsc{IN}}_{R,N}(\kappa^{2})\right)F_{N}(\kappa^{2})\,. \tag{A.5}\]
Equation (A.5) is a Volterra equation of the second kind13, which is formally solved by a Neumann series:
Footnote 13: Its kernel is separable in the two relevant variables (\(\mu^{2}\) and \(\kappa^{2}\)), which leads to (at least in a one-dimensional flavour space) a closed-form solution; this, however, is not of particular interest here, and will not be considered.
\[F_{N}(\mu^{2})=\sum_{k=0}^{\infty}F_{N}^{(k)}(\mu^{2})\,, \tag{A.6}\]
with:
\[F_{N}^{(0)}(\mu^{2}) =\!\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F_{N}(\mu_{0}^{2})\,, \tag{A.7}\] \[F_{N}^{(k)}(\mu^{2}) =\!\int_{\mu_{0}^{2}}^{\mu^{2}}\Bigg{[}\prod_{p=1}^{k}\frac{d\kappa_{p}^{2}}{\kappa_{p}^{2}}\,\Theta\left(\kappa_{p+1}^{2}\leq\kappa_{p}^{2}\leq\kappa_{p-1}^{2}\right)\frac{\mathbb{S}(\kappa_{p-1}^{2})}{\mathbb{S}(\kappa_{p}^{2})}\] \[\qquad\qquad\times\left(\mathbb{O}^{\textsc{OUT}}_{N}(\kappa_{p}^{2})+\mathbb{O}^{\textsc{IN}}_{R,N}(\kappa_{p}^{2})\right)\!\Bigg{]}\frac{\mathbb{S}(\kappa_{k}^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F_{N}(\mu_{0}^{2})\,, \tag{A.8}\]
where the matrix product in eq. (A.8) has a left-to-right order, i.e. the elements corresponding to \(p=1\) (\(p=k\)) are the leftmost (rightmost) ones, and we have defined:
\[\kappa_{0}^{2}=\mu^{2}\,,\qquad\kappa_{k+1}^{2}=\mu_{0}^{2}\,. \tag{A.9}\]
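The structure of eqs. (A.6)-(A.8) can be checked numerically before transforming to \(x\) space. The sketch below uses a one-flavour toy with scale-independent Mellin-space kernels (all numerical values are illustrative only), for which the Volterra equation has the closed-form solution \(F_{N}(\mu^{2})=F_{N}(\mu_{0}^{2})\,e^{(a-b)(t-t_{0})}\), with \(t=\log\mu^{2}\):

```python
import numpy as np
from math import factorial

# One-flavour toy at a fixed Mellin moment N (illustrative values only):
b = 0.4            # S(mu^2) = exp(-b t),  t = log mu^2
a = 0.3            # O^OUT_N + O^IN_{R,N}, taken scale independent
t0, t = 0.0, 5.0   # evolution range in t = log mu^2
F0 = 1.0           # starting moment F_N(mu0^2)

exact = F0 * np.exp((a - b) * (t - t0))   # closed-form solution of eq. (A.5)

# k-th Neumann summand of eq. (A.8): the ordered k-fold scale integrals
# collapse to (t - t0)^k / k! for constant kernels.
neumann = [np.exp(-b * (t - t0)) * F0 * (a * (t - t0))**k / factorial(k)
           for k in range(30)]
print(sum(neumann), exact)   # the series of eq. (A.6) reproduces the solution
```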
Equations (A.7) and (A.8) can be easily transformed back to the \(x\) space:
\[F^{(0)}(x,\mu^{2}) =\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F(x,\mu_{0}^{2 })\,,\] (A.10) \[F^{(k)}(x,\mu^{2}) =\int_{0}^{1}\left[\prod_{q=1}^{k+1}dz_{q}\right]\delta\Bigg{(}x- \prod_{q=1}^{k+1}z_{q}\Bigg{)}\] (A.11) \[\times\int_{\mu_{0}^{2}}^{\mu^{2}}\Bigg{[}\prod_{p=1}^{k}\frac{d \kappa_{p}^{2}}{\kappa_{p}^{2}}\,\Theta\left(\kappa_{p+1}^{2}\leq\kappa_{p}^{2 }\leq\kappa_{p-1}^{2}\right)\frac{\mathbb{S}(\kappa_{p-1}^{2})}{\mathbb{S}( \kappa_{p}^{2})}\] \[\qquad\qquad\times\left(\mathbb{O}^{\text{OUT}}(z_{p},\kappa_{p}^ {2})+\mathbb{O}_{R}^{\text{IN}}(z_{p},\kappa_{p}^{2})\right)\Bigg{]}\frac{ \mathbb{S}(\kappa_{k}^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F(z_{k+1},\mu_{0}^{2})\,.\]
The presence of a Dirac \(\delta\) in eq. (A.11) complicates its manipulation. We therefore introduce the \(k\) independent variables:
\[y_{i}=\prod_{j=i+1}^{k+1}z_{j}\,,\qquad 1\leq i\leq k\,,\] (A.12)
and the dummy variable
\[y_{0}\equiv x\,,\] (A.13)
so that
\[y_{i}=z_{i+1}y_{i+1}\quad(\text{for }0\leq i\leq k-1)\,,\qquad y_{k}=z_{k+1}\,.\] (A.14)
In this way, by using the identity:
\[1=\int_{0}^{1}\left(\prod_{i=1}^{k-1}dy_{i}\,\delta\big{(}y_{i}-z_{i+1}y_{i+1} \big{)}\right)dy_{k}\,\delta\big{(}y_{k}-z_{k+1}\big{)}\,,\] (A.15)
eq. (A.11) becomes:
\[F^{(k)}(x,\mu^{2}) =\!\int_{0}^{1}\left[\prod_{q=1}^{k}\frac{dy_{q}}{y_{q}}\right] \int_{\mu_{0}^{2}}^{\mu^{2}}\Bigg{[}\prod_{p=1}^{k}\frac{d\kappa_{p}^{2}}{ \kappa_{p}^{2}}\,\Theta\left(\kappa_{p+1}^{2}\leq\kappa_{p}^{2}\leq\kappa_{p-1 }^{2}\right)\frac{\mathbb{S}(\kappa_{p-1}^{2})}{\mathbb{S}(\kappa_{p}^{2})}\] (A.16) \[\qquad\qquad\times\left(\mathbb{O}^{\text{OUT}}\left(\frac{y_{p- 1}}{y_{p}},\kappa_{p}^{2}\right)+\mathbb{O}_{R}^{\text{IN}}\left(\frac{y_{p-1}} {y_{p}},\kappa_{p}^{2}\right)\right)\Bigg{]}\frac{\mathbb{S}(\kappa_{k}^{2})}{ \mathbb{S}(\mu_{0}^{2})}\,F(y_{k},\mu_{0}^{2})\,.\]
Note that from eq. (A.14):
\[x\equiv y_{0}\leq y_{1}\leq\ldots y_{k-1}\leq y_{k}\,,\] (A.17)
which are automatically enforced by the requirement that the first arguments of \(\mathbb{O}^{\textsc{OUT}}\) and \(\mathbb{O}_{R}^{\textsc{IN}}\) in eq. (A.16) be less than one. For a given parton identity \(i_{0}\), eq. (A.16) gives:
\[f_{i_{0}}^{(1)}(x,\mu^{2}) =\sum_{i_{1}}\int_{0}^{1}\frac{dy_{1}}{y_{1}}\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa_{1}^{2}}{\kappa_{1}^{2}}\,\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}}(\kappa_{1}^{2})}\] (A.18) \[\qquad\times\left((\mathbb{O}^{\textsc{OUT}})_{i_{0}i_{1}}\!\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)+(\mathbb{O}_{R}^{\textsc{IN}})_{i_{0}i_{1}}\!\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)\right)\frac{S_{i_{1}}(\kappa_{1}^{2})}{S_{i_{1}}(\mu_{0}^{2})}\,f_{i_{1}}(y_{1},\mu_{0}^{2})\,,\] \[f_{i_{0}}^{(2)}(x,\mu^{2}) =\sum_{i_{1},i_{2}}\int_{0}^{1}\frac{dy_{1}}{y_{1}}\frac{dy_{2}}{y_{2}}\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa_{1}^{2}}{\kappa_{1}^{2}}\frac{d\kappa_{2}^{2}}{\kappa_{2}^{2}}\,\Theta\left(\kappa_{2}^{2}\leq\kappa_{1}^{2}\right)\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}}(\kappa_{1}^{2})}\] (A.19) \[\qquad\times\left((\mathbb{O}^{\textsc{OUT}})_{i_{0}i_{1}}\!\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)+(\mathbb{O}_{R}^{\textsc{IN}})_{i_{0}i_{1}}\!\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)\right)\frac{S_{i_{1}}(\kappa_{1}^{2})}{S_{i_{1}}(\kappa_{2}^{2})}\] \[\qquad\times\left((\mathbb{O}^{\textsc{OUT}})_{i_{1}i_{2}}\!\left(\frac{y_{1}}{y_{2}},\kappa_{2}^{2}\right)+(\mathbb{O}_{R}^{\textsc{IN}})_{i_{1}i_{2}}\!\left(\frac{y_{1}}{y_{2}},\kappa_{2}^{2}\right)\right)\frac{S_{i_{2}}(\kappa_{2}^{2})}{S_{i_{2}}(\mu_{0}^{2})}\,f_{i_{2}}(y_{2},\mu_{0}^{2})\,,\]
and so forth.
We now assume that the parton type \(i_{0}\), momentum fraction \(x\), and two scales \(\mu^{2}\) and \(\mu_{0}^{2}\) (with \(\mu_{0}^{2}\leq\mu^{2}\)) are given, and we want to compute the probability that, starting the evolution at \((x,\mu^{2})\), one eventually (i.e. after an arbitrary number of backward emissions, including none) ends up emitting at a scale lower than \(\mu_{0}^{2}\). Such a probability is the sum of the probabilities \(p_{k}\) associated with \(k\) emissions, with \(0\leq k\leq\infty\). As far as \(k=0\) is concerned, \(p_{0}\) is equal to one minus the probability of emitting at scales larger than \(\mu_{0}^{2}\), in turn equal to the non-emission probability in \((\mu_{0}^{2},\mu^{2})\) at \(x\). Thus, one needs to start with a definite choice for the latter; we begin by considering \(\textsc{NEP}^{(\textsc{R})}\) of eq. (3.7). Hence:
\[p_{0} = \,\textsc{NEP}^{(\textsc{R})}_{i_{0}}(x,\mu_{0}^{2},\mu^{2})= \frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}}(\mu_{0}^{2})}\frac{f_{i_{0}}(x,\mu_{0}^{2} )}{f_{i_{0}}(x,\mu^{2})}\,.\] (A.20) \[= \frac{f_{i_{0}}^{(0)}(x,\mu^{2})}{f_{i_{0}}(x,\mu^{2})}\,,\] (A.21)
having used eq. (A.10) in the second line. The case \(k=1\) corresponds to one emission at a scale \(\kappa_{1}^{2}\in(\mu_{0}^{2},\mu^{2})\), followed by one emission below \(\mu_{0}^{2}\). As far as the probability associated with the former is concerned, one first determines the scale at which it occurs by solving
\[r=\textsc{NEP}^{(\textsc{R})}_{i_{0}}(x,\kappa_{1}^{2},\mu^{2})\] (A.22)
for \(\kappa_{1}^{2}\), given a uniform random number \(r\). The solution is discarded if \(\kappa_{1}^{2}<\mu_{0}^{2}\), which gives the correct normalisation. Indeed, the distribution in \(\log\kappa_{1}^{2}\) induced by eq. (A.22) is:
\[\frac{\partial}{\partial\log\kappa_{1}^{2}}\,\textsc{NEP}^{(\textsc{R})}_{i_{ 0}}(x,\kappa_{1}^{2},\mu^{2})\,,\qquad\mu_{0}^{2}\leq\kappa_{1}^{2}\leq\mu^{2}\,,\] (A.23)
so that the total probability for such an emission is:
\[\int_{\mu_{0}^{2}}^{\mu^{2}}d\log\kappa_{1}^{2}\frac{\partial}{ \partial\log\kappa_{1}^{2}}\,\textsc{NEP}^{(\textsc{R})}_{i_{0}}(x,\kappa_{1}^ {2},\mu^{2}) = \,\textsc{NEP}^{(\textsc{R})}_{i_{0}}(x,\mu^{2},\mu^{2})-\textsc {NEP}^{(\textsc{R})}_{i_{0}}(x,\mu_{0}^{2},\mu^{2})\] (A.24) \[= \,1-\textsc{NEP}^{(\textsc{R})}_{i_{0}}(x,\mu_{0}^{2},\mu^{2})\,.\]
As was already reported in eq. (3.11), by means of an explicit computation we obtain:
\[\frac{\partial}{\partial\log\kappa_{1}^{2}}\,\mathrm{NEP}_{i_{0}}^{(\mathrm{R})}( x,\kappa_{1}^{2},\mu^{2})=\frac{1}{f_{i_{0}}(x,\mu^{2})}\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}} (\kappa_{1}^{2})}\Big{[}\big{(}\mathbb{W}\left[F\right]\big{)}_{i_{0}}(x, \kappa_{1}^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)}_{i_{0}}(x,\kappa_{1}^{2 })\Big{]}.\] (A.25)
Having computed the probability density for an emission at \(\kappa_{1}^{2}>\mu_{0}^{2}\), we need to multiply it by the probability of the next emission occurring below \(\mu_{0}^{2}\). This is functionally the same quantity as was computed for the \(k=0\) case in eq. (A.20); presently, we must use that result by replacing \(\mu^{2}\to\kappa_{1}^{2}\), and \(x\) and \(i_{0}\) with the momentum fraction \(y_{1}\) and parton type \(i_{1}\) that have resulted from the branching at \(\kappa_{1}^{2}\), which may have changed w.r.t. the original \(x\) and \(i_{0}\). In order to determine these, and the probabilities associated with their choices, we introduce the functions:
\[\mathcal{P}_{ij}(y;x,\kappa^{2})=\int_{y}^{1}\frac{d\omega}{\omega}\left[( \mathbb{O}^{\textsc{OUT}})_{ij}\left(\frac{x}{\omega},\kappa^{2}\right)+( \mathbb{O}_{R}^{\textsc{IN}})_{ij}\left(\frac{x}{\omega},\kappa^{2}\right) \right]f_{j}(\omega,\kappa^{2})\,,\] (A.26)
with \(y\geq x\); these are such that:
\[\sum_{j}\mathcal{P}_{ij}(x;x,\kappa^{2})=\big{(}\mathbb{W}\left[F\right]\big{)} _{i}(x,\kappa^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)}_{i}(x,\kappa^{2})\,.\] (A.27)
We first define \(i_{1}\) to be the smallest index that fulfills the following inequality:
\[r\leq\frac{\sum_{j}^{j\leq i_{1}}\mathcal{P}_{i_{0}j}(x;x,\kappa_{1}^{2})}{ \sum_{j}\mathcal{P}_{i_{0}j}(x;x,\kappa_{1}^{2})}\,,\] (A.28)
with \(r\) a uniform random number; in this way, the probability of obtaining a given \(i_{1}\) is equal to:
\[\frac{\mathcal{P}_{i_{0}i_{1}}(x;x,\kappa_{1}^{2})}{\big{(}\mathbb{W}\left[F \right]\big{)}_{i_{0}}(x,\kappa_{1}^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)} _{i_{0}}(x,\kappa_{1}^{2})}\,,\] (A.29)
owing to eq. (A.27). Next, we determine \(y_{1}\) by solving for it the equation:
\[1-r=\frac{\mathcal{P}_{i_{0}i_{1}}(y_{1};x,\kappa_{1}^{2})}{\mathcal{P}_{i_{0} i_{1}}(x;x,\kappa_{1}^{2})}\,,\] (A.30)
with \(r\) a uniform random number. Thus, the probability distribution associated with \(y_{1}\) is:
\[-\frac{\partial}{\partial y_{1}}\,\frac{\mathcal{P}_{i_{0}i_{1}}( y_{1};x,\kappa_{1}^{2})}{\mathcal{P}_{i_{0}i_{1}}(x;x,\kappa_{1}^{2})}=\] (A.31) \[\qquad\qquad\frac{1}{\mathcal{P}_{i_{0}i_{1}}(x;x,\kappa_{1}^{2}) }\frac{1}{y_{1}}\left[\big{(}\mathbb{O}^{\textsc{OUT}})_{i_{0}i_{1}}\left( \frac{x}{y_{1}},\kappa_{1}^{2}\right)+(\mathbb{O}_{R}^{\textsc{IN}})_{i_{0}i_{ 1}}\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)\right]f_{i_{1}}(y_{1},\kappa_{1 }^{2})\,.\]
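The three sampling steps just described (eq. (A.22) for the scale, eq. (A.28) for the flavour, eq. (A.30) for the momentum fraction) can be collected into a single backward step. The sketch below is a minimal Python rendering in which `NEP` and `Pfun` are placeholder callables for the chosen non-emission probability and for the functions \(\mathcal{P}_{ij}\) of eq. (A.26); it also assumes that the NEP is monotonic in its scale argument, which, as the discussion after eq. (A.34) cautions, need not hold for \(\textsc{NEP}^{(\textsc{R})}\).

```python
import bisect
import random
from scipy.optimize import brentq

def backward_step(i0, x, mu2, mu02, NEP, Pfun, flavours):
    """One backward emission: returns (i1, y1, kappa1sq), or None when the
    would-be emission scale falls below mu0^2 and is discarded."""
    r = random.random()
    if r < NEP(i0, x, mu02, mu2):              # no emission above mu0^2
        return None
    # eq. (A.22): invert r = NEP(x, kappa1^2, mu^2) for the emission scale
    kappa1sq = brentq(lambda k2: NEP(i0, x, k2, mu2) - r, mu02, mu2)
    # eq. (A.28): pick the flavour i1 from the cumulative weights P_{i0 j}
    weights = [Pfun(i0, j, x, x, kappa1sq) for j in flavours]
    cumul, tot = [], 0.0
    for w in weights:
        tot += w
        cumul.append(tot)
    i1 = flavours[bisect.bisect_left(cumul, random.random() * tot)]
    # eq. (A.30): solve 1 - r' = P(y1; x, kappa1^2) / P(x; x, kappa1^2) for y1
    rp = random.random()
    y1 = brentq(lambda y: Pfun(i0, i1, y, x, kappa1sq)
                - (1.0 - rp) * Pfun(i0, i1, x, x, kappa1sq), x, 1.0)
    return i1, y1, kappa1sq
```

Iterating `backward_step` until it returns `None` generates one shower history; the statistics of such histories reproduce the summands \(p_{k}\) assembled next.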
We finally obtain the sought probability by multiplying the results of eqs. (A.25), (A.29), (A.31), and (A.20) (with \(\mu^{2}\to\kappa_{1}^{2}\), \(i_{0}\to i_{1}\), and \(x\to y_{1}\) in the latter), by integrating over all possible intermediate scales \(\kappa_{1}^{2}\) and momentum fractions \(y_{1}\), and by summing over all
possible parton types \(i_{1}\):
\[p_{1} = \sum_{i_{1}}\int_{0}^{1}dy_{1}\int_{\log\mu_{0}^{2}}^{\log\mu^{2}}d \log\kappa_{1}^{2}\] (A.32) \[\quad\times\frac{1}{f_{i_{0}}(x,\mu^{2})}\frac{S_{i_{0}}(\mu^{2}) }{S_{i_{0}}(\kappa_{1}^{2})}\Big{[}\big{(}\mathbb{W}\left[F\right]\big{)}_{i_{ 0}}(x,\kappa_{1}^{2})+\big{(}\mathbb{Z}\left[F\right]\big{)}_{i_{0}}(x,\kappa_ {1}^{2})\Big{]}\] \[\quad\times\frac{\mathcal{P}_{i_{0}i_{1}}(x;x,\kappa_{1}^{2})}{ \big{(}\mathbb{W}\left[F\right]\big{)}_{i_{0}}(x,\kappa_{1}^{2})+\big{(} \mathbb{Z}\left[F\right]\big{)}_{i_{0}}(x,\kappa_{1}^{2})}\] \[\quad\times\frac{1}{\mathcal{P}_{i_{0}i_{1}}(x;x,\kappa_{1}^{2}) }\frac{1}{y_{1}}\left[\left(\mathbb{O}^{\textsc{OUT}}\right)_{i_{0}i_{1}} \left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)+(\mathbb{O}_{R}^{\textsc{IN}})_{i_ {0}i_{1}}\left(\frac{x}{y_{1}},\kappa_{1}^{2}\right)\right]f_{i_{1}}(y_{1}, \kappa_{1}^{2})\] \[\quad\times\frac{S_{i_{1}}(\kappa_{1}^{2})}{S_{i_{1}}(\mu_{0}^{2 })}\frac{f_{i_{1}}(y_{1},\mu_{0}^{2})}{f_{i_{1}}(y_{1},\kappa_{1}^{2})}\,.\]
Therefore, from eq. (A.18):
\[p_{1}=\frac{f_{i_{0}}^{(1)}(x,\mu^{2})}{f_{i_{0}}(x,\mu^{2})}\,.\] (A.33)
This procedure can manifestly be iterated, to obtain:
\[p_{k}=\frac{f_{i_{0}}^{(k)}(x,\mu^{2})}{f_{i_{0}}(x,\mu^{2})}\quad\implies \quad\sum_{k=0}^{\infty}p_{k}=1\,,\] (A.34)
having exploited the inverse Mellin transform of eq. (A.6).
Equation (A.34) shows that an evolution generated by means of \(\textsc{NEP}^{(\textsc{R})}\) of eq. (3.7) and of the functions of eq. (A.26) for the backward steps in the scale and \(x\) spaces, respectively, allows one to recover the PDF used to guide the evolution. However, it should be clear that this conclusion is affected by a number of fallacies that have to do with the NEP possibly being non-monotonic and not bounded from above by one, as well as the probabilities of eq. (A.29) possibly being negative. Both aspects ultimately have to do with the fact that \(\textsc{NEP}^{(\textsc{R})}\) of eq. (3.7) also accounts for non-resolvable contributions; as such, it is consistent that it agrees (although only formally) in the sense of eq. (A.34) with the PDF, whose form is determined by both resolved and non-resolved contributions. Moreover, note that eq. (A.26) is _not_ what is currently employed in practical MC implementations, which rather correspond to that form with the \(\mathbb{O}^{\textsc{OUT}}\) contribution removed (see e.g. eq. (4.28)): the proof above shows that, by doing so, one does not recover the PDF after the evolution.
One can repeat this procedure by adopting the true NEP of either eq. (3.4) or eq. (3.5) (the two coincide). The analogue of eq. (A.25) is (see eq. (3.10)):
\[\frac{\partial}{\partial\log\kappa_{1}^{2}}\,\textsc{NEP}_{i_{0}}(x,\kappa_{1 }^{2},\mu^{2})=\frac{1}{f_{i_{0}}(x,\mu^{2})}\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0 }}(\kappa_{1}^{2})}\left(\mathbb{Z}\left[F\right]\right)_{i_{0}}(x,\kappa_{1} ^{2})\,.\] (A.35)
Because of this result, the analogues of the functions \(\mathcal{P}_{ij}\) to be employed in this case are obtained from those in eq. (A.26) by removing the \(\mathbb{O}^{\textsc{OUT}}\) contribution there. By doing so, one arrives at the analogue of eq. (A.34), which reads:
\[p_{k}=\frac{1}{f_{i_{0}}(x,\mu^{2})}\left(f_{i_{0}}^{(k)}(x,\mu^{2})\Big{|}_{ \mathbb{O}^{\textsc{OUT}}\to 0}\right)\quad\implies\quad\sum_{k=0}^{ \infty}p_{k}\neq 1\,.\] (A.36)
This result need not be surprising: \(\text{NEP}_{i}\) correctly accounts for resolved emissions only, while, as was already said, an actual PDF includes non-resolved contributions. It may appear that the PDF could be recovered in the context of this evolution by including a branching-by-branching correction factor equal to:
\[\left[(\mathbb{O}^{\text{OUT}})_{i_{p-1}i_{p}}\left(\frac{y_{p-1}}{y_{p}},\kappa_ {p}^{2}\right)+(\mathbb{O}_{R}^{\text{IN}})_{i_{p-1}i_{p}}\left(\frac{y_{p-1}} {y_{p}},\kappa_{p}^{2}\right)\right]\Bigg{/}(\mathbb{O}_{R}^{\text{IN}})_{i_{p -1}i_{p}}\left(\frac{y_{p-1}}{y_{p}},\kappa_{p}^{2}\right)\,, \tag{A.37}\]
as well as correcting by means of the ratio \(\text{NEP}_{i_{0}}^{(\text{R})}/\text{NEP}_{i_{0}}\) the zero-emission contribution. Unfortunately, eq. (A.37) does not work: \(\mathbb{O}^{\text{OUT}}\) and \(\mathbb{O}_{R}^{\text{IN}}\) have non-overlapping supports in the \(x\) space, and here \(y_{p}\) has been generated by using only the latter operator; it follows that the ratio of eq. (A.37) is in practice always equal to one.
Finally, when adopting \(\text{NEP}^{(\text{E})}\) (10) for the NEP, the analogue of eq. (A.25) reads as follows (see eq. (30)):
\[\frac{\partial}{\partial\log\kappa_{1}^{2}}\,\text{NEP}_{i_{0}}^{ (\text{E})}(x,\kappa_{1}^{2},\mu^{2})= \tag{A.38}\] \[\frac{1}{f_{i_{0}}(x,\mu^{2})}\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0} }(\kappa_{1}^{2})}\left(\mathbb{Z}\left[F\right]\right)_{i_{0}}(x,\kappa_{1}^ {2})\,\exp\!\left[\int_{\kappa_{1}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2 }}\frac{1}{f_{i_{0}}(x,\kappa^{2})}\left(\mathbb{W}\left[F\right]\right)_{i_{0 }}(x,\kappa^{2})\right],\]
having employed eq. (28). The similarity of this result with that of eq. (A.35) suggests that also in this case one can obtain a PDF that stems from keeping only resolved emissions by including a branching-by-branching correction factor equal to:
\[\exp\left[\int_{\kappa_{p}^{2}}^{\kappa_{p-1}^{2}}\frac{d\kappa^{2}}{\kappa^{ 2}}\frac{1}{f_{i_{p-1}}(y_{p-1},\kappa^{2})}\left(\mathbb{W}\left[F\right] \right)_{i_{p-1}}(y_{p-1},\kappa^{2})\right]. \tag{A.39}\]
Since \(\mathbb{W}[F]\) has no definite sign, this factor can be larger or smaller than one. This is connected with the fact that the \(\mathcal{O}(\alpha_{S}^{2})\) coefficient in eq. (100) has no definite sign, and its cumulative effect over successive backward branchings is therefore typically smaller than naive coupling-constant power counting would suggest, a point borne out in practice by the results shown in fig. 1.
## Appendix B An academic model
A different perspective on the three forms of NEP considered in this paper can be obtained in the context of an academic model, defined so that the only branchings are of virtual origin. This can be achieved by setting:
\[\mathbb{A}=\mathbb{C}=0\,. \tag{B.1}\]
Before proceeding, we stress that eq. (B.1) defines the virtual contribution in a unique manner only because one understands eq. (3) so that, from eq. (15)
\[\overline{\mathbb{B}}=\mathbb{B}\,. \tag{B.2}\]
A different model which still sets the real splitting kernels equal to zero can be defined as follows:
\[\widetilde{\mathbb{A}}=\mathbb{C}=0 \tag{B.3}\]
which understands the form of eq. (5), so that from eq. (16):
\[\overline{\mathbb{B}}=\widetilde{\mathbb{B}}\,. \tag{B.4}\]
Thus: \(\mathbb{A}=0\) implies eq. (B.2), while \(\widetilde{\mathbb{A}}=0\) implies eq. (B.4), with \(\mathbb{B}\) and \(\widetilde{\mathbb{B}}\) still related to one another by eq. (6), with \(\widetilde{\mathbb{A}}\neq 0\) there. For example, in the case of the gluon at the LO in QCD:
\[A_{g}(z)=0 \implies \frac{2\pi}{\alpha_{s}}\,\overline{B}_{g}=-\frac{C_{A}+4T_{F}N_{ F}}{6}\,, \tag{B.5}\] \[\widetilde{A}_{g}(z)=0 \implies \frac{2\pi}{\alpha_{s}}\,\overline{B}_{g}=\gamma(g)\,. \tag{B.6}\]
Having clarified this point, we can solve the PDF evolution equations and consider the MC-generated backward evolution for the model of eq. (B.1). We do so by employing the parameter \(\lambda\) introduced in eq. (49). We can solve directly the PDF evolution equations, and obtain:
\[F(\mu^{2})=\frac{\mathbb{S}_{\lambda=0}(\mu^{2})}{\mathbb{S}_{\lambda=0}(\mu_ {0}^{2})}\,F(\mu_{0}^{2})\,. \tag{B.7}\]
With eq. (27) we have instead
\[F(\mu^{2})=\frac{\mathbb{S}(\mu^{2})}{\mathbb{S}(\mu_{0}^{2})}\,F(\mu_{0}^{2} )+\lambda\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{ \mathbb{S}(\mu^{2})}{\mathbb{S}(\kappa^{2})}\,\overline{\mathbb{B}}^{\text{ OUT}}(\kappa^{2})F(\kappa^{2})\,. \tag{B.8}\]
These two solutions coincide (as they should), since by using eq. (B.7) in the second term on the r.h.s. of eq. (B.8) one obtains:
\[\lambda\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\kappa^{2}}{\kappa^{2}}\frac{ \mathbb{S}(\mu^{2})}{\mathbb{S}(\kappa^{2})}\,\overline{\mathbb{B}}^{\text{ OUT}}(\kappa^{2})F(\kappa^{2})=\frac{\mathbb{S}_{\lambda=0}(\mu^{2})}{ \mathbb{S}_{\lambda=0}(\mu_{0}^{2})}\,F(\mu_{0}^{2})-\frac{\mathbb{S}(\mu^{2 })}{\mathbb{S}(\mu_{0}^{2})}\,F(\mu_{0}^{2})\,, \tag{B.9}\]
which shows explicitly, in this simple case, the cancellation of dependence on \(\lambda\) on the r.h.s. of eq. (27).
We now consider an MC backward evolution. We start by noting that:
\[\text{NEP}_{i}=1\,,\qquad\text{NEP}_{i}^{(\text{E})}=1\,,\qquad\text{NEP}_{i} ^{(\text{R})}=\frac{S_{i,\lambda=0}(\mu_{0}^{2})}{S_{i,\lambda=0}(\mu^{2})}\, \frac{S_{i}(\mu^{2})}{S_{i}(\mu_{0}^{2})}\,. \tag{B.10}\]
The leftmost result in eq. (B.10) is what we expect in view of the physical interpretation of \(\text{NEP}_{i}\): only strictly resolved emissions may contribute to it, and in this model no resolved emissions can be generated; thus, the NEP must be equal to one. The middle result in eq. (B.10) shows that this model is too simple to allow one to distinguish the behaviour of \(\text{NEP}_{i}^{(\text{E})}\) from that of \(\text{NEP}_{i}\): the spurious terms of \(\mathbb{W}[F]\) origin potentially present in the former case according to eq. (12) are all identically equal to zero, being proportional to \(\mathbb{Z}[F]=0\). Therefore, also in this case the NEP is equal to one. Finally, the rightmost
result in eq. (B.10) shows that for any \(\lambda\neq 0\) the NEP is not equal to one since it receives, independently from one another, both resolved and unresolved contributions, and the latter are different from zero owing to the virtual contribution to \(\mathbb{W}[F]\) (see eq. (36)). Having said that, we point out that, in view of eq. (B.7), a sound probabilistic interpretation of \(\text{NEP}^{(\text{R})}_{i}\) requires that:
\[S_{i}(\mu^{2})<S_{i}(\mu_{0}^{2})\quad\Longleftrightarrow\quad\overline{B}^{ \text{\tiny IN}}_{i}\equiv\overline{B}_{i}-\lambda\overline{B}^{\text{\tiny OUT }}_{i}<0\,,\] (B.11)
for any \(\mu^{2}>\mu_{0}^{2}\). Therefore, in the simplest scenario \(\lambda=0\), this happens for the gluon (see eq. (B.5)) in the context of the model of eq. (B.1) we work with; however, this would _not_ happen had we chosen the different model of eq. (B.3) (see eq. (B.6)). This simple example confirms a general fact that has already been inferred before, namely that the interpretation of \(\text{NEP}^{(\text{R})}_{i}\) as a NEP might become problematic.
If we want to obtain the Neumann summands of eq. (A.16) relevant to the model of eq. (B.1) we need to use the fact that:
\[(\mathbb{O}^{\text{\tiny OUT}})_{ij}\left(z,\kappa^{2}\right)=\lambda\delta_{ ij}\overline{B}^{\text{\tiny OUT}}_{i}(\kappa^{2})\delta(1-z)\,,\qquad( \mathbb{O}^{\text{\tiny IN}}_{R})_{ij}\left(z,\kappa^{2}\right)=0\,.\] (B.12)
The absence of off-diagonal terms leads to an immediate dramatic simplification of eq. (A.16), which becomes:
\[f^{(k)}_{i_{0}}(x,\mu^{2})=\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}}(\mu_{0}^{2})}\, f_{i_{0}}(x,\mu_{0}^{2})\,\frac{\lambda^{k}}{k!}\left[\int_{\mu_{0}^{2}}^{\mu^{2}} \frac{d\kappa^{2}}{\kappa^{2}}\,\overline{B}^{\text{\tiny OUT}}_{i_{0}}(\kappa ^{2})\right]^{k}\,,\] (B.13)
a result that is also valid for \(k=0\). By summing over \(k\) one finds again the solution of eq. (B.7), as one must by construction. Moreover, as we have previously learned, eq. (B.13) can be seen as the MC contribution to the PDFs due to showers that feature \(k\) emissions. However, for this to be true in the model defined by eq. (B.1), one would have to have chosen \(\text{NEP}^{(\text{R})}_{i}\) as the NEP, since that is the only nontrivial NEP in this context (see eq. (B.10)). Therefore, this simple example confirms the previous general findings, namely that while \(\text{NEP}^{(\text{R})}_{i}\) is not the correct non-emission probability, it nevertheless formally allows one to recover the PDF given in input. Conversely, if \(\text{NEP}_{i}\) (or \(\text{NEP}^{(\text{E})}_{i}\)) had been adopted, both matrix elements in the analogue of eq. (B.12) would be equal to zero, leading to a Neumann series whose terms would all be equal to zero bar the first. The latter then coincides with the reconstructed PDF, and reads as follows:
\[f^{(0)}_{i_{0}}(x,\mu^{2})=\frac{S_{i_{0}}(\mu^{2})}{S_{i_{0}}(\mu_{0}^{2})}\, f_{i_{0}}(x,\mu_{0}^{2})\,.\] (B.14)
This is in general different from the exact solution of eq. (B.7). Clearly, the model of eq. (B.1) is maximally perverse, since all emissions are unresolved; it is therefore not particularly surprising that evolutions based on NEPs that can only account for resolved emissions fail.
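The Poisson-like structure of eq. (B.13) is easy to verify numerically. Below is a minimal sketch with constant kernels; all numerical values are illustrative, and the sign convention adopted for \(S_{i}\) is the one fixed by requiring agreement with eq. (B.7).

```python
import numpy as np
from math import factorial

Bbar     = -0.5    # \bar{B}   (virtual coefficient, illustrative)
Bbar_out = -0.2    # \bar{B}^OUT
lam      = 0.7     # the parameter lambda
dt       = 3.0     # log(mu^2 / mu0^2)
f0       = 1.0     # f(x, mu0^2); x is inert since A = C = 0

# With constant kernels: S(mu^2)/S(mu0^2) = exp[(Bbar - lam*Bbar_out) * dt]
S_ratio = np.exp((Bbar - lam * Bbar_out) * dt)
exact   = np.exp(Bbar * dt) * f0              # eq. (B.7), lambda independent

# eq. (B.13): sum over the number k of O^OUT insertions
series = sum(S_ratio * f0 * (lam * Bbar_out * dt)**k / factorial(k)
             for k in range(40))
print(series, exact)    # agreement for any lambda, as required
```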
|
2309.14699 | Triviality of the automorphism group of the multiparameter quantum
affine $n$-space | A multiparameter quantum affine space of rank $n$ is the $\mathbb F$-algebra
generated by indeterminates $X_1, \cdots, X_n$ satisfying $X_iX_j = q_{ij}
X_jX_i \ (1 \le i < j \le n)$ where $q_{ij}$ are nonzero scalars in $\mathbb
F^\ast$. The corresponding quantum torus is generated by the $X_i$ together
with their inverses subject to the same relations. So far the automorphisms of
a quantum affine space have been considered mainly in the uniparameter case,
that is, $q_{ij} = q$. We remove this restriction here. Necessary and
sufficient conditions are obtained for the quantum affine space to be rigid,
that is, the only automorphisms are the trivial ones arising from the action of
the torus $(\mathbb F^\ast)^n$. These conditions are based on the
multiparameters $q_{ij}$ and also on the subgroup of $\mathbb F^\ast$ generated
by these multiparameters. We employ the results in J. Alev and M. Chamarie,
Dérivations et automorphismes de quelques algèbres quantiques, Communications
in Algebra, 1992 (20), 1787-1802, and point out a small error in a main theorem
in this paper which however remains valid with a small modification. We also
note that a quantum affine space whose corresponding quantum torus has
dimension one necessarily has a trivial automorphism group. This is a
consequence of a result of J. M. Osborn, D. S. Passman, Derivations of Skew Polynomial Rings, J. Algebra, 1995, 176, 417-448. We expand the known list of
examples of quantum tori that have dimension one and are thus hereditary
noetherian domains. | Ashish Gupta, Sugata Mandal | 2023-09-26T06:24:37Z | http://arxiv.org/abs/2309.14699v1 | # Triviality of the automorphism group of the multiparameter quantum affine \(n\)-space
###### Abstract.
A multiparameter quantum affine space of rank \(n\) is the \(\mathbb{F}\)-algebra generated by indeterminates \(X_{1},\cdots,X_{n}\) satisfying \(X_{i}X_{j}=q_{ij}X_{j}X_{i}\) (\(1\leq i<j\leq n\)) where \(q_{ij}\) are nonzero scalars in \(\mathbb{F}^{*}\). The corresponding quantum torus is generated by the \(X_{i}\) and together with their inverses subject to the same relations. So far the automorphisms of a quantum affine space have been considered mainly in the uniparametric case, that is, \(q_{ij}=q\). We remove this restriction here.
Necessary and sufficient conditions are obtained for the quantum affine space to be rigid, that is, the only automorphisms are the trivial ones arising from the action of the torus \((\mathbb{F}^{*})^{n}\). These conditions are based on the multiparameters \(q_{ij}\) and also on the subgroup of \(\mathbb{F}^{*}\) generated by these multiparameters.
We employ the results in J. Alev and M. Chamarie, Dérivations et automorphismes de quelques algèbres quantiques, Communications in Algebra, 1992 (20), 1787-1802, and point out a small error in a main theorem in this paper which, however, remains valid with a small modification.
We also note that a quantum affine space whose corresponding quantum torus has dimension one necessarily has a trivial automorphism group. This is a consequence of a result of J. M. Osborn, D. S. Passman, Derivations of Skew Polynomial Rings, J. Algebra, 1995, 176, 417-448. We expand the known list of examples of quantum tori that have dimension one and are thus hereditary noetherian domains.
**Keywords.** quantum torus, quantum affine space, automorphism, dimension, hereditary ring
**2010 Math. Subj. Class.**: 16S38; 16S35; 16S36; 16W20
###### Contents
* 1 Introduction
* 2 The Quantum Torus
* 2.1 Twisted Group Algebra Structure
* 2.2 The commutator map \(\lambda\)
* 2.3 Dimension
* 2.4 The automorphism group of a quantum torus
* 3 Proof of Theorem 1
* 4 Proof of Theorem 2
* 5 \(\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\) when \(\dim(\widehat{\mathcal{O}}_{\mathfrak{q}})=1\)
## 1. Introduction
Quantum affine spaces and their localizations (known as quantum tori) are known to play a key role in the theory of quantum groups [22, 16] and also in non-commutative geometry [26]. The quantum tori arise also in Lie theory as coordinate structures of extended affine Lie algebras [21] and in the representation theory of nilpotent groups [23]. The automorphisms of these algebras have been considered in [5], [11], [22] and [21]. The substantial paper [9] considers the automorphisms of certain completions of a quantum torus algebra in connection with the automorphisms of quantum enveloping algebras. In this last paper it is noted that the automorphisms of a quantum affine space of certain types extend to bifinite unipotent automorphisms of the completion. The automorphisms of quantum division rings are studied in [10].
Let us briefly recall the definitions. Let \(\mathbb{F}\) be a field and \(\mathfrak{q}=(q_{ij})\) be a multiplicatively antisymmetric \(n\times n\)-matrix with entries in \(\mathbb{F}^{*}\). This means that \(q_{ii}=1\) and \(q_{ji}=q_{ij}^{-1}\). A (rank-\(n\)) quantum affine space \(\mathcal{O}_{\mathfrak{q}}\) is the \(\mathbb{F}\)-algebra generated by indeterminates \(X_{1},\cdots,X_{n}\) subject to the relations \(X_{i}X_{j}=q_{ij}X_{j}X_{i}\) (\(1\leq i<j\leq n\)); the corresponding quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) is generated by the \(X_{i}\) together with their inverses subject to the same relations.
## 2. The Quantum Torus
### Twisted Group Algebra Structure
Let \(\Gamma:=\mathbb{Z}^{n}\). A twisted group algebra \(\mathbb{F}*\Gamma\) is an \(\mathbb{F}\)-algebra which has a copy \(\bar{\Gamma}:=\{\bar{\gamma}\mid\gamma\in\Gamma\}\) for an \(\mathbb{F}\)-basis satisfying
\[\bar{\gamma}\bar{\gamma}^{\prime}=f(\gamma,\gamma^{\prime})\overline{\gamma \gamma^{\prime}},\qquad\gamma,\gamma^{\prime}\in\Gamma.\]
for a suitable \(2\)-cocycle \(f:\Gamma\times\Gamma\to\mathbb{F}^{*}\). A rank-\(n\) quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) has such a structure. Indeed if we define a map \(\Gamma\to\widehat{\mathcal{O}}_{\mathfrak{q}}\) via
\[(\gamma_{1},\cdots,\gamma_{n})=:\gamma\mapsto\mathbf{X}^{\gamma}:=X_{1}^{ \gamma_{1}}\cdots X_{n}^{\gamma_{n}}\]
then it can be checked that the above conditions are satisfied.
Clearly each element \(\alpha\) of a twisted group algebra \(\mathbb{F}*\Gamma\) may be expressed as a finite sum \(\alpha=\sum_{\gamma\in\Gamma}a_{\gamma}\bar{\gamma}\) (\(a_{\gamma}\in\mathbb{F}\)). The subset of \(\gamma\in\Gamma\) such that \(a_{\gamma}\neq 0\) is called the support of \(\alpha\) and denoted as \(\operatorname{Supp}(\alpha)\). For a subgroup \(B\leq\Gamma\) the subset
\[\{\alpha\in\mathbb{F}*\Gamma\mid\operatorname{Supp}(\alpha)\subseteq B\}\]
is a twisted group algebra of \(B\) over \(\mathbb{F}\) and is denoted as \(\mathbb{F}*B\).
### The commutator map \(\lambda\)
In a quantum torus the monomials \(\mathbf{X}^{\gamma}:=X_{1}^{\gamma_{1}}\cdots X_{n}^{\gamma_{n}}\) are units and the group-theoretic commutator \([\mathbf{X}^{\gamma},\mathbf{X}^{\gamma^{\prime}}]\) defined as
\[[\mathbf{X}^{\gamma},\mathbf{X}^{\gamma^{\prime}}]:=\mathbf{X}^{\gamma} \mathbf{X}^{\gamma^{\prime}}(\mathbf{X}^{\gamma})^{-1}(\mathbf{X}^{\gamma^{ \prime}})^{-1}\]
yields an alternating bi-character (e.g., [11, Section 1])
\[\lambda:\Gamma\times\Gamma\to\mathbb{F}^{*},\qquad\lambda(\gamma,\gamma^{ \prime})=[\mathbf{X}^{\gamma},\mathbf{X}^{\gamma^{\prime}}],\qquad\gamma, \gamma^{\prime}\in\Gamma. \tag{2}\]
In particular,
\[\lambda(\mathbf{e}_{i},\mathbf{e}_{j})=[X_{i},X_{j}]=q_{ij},\qquad\forall 1\leq i,j\leq n,\]
where \(\mathbf{e}_{1},\cdots\mathbf{e}_{n}\) are the standard basis vectors of the \(\mathbb{Z}\)-module \(\Gamma\).
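For a concrete feel for \(\lambda\), moving \(X_{i}^{\gamma_{i}}\) past \(X_{j}^{\gamma^{\prime}_{j}}\) factor by factor gives \(\lambda(\gamma,\gamma^{\prime})=\prod_{i<j}q_{ij}^{\gamma_{i}\gamma^{\prime}_{j}-\gamma_{j}\gamma^{\prime}_{i}}\). The following small symbolic sketch (an illustration only, with \(n=3\)) checks the alternating bi-character properties:

```python
from sympy import symbols, prod, simplify

n = 3
q12, q13, q23 = symbols('q12 q13 q23', nonzero=True)
Q = {(1, 2): q12, (1, 3): q13, (2, 3): q23}   # the multiparameters q_ij, i < j

def lam(g, gp):
    """lambda(g, g') = prod_{i<j} q_ij**(g_i g'_j - g_j g'_i), cf. eq. (2)."""
    return prod(Q[(i, j)]**(g[i-1]*gp[j-1] - g[j-1]*gp[i-1])
                for i in range(1, n+1) for j in range(i+1, n+1))

g, gp = (2, -1, 0), (1, 3, -2)
assert simplify(lam(g, gp) * lam(gp, g)) == 1        # alternating property
assert simplify(lam((3, 2, -2), gp)
                - lam(g, gp) * lam((1, 3, -2), gp)) == 0  # bicharacter in slot 1
print(lam((1, 0, 0), (0, 1, 0)))                     # recovers q12
```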
### Dimension
As noted above a quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) is a twisted group algebra \(\mathbb{F}*\Gamma\). It was shown in [23, Theorem A] that the dimension of a quantum torus equals the supremum of the ranks of subgroups \(B\leq\Gamma\) such that the subalgebra \(\mathbb{F}*B\) is commutative (note that \(\mathbb{F}*C\) is commutative for any cyclic subgroup \(C\leq\Gamma\)). It follows that \(\dim(\widehat{\mathcal{O}}_{\mathfrak{q}})\) equals the cardinality of a maximal system of independent commuting monomials in \(\widehat{\mathcal{O}}_{\mathfrak{q}}\).
### The automorphism group of a quantum torus
It is known (e.g., [12]) that the units of a quantum torus algebra are trivial, that is, are of the form \(a\mathbf{X}^{\gamma}\), where \(a\in\mathbb{F}^{*}\). By \(\operatorname{Aut}_{\mathbb{F}}(\widehat{\mathcal{O}}_{\mathfrak{q}})\) we denote the group of all \(\mathbb{F}\)-automorphisms of the quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\). It is easily seen (e.g., [11]) that the action of the group \(\operatorname{Aut}_{\mathbb{F}}(\widehat{\mathcal{O}}_{\mathfrak{q}})\) on the quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) induces an action of this same group on the group \(\mathscr{U}\) of trivial units fixing \(\mathbb{F}^{*}\) element-wise. There is thus an action of this same group \(\operatorname{Aut}_{\mathbb{F}}(\widehat{\mathcal{O}}_{\mathfrak{q}})\) on the quotient \(\mathscr{U}/\mathbb{F}^{*}\cong\Gamma\) yielding a homomorphism
\[\operatorname{Aut}_{\mathbb{F}}(\widehat{\mathcal{O}}_{\mathfrak{q}})\longrightarrow \operatorname{Aut}\Gamma=\operatorname{GL}(n,\mathbb{Z}) \tag{3}\]
whose kernel is the group of all _scalar automorphisms_ defined by \(\psi(\mathbf{X}^{\gamma})=\phi(\gamma)(\mathbf{X}^{\gamma})\) for \(\phi\in\operatorname{Hom}(\Gamma,\mathbb{F}^{*})\)[11]. Clearly this kernel can be identified with the algebraic torus \((\mathbb{F}^{*})^{n}\).
By [11, Lemma 3.3(iii)] the image (in \(\operatorname{GL}(n,\mathbb{Z})\)) of the map in (3) is the subgroup of all \(\sigma\in\operatorname{GL}(n,\mathbb{Z})\) such that
\[\lambda(\sigma\gamma,\sigma\gamma^{\prime})=\lambda(\gamma,\gamma^{\prime}) \qquad\quad\forall\gamma,\gamma^{\prime}\in\Gamma. \tag{4}\]
This subgroup is denoted \(\operatorname{Aut}(\mathbb{Z}^{n},\lambda)\) and is known as the _nonscalar automorphism group_. By the foregoing discussion we obtain the following exact sequence noted in [21]:
\[1\to(\mathbb{F}^{*})^{n}\to\operatorname{Aut}_{\mathbb{F}}(\widehat{\mathcal{O} }_{\mathfrak{q}})\to\operatorname{Aut}(\mathbb{Z}^{n},\lambda)\to 1. \tag{5}\]
## 3. Proof of Theorem 1
**Definition 3.1** (Section 1.4 of [5]).: An automorphism \(\sigma\) of \(\mathcal{O}_{\mathfrak{q}}\) is called linear if it has the form
\[\sigma(X_{i})=\sum_{j=1}^{n}\alpha_{ij}X_{j}\qquad\forall i\in\{1,\cdots,n\}, \qquad(\alpha_{ij})\in\operatorname{GL}(n,\mathbb{F}). \tag{6}\]
For a matrix \((\alpha_{ij})\in\operatorname{GL}(n,\mathbb{F})\) to define an automorphism as in (6) the following necessary and sufficient conditions must hold ([5]):
\[\alpha_{ik}\alpha_{jl}(1-q_{ij}q_{lk})=\alpha_{il}\alpha_{jk}(q_{ij}-q_{lk}) \qquad\forall i<j,\ \ \forall k\leq l. \tag{7}\]
The last equation may be re-written as
\[\alpha_{ik}\alpha_{jl}(q_{kl}-q_{ij})=\alpha_{il}\alpha_{jk}(q_{kl}q_{ij}-1) \qquad\forall i<j,\ \ \forall k\leq l. \tag{8}\]
Setting \(k=l\) in the last equation we obtain
\[\alpha_{ik}\alpha_{jk}(1-q_{ij})=\alpha_{ik}\alpha_{jk}(q_{ij}-1)\qquad \forall i<j,\ \ \forall k\in\{1,\cdots,n\}. \tag{9}\]
**Observation**.: Clearly, the last equation means that if \(\operatorname{char}(\mathbb{F})\neq 2\) and none of the multiparameters \(q_{ij}\ (i<j)\) equals unity, then at least one of the coefficients \(\alpha_{ik}\) and \(\alpha_{jk}\) vanishes. It is immediate that in this case the nonsingular matrix \((\alpha_{ij})\) has exactly one nonzero entry in each row and each column.
The next proposition is an easy consequence of the preceding observation.
**Proposition 3.1**.: _Suppose that \(\operatorname{char}(\mathbb{F})\neq 2\) and the entries of \(\mathfrak{q}\) satisfy_
\[q_{ij}\neq 1,\qquad\qquad\forall 1\leq i<j\leq n.\]
_Then each linear automorphism \(\sigma\) of \(\mathcal{O}_{\mathfrak{q}}\) has the form \(X_{i}\to k_{i}X_{\sigma(i)}\) for a suitable permutation \(\sigma\) of the variables \(X_{i}\) where \(k_{i}\in\mathbb{F}^{*}\). Thus,_
\[\operatorname{Aut}_{\operatorname{L}}(\mathcal{O}_{\mathfrak{q}})\cong( \mathbb{F}^{*})^{n}\rtimes\mathcal{P} \tag{10}\]
_for a subgroup \(\mathcal{P}\) of \(S_{n}\)._
For a permutation \(\sigma\) we denote by \(\mathfrak{m}_{\sigma}\) the corresponding permutation matrix.
**Proposition 3.2**.: ([11, Remark 3.2]) _Suppose that \(n\geq 3\). Let \(\mathfrak{q}=(q_{ij})\) be a multiplicatively antisymmetric matrix. For a permutation \(\sigma\) the corresponding change of variables \(x_{i}\to x_{\sigma(i)}\) is an automorphism of \(\mathcal{O}_{\mathfrak{q}}\) if and only if_
\[q_{ij}=q_{\sigma(i)\sigma(j)},\qquad\qquad\forall 1\leq i<j\leq n. \tag{11}\]
_Equivalently,_
\[\mathfrak{q}\mathfrak{m}_{\sigma}=\mathfrak{m}_{\sigma}\mathfrak{q}.\]
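Condition (11) is straightforward to test mechanically. In the sketch below the multiparameters are represented as \(q_{ij}=q^{E_{ij}}\) for a single \(q\) that is not a root of unity, an assumption made only so that equality of entries can be decided exactly through the integer exponents \(E_{ij}\); the matrix chosen is a hypothetical example.

```python
from itertools import permutations

# Hypothetical exponent matrix: q_ij = q**E[i][j], multiplicatively antisymmetric.
E = [[ 0,  1,  2],
     [-1,  0,  1],
     [-2, -1,  0]]
n = len(E)

def satisfies_11(sigma):
    """Condition (11): q_ij = q_{sigma(i) sigma(j)} for all i < j."""
    return all(E[i][j] == E[sigma[i]][sigma[j]]
               for i in range(n) for j in range(i + 1, n))

print([s for s in permutations(range(n)) if satisfies_11(s)])
# Only the identity survives here, so no nontrivial change of variables
# x_i -> x_{sigma(i)} is an automorphism for this choice of parameters.
```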
**Remark 3.1**.: _In the situation of Proposition 3.1, suppose moreover that \(q_{ij}\neq-1\) for all \(i<j\). If a non-identity permutation \(\sigma\in\mathcal{P}\) is decomposed into cycles, then every cycle in this decomposition of \(\sigma\) must have odd length. Indeed, if \((i_{1}i_{2})\) is a 2-cycle in the decomposition of \(\sigma\), then in view of (11) we have \((q_{i_{1}i_{2}})^{2}=1\). Again, if \((i_{1}i_{2}i_{3}\cdots i_{m})\) is an \(m\)-cycle (\(m\geq 4\)) with \(m\) even, then by (11) we have_
\[(q_{i_{m/2},i_{m}})^{-1}=q_{i_{1},i_{m/2+1}}=q_{i_{2},i_{m/2+2}}=\cdots=q_{i_{m/2-1},i_{m-1}}=q_{i_{m/2},i_{m}}\]
_whence \((q_{i_{m/2},i_{m}})^{2}=1\)._
Let \(\operatorname{Der}(\mathcal{O}_{\mathfrak{q}})\) denote the module of derivations of \(\mathcal{O}_{\mathfrak{q}}\). We recall the submodule \(E\) of \(\operatorname{Der}(\mathcal{O}_{\mathfrak{q}})\) in [5]. Suppose
\[\Lambda_{i}=\{\nu\in\mathbb{N}^{n}:\nu_{i}=0\ \ \text{and}\ \ \prod_{k}q_{kj}^{\nu_{k}}=q_{ij}\ \ \text{for all}\ \ j\ \ \text{such that}\ \ j\neq i\}.\]
For all \(\nu\in\Lambda_{i}\) there exists a derivation \(D_{i\nu}\) of \(\mathcal{O}_{\mathfrak{q}}\) defined by
\[D_{i\nu}(x_{j})=\delta_{ij}\mathbf{x}^{\nu}\]
for all \(j\). Note that \(D_{i\nu}^{2}=0\). We then define
\[E=\oplus_{i}(\oplus_{\nu\in\Lambda_{i}}\mathcal{O}_{\mathfrak{q}}D_{i\nu}).\]
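Whether \(E=0\) can likewise be tested by a finite search once the multiparameters are powers of a single \(q\) that is not a root of unity, since the defining conditions of \(\Lambda_{i}\) then become linear equations on the exponents. A brute-force sketch (hypothetical data; only a finite window of \(\mathbb{N}^{n}\) is scanned) is:

```python
from itertools import product

E = [[ 0,  1,  2],      # q_ij = q**E[i][j] with q not a root of unity
     [-1,  0,  1],
     [-2, -1,  0]]
n, BOUND = len(E), 6    # entries of nu scanned over 0..BOUND only

def Lambda(i):
    """nu in N^n with nu_i = 0 and prod_k q_{kj}^{nu_k} = q_{ij} for j != i,
    i.e. sum_k nu_k E[k][j] = E[i][j] on the exponents."""
    return [nu for nu in product(range(BOUND + 1), repeat=n)
            if nu[i] == 0
            and all(sum(nu[k] * E[k][j] for k in range(n)) == E[i][j]
                    for j in range(n) if j != i)]

for i in range(n):
    print(i, Lambda(i))   # all empty here, so E = 0 for these parameters
```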
**Lemma 3.3**.: _Suppose that \(\operatorname{char}(\mathbb{F})\neq 2\) and \(n\geq 3\). Let \(\mathfrak{q}=(q_{ij})\) be a multiplicatively antisymmetric matrix. Then no two rows of \(\mathfrak{q}\) are equal if \(E=0\)._
Proof.: Suppose that the \(i\)-th and the \(j\)-th rows of \(\mathfrak{q}\) are equal, that is, \(q_{ir}=q_{jr}\) for all \(r\in\{1,2,\cdots,n\}\). Let \(k\neq i\) and let \(\nu:=(0,\cdots,1_{j},\cdots,0)\) be the \(j\)-th standard basis vector. We claim that \(\nu\in\Lambda_{i}\). Indeed, from the definition of \(\Lambda_{i}\) we must have
\[\prod_{p}q_{pk}^{\nu_{p}}=q_{ik}.\]
But this holds since \(q_{ik}=q_{jk}\) for all \(k\neq i\). Thus \(E\neq 0\), a contradiction.
**Theorem 3.4**.: ([5, Proposition 1.4.3]) _Suppose that \(\operatorname{char}(\mathbb{F})=0\) and \(n\geq 3\). Let \(\mathfrak{q}=(q_{ij})\) be a multiplicatively antisymmetric matrix such that for all \(i\) there exist \(j\) such that \(q_{ij}\neq 1\). If \(E=0\) then_
\[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=\operatorname{Aut }_{L}(\mathcal{O}_{\mathfrak{q}})\]
The above theorem is a rectification of the proposition of [5, Section 1.4.3]. There is a small error in the proof of that proposition: it uses the claim that \(E=0\) implies that for every \(i\) there exists \(j\) such that \(q_{ij}\neq 1\). This is not true in general: for example, consider \(n=3\) and \(q_{12}=q_{13}=1\), \(q_{23}=q\) where \(q\) is not a root of unity. Then clearly \(E=0\), but there is an automorphism \(\phi_{b}\) for each \(b\in\mathbb{F}^{\times}\) defined by
\[\phi_{b}(X_{i})=\begin{cases}X_{i}&\text{if }i\neq 1\\ X_{i}+b,&\text{if }i=1\end{cases}\]
Clearly \(\phi_{b}\) is not a linear automorphism.
**Theorem 1**.: _Suppose that \(\operatorname{char}(\mathbb{F})=0\) and \(n\geq 3\). Let \(\mathfrak{q}=(q_{ij})\) be a multiplicatively antisymmetric matrix such that at most one entry \(q_{ij}\) with \(i<j\) equals \(1\). Then_
1. _if no entry \(q_{ij}\) with \(i<j\) equals \(1\), then the automorphism group_ \[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=(\mathbb{F}^{*})^ {n}\] _if and only if the following conditions hold_ 1. \(E=0\) _and_ 2. _there does not exist any non-identity permutation matrix_ \(\mathfrak{m}_{\sigma}\) _satisfying_ (12) \[\mathfrak{q}\mathfrak{m}_{\sigma}=\mathfrak{m}_{\sigma}\mathfrak{q}.\]
2. _if exactly one entry, say_ \(q_{i^{\prime}j^{\prime}}\) _with_ \(i^{\prime}<j^{\prime}\)_, equals_ \(1\)_, then the automorphism group_ \[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=\left(\mathbb{F}^{ *}\right)^{n}\] _if and only if the following conditions hold_ 1. \(E=0\) _and_ 2. _there does not exist any permutation_ \(\pi_{1}\) _on the set_ \(\{i^{\prime},j^{\prime}\}\) _and non-identity permutation_ \(\pi_{2}\) _on the set_ \(\mathcal{J}=\{1,2,\cdots,n\}\setminus\{i^{\prime},j^{\prime}\}\) _satisfying_ (13) \[q_{i^{\prime}r}=q_{\pi_{1}(i^{\prime})\pi_{2}(r)},\quad q_{j^{\prime}r}=q_{\pi_ {1}(j^{\prime})\pi_{2}(r)},\qquad\forall r\in\mathcal{J}\] _such that_ \(\mathfrak{q}^{\prime}\mathfrak{m}_{\pi_{2}}=\mathfrak{m}_{\pi_{2}}\mathfrak{q} ^{\prime}\) _where_ \(\mathfrak{q}^{\prime}\) _is the multiplicatively anti-symmetric matrix obtained from_ \(\mathfrak{q}\) _by removing the_ \(i^{\prime},j^{\prime}\) _rows and columns simultaneously._
Proof.: (1) We first show the necessity. By definition, if \(E\neq 0\) then there exists an \(i\) such that \(\Lambda_{i}\neq\emptyset\). Let \(\nu=(\nu_{1},\cdots,\nu_{n})\in\Lambda_{i}\) with \(\nu\neq 0\). It is easy to check that the automorphism \(\exp(D_{i\nu})\) is not a member of \((\mathbb{F}^{*})^{n}\), where \(D_{i\nu}\) is the locally nilpotent derivation defined by \(D_{i\nu}(X_{j})=\delta_{ij}X^{\nu}\). If a non-identity permutation \(\sigma\) satisfying (12) exists, then by Proposition 3.2 the map \(X_{i}\to X_{\sigma(i)}\) is a nontoric automorphism. Conversely, suppose the conditions in (1) hold. Now
\[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=\operatorname{Aut }_{L}(\mathcal{O}_{\mathfrak{q}})\]
by Theorem 3.4. Also, by the Observation above, any linear automorphism must be of the form \(X_{i}\to k_{i}X_{\sigma(i)}\) for some permutation \(\sigma\), where \(k_{i}\in\mathbb{F}^{\times}\). By Proposition 3.2, \(\sigma\) must be the identity permutation.
(2) The proof of the necessity of the condition \(E=0\) is the same as in part (1). Again, condition \((b)\) is necessary, as otherwise the map \(\phi\) defined by
\[\phi(X_{i})=\begin{cases}X_{\pi_{1}(i)}&\text{if }i\in\{i^{\prime},j^{\prime}\}\\ X_{\pi_{2}(i)},&\text{if }i\in\mathcal{J}\end{cases}\]
is a nontoric automorphism of \(\mathcal{O}_{\mathfrak{q}}\) by Proposition 3.2. Conversely, suppose that conditions 2(a) and 2(b) hold.
As seen above in this case, we have
\[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=\operatorname{Aut} _{L}(\mathcal{O}_{\mathfrak{q}}).\]
Suppose that
\[A=(\alpha_{ij})\in\operatorname{GL}(n,\mathbb{F})\]
induces a linear automorphism \(\alpha\) of the given quantum affine space. Using (9) it is easily seen that any column of \(A\) has at most two nonzero entries and that, when there are two, they must lie in the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows.
We claim that there can be at most two columns in \(A\) that have two nonzero entries. Indeed suppose that \(2+s\) columns have two nonzero entries necessarily in the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows. As just noted the remaining \(n-2-s\) columns each has exactly one non-zero entry. Clearly, if \(s>0\) these \(n-2-s\) nonzero entries in \(A\) cannot fulfill the requirement of a non-zero entry in each of the \(n-2\) rows other than the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows. This shows that \(s=0\).
Next we note that at least one of the entries \(\alpha_{i^{\prime}i^{\prime}}\) and \(\alpha_{j^{\prime}i^{\prime}}\) is non-zero. To see this, pick \(m<p\) in the range \(1,\cdots,n\). By (8), noting that \(q_{i^{\prime}j^{\prime}}=1\), we have
\[\alpha_{i^{\prime}m}\alpha_{j^{\prime}p}(q_{mp}-1)=\alpha_{i^{\prime}p}\alpha _{j^{\prime}m}(q_{mp}-1). \tag{14}\]
If \((m,p)\neq(i^{\prime},j^{\prime})\), then by the hypothesis of part (2) we have \(q_{mp}\neq 1\), and (14) means that the minor of \(A\) corresponding to the \(2\times 2\) submatrix formed by the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows and the \(m\)-th and \(p\)-th columns is equal to zero. If \(\alpha_{i^{\prime}i^{\prime}}=\alpha_{j^{\prime}i^{\prime}}=0\) then the minor corresponding to the \(2\times 2\) submatrix \(K\) defined by the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows and the \(i^{\prime}\)-th and \(j^{\prime}\)-th columns is also equal to zero. This would mean that a row of the exterior square \(\wedge^{2}A\) of \(A\) is zero, contradicting the assumption that \(A\) is non-singular.
By the same token at least one of the entries \(\alpha_{i^{\prime}j^{\prime}}\) and \(\alpha_{j^{\prime}j^{\prime}}\) in column \(j^{\prime}\) is nonzero. Moreover, the non-zero entries in the two columns \(i^{\prime}\) and \(j^{\prime}\) cannot lie in only one of the rows \(i^{\prime}\) or \(j^{\prime}\), as in that case the determinant of \(K\) would be zero.
We now claim that if a column of \(A\) has two non-zero entries then it must be the \(i^{\prime}\)-th or the \(j^{\prime}\)-th column. Indeed let \(h\) be a column of \(A\) having two non-zero entries. As noted above these non-zero entries of \(h\) must be in rows \(i^{\prime}\) and \(j^{\prime}\). Thus the three columns \(i^{\prime}\), \(j^{\prime}\) and \(h\) have non-zero entries only in rows \(i^{\prime}\) and \(j^{\prime}\). Consequently there can be at most \(n-3\) nonzero entries in columns other than \(i^{\prime},j^{\prime}\) and \(h\) that are contained in the \(n-2\) rows other than \(i^{\prime}\) and \(j^{\prime}\). But this means there is a row with no non-zero entry contradicting the assumption that \(A\) is non-singular.
It follows that \(\alpha(X_{i^{\prime}})\) and \(\alpha(X_{j^{\prime}})\) both lie in the \(\mathbb{F}\)-subspace spanned by \(X_{i^{\prime}}\) and \(X_{j^{\prime}}\), while \(\alpha(X_{r})=k_{r}X_{\pi_{2}(r)}\) for all \(r\in\mathcal{J}\), where \(k_{r}\in\mathbb{F}^{\times}\) and \(\pi_{2}\) is a permutation on the set \(\mathcal{J}\). We claim that either \(\alpha(X_{i^{\prime}})\in\mathbb{F}^{\times}X_{i^{\prime}}\) or \(\alpha(X_{i^{\prime}})\in\mathbb{F}^{\times}X_{j^{\prime}}\). Indeed let
\[\alpha(X_{i^{\prime}})=aX_{i^{\prime}}+bX_{j^{\prime}}\]
where \(a,b\in\mathbb{F}^{\times}\). Applying \(\alpha\) to the relations
\[X_{i^{\prime}}X_{r}=q_{i^{\prime}r}X_{r}X_{i^{\prime}},\qquad\forall r\in \mathcal{J}\]
we obtain
\[(aX_{i^{\prime}}+bX_{j^{\prime}})X_{\pi_{2}(r)}=q_{i^{\prime}r}X_{\pi_{2}(r)} (aX_{i^{\prime}}+bX_{j^{\prime}})\]
Comparing both sides we have
\[q_{i^{\prime}\pi_{2}(r)}=q_{i^{\prime}r}=q_{j^{\prime}\pi_{2}(r)},\qquad \forall r\in\mathcal{J}.\]
Moreover the hypothesis \(q_{i^{\prime}j^{\prime}}=1\) means that \(1=q_{i^{\prime}i^{\prime}}=q_{j^{\prime}i^{\prime}}\) and \(1=q_{i^{\prime}j^{\prime}}=q_{j^{\prime}j^{\prime}}\). It follows that the \(i^{\prime}\)-th and \(j^{\prime}\)-th rows of the matrix \(\mathfrak{q}\) coincide, which is contrary to Lemma 3.3. Similarly \(\alpha(X_{j^{\prime}})\in\mathbb{F}^{\times}X_{i^{\prime}}\) or \(\alpha(X_{j^{\prime}})\in\mathbb{F}^{\times}X_{j^{\prime}}\). It follows that \(\alpha(X_{i^{\prime}})\in\mathbb{F}^{\times}X_{\pi_{1}(i^{\prime})}\) and \(\alpha(X_{j^{\prime}})\in\mathbb{F}^{\times}X_{\pi_{1}(j^{\prime})}\) for some permutation \(\pi_{1}\) on the set \(\{1,2\}\). Now we claim that \(\pi_{2}\) is the identity. If not, from the relations \(X_{r}X_{s}=q_{rs}X_{s}X_{r}\) where \(r,s\in\mathcal{J}\) we have \(\mathfrak{m}_{\pi_{2}}\mathfrak{q}^{\prime}=\mathfrak{q}^{\prime}\mathfrak{m}_{\pi_{2}}\).
Applying \(\alpha\) to the relations
\[X_{i^{\prime}}X_{r}=q_{i^{\prime}r}X_{r}X_{i^{\prime}},\qquad\forall r\in \mathcal{J}\]
and
\[X_{j^{\prime}}X_{r}=q_{j^{\prime}r}X_{r}X_{j^{\prime}},\qquad\forall r\in \mathcal{J}\]
we have \(q_{i^{\prime}r}=q_{\pi_{1}(i^{\prime})\pi_{2}(r)}\), \(q_{j^{\prime}r}=q_{\pi_{1}(j^{\prime})\pi_{2}(r)}\) respectively \(\forall r\in\mathcal{J}\). But this contradicts the theorem hypothesis. Thus \(\pi_{2}\) is the identity permutation.
If \(\alpha(X_{i^{\prime}})\in\mathbb{F}^{\times}X_{j^{\prime}}\) and \(\alpha(X_{j^{\prime}})\in\mathbb{F}^{\times}X_{i^{\prime}}\) then applying \(\alpha\) to the relation
\[X_{i^{\prime}}X_{r}=q_{i^{\prime}r}X_{r}X_{i^{\prime}},\qquad\forall r\in \mathcal{J}\]
we have \(q_{i^{\prime}r}=q_{j^{\prime}r}\)\(\forall r\in\mathcal{J}\), contradicting the hypothesis \(E=0\) in view of Lemma 3.3.
## 4. Proof of Theorem 2
The following fact shown in [11] reduces the question of automorphisms to the case where the group \(\Lambda\) (Definition 1.1) is torsion-free.
**Lemma 4.1** ([11]).: _Let \(p\) denote the size of the torsion subgroup of \(\Lambda\). The subalgebra \(\widehat{\mathcal{O}}^{\prime}\) of \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) generated by the powers \(X_{i}^{\pm p}\) of the indeterminates \(X_{i}\) is a characteristic sub-algebra of the same rank. Moreover \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) is free left \(\widehat{\mathcal{O}}^{\prime}\)-module of finite rank and the corresponding \(\lambda\)-group \(\Lambda^{\prime}\) associated with \(\widehat{\mathcal{O}}^{\prime}\) is torsion free._
We may thus assume that \(\Lambda\cong\mathbb{Z}^{l}\) for some natural number \(l\). Fixing a \(\mathbb{Z}\)-basis \(p_{1},\cdots,p_{l}\) in \(\Lambda\) we have
\[\lambda(\gamma,\gamma^{\prime})=p_{1}^{e_{1}(\gamma,\gamma^{\prime})}p_{2}^{e _{2}(\gamma,\gamma^{\prime})}\cdots p_{l}^{e_{l}(\gamma,\gamma^{\prime})}, \qquad\gamma,\gamma^{\prime}\in\Gamma. \tag{15}\]
**Notation 1**.: In view of (15) let
\[\lambda(\mathbf{e}_{i},\mathbf{e}_{j})=p_{1}^{m_{(ij),1}}\cdots p_{l}^{m_{(ij),l}},\qquad 1\leq i<j\leq n, \tag{16}\]
where \(\mathbf{e}_{i},\mathbf{e}_{j}\) are standard basis vectors of the free \(\mathbb{Z}\)-module \(\Gamma\). On the \(\binom{n}{2}\) pairs \((ij),\ (i<j)\) we assume the lexicographic order. Let \(\mathsf{M}\in\operatorname{Mat}_{\binom{n}{2}\times l}(\mathbb{Z})\) be the matrix whose \(((ij),s)\) entry is the exponent \(m_{(ij),s}\) of \(p_{s}\) in (16) \((s=1,\cdots,l)\).
We recall that for a given matrix \(A\in\operatorname{GL}(n,\mathbb{Z})\) the exterior square \(\wedge^{2}A\) of \(A\) is the \(\binom{n}{2}\times\binom{n}{2}\)-matrix whose rows and columns are indexed by the pairs \((ij)\)\((1\leq i<j\leq n)\) ordered lexicographically and whose \(((ij),(kl))\) entry is the \(2\times 2\)-minor corresponding to rows \(i,j\) and columns \(k,l\). With \(\mathsf{M}\) as defined above we have the following.
**Proposition A**.: _Set \(N=\binom{n}{2}\) and let \(\mathsf{M}\) be as defined in Notation 1 above. Then_
\[\operatorname{Aut}(\mathbb{Z}^{n},\lambda)=\bigl{(}\operatorname{Stab}_{ \operatorname{GL}(n,\mathbb{Z})}(\mathsf{M})\bigr{)}^{t}\]
_where \(t\) denotes transposition and \(\operatorname{Stab}_{\operatorname{GL}(n,\mathbb{Z})}(\mathsf{M})\) the stabilizer of \(\mathsf{M}\) in \(\operatorname{GL}(n,\mathbb{Z})\) with respect to the bivector representation_
\[\bigwedge^{2}:\operatorname{GL}(n,\mathbb{Z})\to\operatorname{GL}(N,\mathbb{Z }),\qquad\ A\to\wedge^{2}A\]
_of \(\operatorname{GL}(n,\mathbb{Z})\), that is,_
\[\operatorname{Stab}_{\operatorname{GL}(n,\mathbb{Z})}(\mathsf{M})=\{A\in \operatorname{GL}(n,\mathbb{Z})\mid(\wedge^{2}A)\mathsf{M}=\mathsf{M}\}.\]
Proof.: Writing the group \(\Lambda\leq\mathbb{F}^{*}\) additively, in view of Notation 1 we have
\[\lambda(\mathbf{e}_{i},\mathbf{e}_{j})=\sum_{s=1}^{l}m_{(ij),s}p_{s},\qquad \quad\forall 1\leq i<j\leq n, \tag{17}\]
where \(m_{(ij),s}\in\mathbb{Z}\). Now let
\[A=(a_{ij})\in\operatorname{GL}(n,\mathbb{Z})\]
be such that \(A^{t}\in\operatorname{Aut}(\mathbb{Z}^{n},\lambda)\). Setting
\[\mathbf{e}^{\prime}_{j}=A^{t}\mathbf{e}_{j}=\sum_{t=1}^{n}a_{jt}\mathbf{e}_{t}\]
we note that, since \(\lambda\) is an alternating function, \(\lambda(\mathbf{e}^{\prime}_{i},\mathbf{e}^{\prime}_{j})\) may be expressed as follows:
\[\lambda(\mathbf{e}^{\prime}_{i},\mathbf{e}^{\prime}_{j})=\sum_{(uv)}a_{(ij),(uv)} \lambda(\mathbf{e}_{u},\mathbf{e}_{v}), \tag{18}\]
where the coefficients appearing in the RHS of the above expression constitute row \((ij)\) of the matrix \(\wedge^{2}A\). Since \(A^{t}\) is \(\lambda\)-preserving, by (4) we have
\[\lambda(\mathbf{e}^{\prime}_{i},\mathbf{e}^{\prime}_{j})=\lambda(\mathbf{e}_{i},\mathbf{e}_{j})\qquad\forall 1\leq i<j\leq n.\]
Expanding and comparing the coefficients of \(p_{s}\) (\(s=1,\cdots,l\)) in both sides of the last equation we get
\[\sum_{uv}a_{(ij),(uv)}m_{(uv),s}=m_{(ij),s}\qquad\forall s=1,\cdots,l. \tag{19}\]
It follows that
\[\wedge^{2}(A)\mathsf{M}=\mathsf{M}.\]
Clearly the above reasoning is reversible. This establishes the assertion of the proposition.
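The stabiliser condition in Proposition A is easy to experiment with numerically. The following sketch is our own illustration (not part of the source, and the matrices are hypothetical): it builds \(\wedge^{2}A\) from its defining \(2\times 2\) minors and tests whether \((\wedge^{2}A)\mathsf{M}=\mathsf{M}\).

```python
# Illustration only: build the exterior square of an integer matrix and
# test the stabiliser condition of Proposition A, (wedge^2 A) M = M.
import numpy as np
from itertools import combinations

def exterior_square(A):
    """Exterior square: rows/columns indexed by lexicographically ordered
    pairs (i<j); entry ((ij),(kl)) is the 2x2 minor of A on rows i,j, cols k,l."""
    n = A.shape[0]
    pairs = list(combinations(range(n), 2))
    W = np.zeros((len(pairs), len(pairs)), dtype=A.dtype)
    for r, (i, j) in enumerate(pairs):
        for c, (k, l) in enumerate(pairs):
            W[r, c] = A[i, k] * A[j, l] - A[i, l] * A[j, k]
    return W

# Hypothetical data for n = 3: A swaps X1 and X2; the exponent matrix M
# (rows indexed (12), (13), (23)) encodes q12 = 1 and q13 = q23 = p^2.
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
M = np.array([[0], [2], [2]])
print(np.array_equal(exterior_square(A) @ M, M))   # True: A^t preserves lambda
```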
**Notation 2**.: As before, let \(\{\mathbf{e}_{1},\cdots,\mathbf{e}_{n}\}\) denote the standard basis in \(\Gamma=\mathbb{Z}^{n}\). Then \(\{\mathbf{e}_{i}\wedge\mathbf{e}_{j}\mid 1\leq i<j\leq n\}\) is a basis in \(\wedge^{2}\Gamma\). As usual, for a permutation \(\pi\in S_{n}\) let \(P\in\operatorname{GL}(n,\mathbb{Z})\) denote the corresponding permutation matrix. Clearly \(S_{n}\) acts on the \(\mathbb{Z}\)-module \(\wedge^{2}\Gamma\) via
\[\pi(\mathbf{e}_{i}\wedge\mathbf{e}_{j})=\wedge^{2}P(\mathbf{e}_{i}\wedge \mathbf{e}_{j})=\mathbf{e}_{\pi(i)}\wedge\mathbf{e}_{\pi(j)},\qquad\forall\pi \in S_{n}. \tag{20}\]
By restriction we get an action of \(S_{n}\) on the subset
\[\bar{B}=\{\epsilon\,\mathbf{e}_{i}\wedge\mathbf{e}_{j}\mid i<j\text{ and }\epsilon\in\{-1,1\}\}.\]
Restricting the above action of \(S_{n}\) to \(C_{\pi}:=\langle\pi\rangle\) we consider the \(C_{\pi}\)-orbits.
**Definition 4.1**.: We define an _orbit-sum_\(\mathscr{O}_{ij}\) (\(1\leq i\neq j\leq n\)) as
\[\mathscr{O}_{ij}=\wedge^{2}P(\mathbf{e}_{i}\wedge\mathbf{e}_{j})+(\wedge^{2}P )^{2}(\mathbf{e}_{i}\wedge\mathbf{e}_{j})+\cdots+(\wedge^{2}P)^{m}(\mathbf{e}_ {i}\wedge\mathbf{e}_{j}),\]
where \(m\) stands for the order of \(\wedge^{2}P\).
**Lemma 4.2**.: _Let \(\pi\in S_{n}\) (\(n\geq 3\)) be a non-identity permutation and let \(P\) denote the corresponding permutation matrix. Let \(\operatorname{Fix}(\wedge^{2}P)\) stand for the sub-module of \(\wedge^{2}\Gamma=\mathbb{Z}^{N}\) (\(N=\binom{n}{2}\)) left fixed by the \(\mathbb{Z}\)-linear map \(\wedge^{2}P\in\operatorname{GL}(N,\mathbb{Z})\). Then_
\[\operatorname{rk}(\operatorname{Fix}(\wedge^{2}P))<\binom{n-1}{2}+1.\]
Proof.: We begin by observing that the submodule \(\operatorname{Fix}(\wedge^{2}P)\) is generated by the orbit-sums \(\mathscr{O}_{ij}\) of Definition 4.1 where we note that either \(\mathscr{O}_{ij}=-\mathscr{O}_{ji}\) or else \(\mathscr{O}_{ij}=\mathscr{O}_{ji}=0\). In other words if \(w\in\operatorname{Fix}(\wedge^{2}P)\) then
\[w=\sum_{ij}\gamma_{ij}\mathscr{O}_{ij},\qquad\gamma_{ij}\in\mathbb{Z}.\]
Denoting by \(\mathcal{N}_{\pi}\) the number of \(C_{\pi}\)-orbits it follows from the above that \(\operatorname{Fix}(\wedge^{2}P)\) is generated by at most \(\lfloor\frac{\mathcal{N}_{\pi}}{2}\rfloor\) orbit sums \(\mathscr{O}_{ij}\) and thus
\[\operatorname{rk}(\operatorname{Fix}(\wedge^{2}P))\leq\frac{\mathcal{N}_{\pi}}{2}. \tag{21}\]
Evidently for any non-identity permutation \(\pi\in S_{n}\) the number of fixed points \(\operatorname{Fix}(\pi)\) is bounded above by \(n-2\) and this bound is attained only by a transposition \((ij)\). Using this fact and the Burnside formula an upper bound for \(\mathcal{N}_{\pi}\) was obtained in [1] as follows
\[\mathcal{N}_{\pi} =\frac{1}{|C_{\pi}|}\Bigg{(}n(n-1)+\sum_{\phi\in C_{\pi},\phi\neq 1 }2\binom{\operatorname{Fix}(\phi)}{2}\Bigg{)}\leq\frac{n(n-1)}{|C_{\pi}|}+\frac{ (|C_{\pi}|-1)}{|C_{\pi}|}2\binom{n-2}{2}\] \[=(n-2)(n-3)+\frac{4n-6}{|C_{\pi}|}\leq(n-2)(n-3)+\frac{4n-6}{2}= n^{2}-3n+3.\]
Noting (21) we thus obtain
\[\operatorname{rk}(\operatorname{Fix}(\wedge^{2}P))\leq\frac{\mathcal{N}_{\pi}}{2}\leq\frac{n^{2}-3n+3}{2}<\frac{n^{2}-3n+4}{2}=\binom{n-1}{2}+1.\]
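Lemma 4.2 can be checked numerically for small \(n\). The sketch below is our illustration only (the permutations are arbitrary examples): it computes \(\operatorname{rk}(\operatorname{Fix}(\wedge^{2}P))\) as the nullity of \(\wedge^{2}P-I\) over \(\mathbb{Q}\) and compares it with the bound \(\binom{n-1}{2}+1\).

```python
# Illustration only: rank of the fixed submodule of the exterior square of P
# for a few non-identity permutations of S_5, compared with the Lemma 4.2 bound.
import numpy as np
from itertools import combinations
from math import comb

def exterior_square(A):
    n = A.shape[0]
    pairs = list(combinations(range(n), 2))
    W = np.zeros((len(pairs), len(pairs)))
    for r, (i, j) in enumerate(pairs):
        for c, (k, l) in enumerate(pairs):
            W[r, c] = A[i, k] * A[j, l] - A[i, l] * A[j, k]
    return W

def perm_matrix(perm):
    P = np.zeros((len(perm), len(perm)))
    P[np.asarray(perm), np.arange(len(perm))] = 1   # P e_j = e_perm(j)
    return P

n, N = 5, comb(5, 2)
for perm in ([1, 0, 2, 3, 4], [1, 2, 0, 3, 4], [1, 0, 3, 2, 4]):
    W = exterior_square(perm_matrix(perm))
    fix_rank = N - np.linalg.matrix_rank(W - np.eye(N))   # nullity of (wedge^2 P - I)
    print(perm, fix_rank, "<", comb(n - 1, 2) + 1)        # e.g. 6 < 7, 4 < 7, 4 < 7
```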
**Lemma 4.3**.: _For a quantum torus \(\widehat{\mathcal{O}}_{\mathsf{q}}\) suppose that the \(\lambda\)-group \(\Lambda\) is torsion-free with rank equal to \(l\). Let \(\mathsf{M}\) be the matrix defined in Notation 1 (with respect to some choice of a basis in the \(\lambda\)-group). The \(l\) columns of \(\mathsf{M}\) generate a free submodule of \(\wedge^{2}\Gamma=\mathbb{Z}^{\binom{n}{2}}\) of rank \(l\)._
Proof.: We recall that for a matrix over a commutative ring \(R\) the _row rank_ is defined as the maximum number of linearly independent rows of the matrix and _column rank_ has a parallel definition. If \(R\) is a domain then these two ranks coincide (e.g., [15, Chapter 4, Corollary 2.29]). The choice of a basis \(\{p_{1},\cdots,p_{l}\}\) in \(\Lambda\) induces an isomorphism \(\Lambda\cong\mathbb{Z}^{l}\). Since
\[\{\lambda(\mathbf{e}_{i},\mathbf{e}_{j})\mid 1\leq i<j\leq n\}\]
is a generating set for the \(\mathbb{Z}\)-module \(\Lambda\), the rows of \(\mathsf{M}\) generate \(\Gamma_{l}:=\mathbb{Z}^{l}\). Let \(s\leq l\) be the row-rank of \(\mathsf{M}\) and let \(U\) be the free submodule spanned by some \(s\) linearly independent rows. We note that \(\Gamma_{l}/U\) is a torsion \(\mathbb{Z}\)-module and hence finite implying that
\[l=\operatorname{rk}(\Gamma_{l})=\operatorname{rk}(U)=s.\]
The assertion in the lemma is now immediate.
**Lemma 4.4**.: _A quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) whose \(\lambda\)-group is torsion-free and has rank at least \(\binom{n-1}{2}+1\) has center equal to \(\mathbb{F}\)._
Proof.: To this end we recall (e.g., [12]) that a quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) has the structure of a twisted group algebra \(\mathbb{F}*\Gamma\) for \(\Gamma=\mathbb{Z}^{n}\). By [11, Lemma 1.1(i)] the center of such an algebra is itself a twisted group algebra \(\mathbb{F}*Z\) for a subgroup \(Z\leq\Gamma\).
We claim that \(\Gamma/Z\) is torsion-free. Indeed let \(\gamma\in\Gamma\) be such that \(k\gamma\in Z\) for some \(k\in\mathbb{N}\). In view of (2)
\[\lambda(k\gamma,\gamma^{\prime})=\lambda(\gamma,\gamma^{\prime})^{k}=1\qquad \forall\gamma^{\prime}\in\Gamma.\]
But as the group \(\Lambda\) is torsion-free (by hypothesis), \(\lambda(\gamma,\gamma^{\prime})=1\). Thus \(\gamma\in Z\) and it follows that \(\Gamma/Z\) is torsion-free. Let \(\rho:=\operatorname{rk}(\Gamma/Z)\). As the map \(\lambda\) of (2) is constant on the cosets of \(Z\) in \(\Gamma\) it induces an alternating bicharacter \(\bar{\lambda}\) on \(\Gamma/Z\) such that
\[\bar{\lambda}(\gamma+Z,\gamma^{\prime}+Z)=\lambda(\gamma,\gamma^{\prime}), \qquad\forall\gamma,\gamma^{\prime}\in\Gamma.\]
As the group \(\Lambda\) is generated by the
\[q_{ij}=\lambda(\mathbf{e}_{i},\mathbf{e}_{j})=\bar{\lambda}(\mathbf{e}_{i}+Z, \mathbf{e}_{j}+Z),\]
letting \(u_{1}+Z,\cdots,u_{\rho}+Z\) be a basis of \(\Gamma/Z\) we must have
\[\Lambda\leq\langle\bar{\lambda}(u_{r}+Z,u_{s}+Z)\mid 1\leq r<s\leq\rho\rangle=: \Lambda_{1}.\]
Clearly the group \(\Lambda_{1}\) has rank at most \(\binom{\rho}{2}=\binom{n-\operatorname{rk}(Z)}{2}\). If \(Z\) is nontrivial we thus obtain \(\operatorname{rk}(\Lambda)\leq\binom{n-1}{2}\). But \(\operatorname{rk}(\Lambda)\geq\binom{n-1}{2}+1\) by the hypothesis. It follows that \(\operatorname{rk}(Z)=0\) and consequently the center of \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) must be \(\mathbb{F}\).
**Proposition 4.5**.: _Let \(\mathfrak{q}\) be a multiplicatively antisymmetric matrix such that \(q_{kj}=1\)\((j=1,\cdots,n)\). Then \(\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\) contains the group \((\mathbb{F}^{*})^{n}\rtimes\mathbb{F}^{+}\)._
Proof.: Let \(b\in\mathbb{F}\). We define a map \(\phi_{b}:\mathcal{O}_{\mathfrak{q}}\to\mathcal{O}_{\mathfrak{q}}\) via
\[\phi_{b}(X_{i})=\begin{cases}X_{i}&\text{if }i\neq k\\ X_{i}+b,&\text{if }i=k\end{cases}\]
It is easily checked that \(\phi_{b}\in\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\) and that \(b\mapsto\phi_{b}\) is an embedding of \(\mathbb{F}^{+}\) into \(\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\). If \(\tau\in\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\) defined by \(\tau(X_{i})=t_{i}X_{i}\) is a scalar automorphism in \((\mathbb{F}^{*})^{n}\) then it is easy to check that
\[\tau^{-1}\phi_{b}\tau=\phi_{t_{k}b}.\]
In other words the image of \(\mathbb{F}^{+}\) is normalized by the subgroup \((\mathbb{F}^{*})^{n}\) of scalar automorphisms. The assertion of the proposition now follows.
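For the reader's convenience we spell out the conjugation identity just used (a routine check that is only implicit above): on the generator \(X_{k}\),
\[(\tau^{-1}\phi_{b}\tau)(X_{k})=\tau^{-1}\phi_{b}(t_{k}X_{k})=\tau^{-1}(t_{k}X_{k}+t_{k}b)=X_{k}+t_{k}b=\phi_{t_{k}b}(X_{k}),\]
while for \(i\neq k\) both \(\tau^{-1}\phi_{b}\tau\) and \(\phi_{t_{k}b}\) fix \(X_{i}\), since \(\phi_{b}\) does.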
**Theorem 2**.: _A quantum affine space \(\mathcal{O}_{\mathfrak{q}}=\mathcal{O}_{\mathfrak{q}}(\mathbb{F}^{n})\)\((n\geq 3)\) whose \(\lambda\)-group is torsion-free and has rank no smaller than \(\binom{n-1}{2}+1\) satisfies_
\[\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=(\mathbb{F}^{*})^{n}.\]
_Moreover, for each \(n\geq 3\) and each \(r<\binom{n-1}{2}+1\) there exists an algebra \(\mathcal{O}_{\mathfrak{q}}\) with \(\lambda\)-group equal to \(\mathbb{Z}^{r}\) but whose automorphism group embeds \((\mathbb{F}^{*})^{n}\rtimes\mathbb{F}^{+}\)._
Proof.: By Lemma 4.4 the corresponding quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) has center equal to \(\mathbb{F}\). Hence by [11, Proposition 1.5] each \(\mathbb{F}\)-automorphism \(\sigma\) of \(\mathcal{O}_{\mathfrak{q}}\) lifts to an \(\mathbb{F}\)-automorphism \(\hat{\sigma}\) of \(\widehat{\mathcal{O}}_{\mathfrak{q}}\). Consequently there is a permutation \(\pi\in S_{n}\) such that
\[\hat{\sigma}(X_{i})=\sigma(X_{i})=a_{i}X_{\pi(i)},\quad i\in\{1,\cdots,n\},a_{ i}\in\mathbb{F}^{*}.\]
Clearly, the image of \(\hat{\sigma}\) in \(\operatorname{Aut}(\mathbb{Z}^{n},\lambda)\) under the map in (3) is the permutation matrix \(P\) corresponding to a permutation \(\pi\in S_{n}\). Since \(P^{t}=P^{-1}\) hence \(P^{t}\in\operatorname{Aut}(\mathbb{Z}^{n},\lambda)\) as well. Proposition A now tells us that \(P\in\operatorname{Stab}_{\operatorname{GL}(n,\mathbb{Z})}(\mathsf{M})\) where \(\mathsf{M}\) is a relations matrix for \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) as defined in Notation 1. In other words \((\wedge^{2}P)\mathsf{M}=\mathsf{M}\). Thus \(\wedge^{2}P\) fixes each column of \(\mathsf{M}\) and since \(\wedge^{2}P\) is \(\mathbb{Z}\)-linear it fixes the submodule \(W\) of \(\wedge^{2}\Gamma\) spanned by the columns of \(\mathsf{M}\). By Lemma 4.3
\[\operatorname{rk}(W)\geq\binom{n-1}{2}+1.\]
If \(\pi\) is a non-identity permutation this contradicts Lemma 4.2. The first part of the theorem now follows. For the second part let \(r\in\{1,\cdots,\binom{n-1}{2}\}\). Clearly in this case we can find a (multiplicatively antisymmetric) matrix \(\mathfrak{q}\) such that \(q_{1j}=1\) (\(j=2,\cdots,n\)) and the subgroup \(\langle q_{ij}\mid i\geq 2,i<j\leq n\rangle\) is torsion free with rank \(r\). Noting Proposition 4.5 we are done.
## 5. \(\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})\) when \(\dim(\widehat{\mathcal{O}}_{\mathfrak{q}})=1\)
**Theorem 5.1**.: _A quantum affine space \(\mathcal{O}_{\mathfrak{q}}\) such that the corresponding quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) has dimension one admits no automorphisms beyond the scalar ones, that is, \(\operatorname{Aut}_{\mathbb{F}}(\mathcal{O}_{\mathfrak{q}})=(\mathbb{F}^{*})^{n}\)._
Proof.: Viewing the corresponding quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) as a twisted group algebra \(\mathbb{F}*\Gamma\) we recall ([11, Lemma 1.1(i)]) that the center of this algebra has the form \(\mathbb{F}*Z\) for a subgroup \(Z\leq\Gamma\). We recall (Section 2.3) that the dimension of the algebra \(\widehat{\mathcal{O}}_{\mathfrak{q}}\) equals the cardinality of a maximal independent system of commuting monomials.
Clearly, by the hypothesis, the subgroup \(B\leq\Gamma\) spanned by such a system must have rank one. This easily implies that \(Z\) is the trivial subgroup and thus \(\mathbb{F}*\Gamma\) has center equal to \(\mathbb{F}\). By the proposition of [12, Section 1.3] such an algebra is simple.
In this case by [11, Proposition 3.2] each \(\mathbb{F}\)-automorphism of \(\mathcal{O}_{\mathfrak{q}}\) is of the form \(X_{i}\mapsto a_{i}X_{\sigma(i)}\) where \(a_{i}\in\mathbb{F}^{*}\) and \(\sigma\) is a permutation of the subscripts \(\{1,\cdots,n\}\). Furthermore, the permutation \(\sigma\) occurs if and only if
\[q_{ij}=q_{\sigma(i)\sigma(j)},\qquad\forall\,1\leq i<j\leq n. \tag{22}\]
We consider the decomposition of \(\sigma\) into disjoint cycles. If there is a transposition \((ij)\) in this decomposition then by (22)
\[q_{ij}=q_{ji}=q_{ij}^{-1}\]
and so \(q_{ij}^{2}=1\). It follows that
\[[X_{i}^{2},X_{j}]=[X_{i},X_{j}]^{2}=q_{ij}^{2}=1.\]
But this means that \(\dim(\widehat{\mathcal{O}}_{\mathfrak{q}})\geq 2\) (by the first paragraph of the proof) and we thus get a contradiction to the hypothesis.
Now let \((i_{1}i_{2}i_{3}\cdots i_{r})\) be an \(r\)-cycle (\(r\geq 3\)) in the decomposition of \(\sigma\). In view of (22) we have \(q_{i_{r-1}i_{r}}=q_{i_{r}i_{1}}\) whence
\[1=q_{i_{r-1}i_{r}}q_{i_{1}i_{r}}=[X_{i_{r-1}}X_{i_{1}},X_{i_{r}}].\]
But again (in view of the first paragraph of the proof) this is a contradiction to the assumption on the dimension of the corresponding quantum torus \(\widehat{\mathcal{O}}_{\mathfrak{q}}\).
For the quantum tori the case of dimension one means that no two independent monomials commute and so this may seem to be a rather restrictive condition. One might think that such a case can only occur if the \(\lambda\)-group has a relatively large rank, possibly even the maximal possible rank \(\frac{1}{2}n(n-1)\).
However an example was given in [12, Section 3.11] of a quantum torus of rank \(4\) that has dimension one and whose \(\lambda\)-group has rank equal to \(5\). Using the results of [7] we will now show that there exist rank \(n\) quantum tori having dimension one and whose \(\lambda\)-group has rank as low as \(n-1\).
**Example 5.1**.: _Let \(\mathbf{q}\) be the matrix_
\[\mathbf{q}=\begin{pmatrix}1&\zeta^{-1}&\mu^{-1}&\nu^{-1}\\ \zeta&1&\nu^{-1}&\mu\\ \mu&\nu&1&\zeta^{-1}\\ \nu&\mu^{-1}&\zeta&1\end{pmatrix}, \tag{23}\]
_where \(\zeta,\mu,\nu\in\mathbb{F}^{*}\) are assumed to be multiplicatively independent. Then the \(\lambda\)-group of the quantum torus \(\widehat{\mathcal{O}}_{\mathbf{q}}\) has rank \(3\) but \(\dim\widehat{\mathcal{O}}_{\mathbf{q}}=1\)._
Proof.: We consider the three alternating forms \(\alpha_{i}\) (\(i=1,\cdots,3\)) defined on the \(\mathbb{Z}\)-module \(\mathbb{Z}^{4}\) given by
\[\alpha_{1}(x,y) =x_{2}y_{1}-x_{1}y_{2}+x_{4}y_{3}-x_{3}y_{4}\] \[\alpha_{2}(x,y) =x_{3}y_{1}-x_{1}y_{3}+x_{2}y_{4}-x_{4}y_{2}\] \[\alpha_{3}(x,y) =x_{4}y_{1}-x_{1}y_{4}+x_{3}y_{2}-x_{2}y_{3}\]
We claim that these three forms have no common isotropic \(\mathbb{Z}\)-submodule of \(\mathbb{Z}^{4}\) of rank greater than one, by which we mean a \(\mathbb{Z}\)-submodule \(B\) with \(\operatorname{rk}(B)\geq 2\) such that the restriction of \(\alpha_{i}\) to \(B\) is trivial for all \(i=1,\cdots,3\).
Indeed we may regard \(\alpha_{i}\) as a form on the \(\mathbb{Q}\)-space \(V:=\mathbb{Q}^{4}=\mathbb{Z}^{4}\otimes_{\mathbb{Z}}\mathbb{Q}\). If \(B\) is an isotropic submodule as in the preceding paragraph then clearly \(B\otimes_{\mathbb{Z}}\mathbb{Q}\) is a \(\mathbb{Q}\)-subspace of \(\mathbb{Q}^{4}\) with dimension at least two that is a common isotropic subspace for the \(\alpha_{i}\) (\(i=1,\cdots,3\)).
Let \(\mathbb{H}\) denote the quaternion algebra over \(\mathbb{Q}\). We know that \(\mathbb{H}\) is a division algebra with the usual \(\mathbb{Q}\)-basis \(\{1,i,j,k\}\). It can be easily checked that the matrix images of \(i\), \(j\) and \(k\) in the regular representation \(\mathbb{H}\to\operatorname{End}_{\mathbb{Q}}\mathbb{H}\) of \(\mathbb{H}\) are the Gram matrices \(M_{i}\) of the forms \(\alpha_{1},\alpha_{2}\) and \(\alpha_{3}\) respectively. Since \(\mathbb{H}\) is a division algebra the nonzero matrices in the \(\mathbb{Q}\)-subspace of \(\operatorname{End}_{\mathbb{Q}}\mathbb{H}\) spanned by the \(M_{i}\) (\(i=1,\cdots,3\)) are non-singular. Then by [7, Corollary 4] the alternating forms \(\alpha_{i}\) (\(i=1,\cdots,3\)) on the \(\mathbb{Q}\)-space \(\mathbb{Q}^{4}\) have no common isotropic subspace of dimension greater than one. Thus there cannot exist a common isotropic submodule \(B\) of rank greater than one for the given alternating forms \(\alpha_{i}\) (\(i=1,\cdots,3\)). Let \(\{\mathbf{e}_{1},\cdots,\mathbf{e}_{4}\}\) be the standard basis elements of \(\mathbb{Z}^{4}\) and set
\[q_{ij}=\zeta^{\alpha_{1}(\mathbf{e}_{i},\mathbf{e}_{j})}\mu^{\alpha_{2}( \mathbf{e}_{i},\mathbf{e}_{j})}\nu^{\alpha_{3}(\mathbf{e}_{i},\mathbf{e}_{j})},\qquad\forall 1\leq i<j\leq 4.\]
Then \(\mathbf{q}=(q_{ij})\) and the commutator map \(\lambda\) (Section 2.1) of the quantum torus \(\widehat{\mathcal{O}}_{\mathbf{q}}\) has the form
\[\lambda(\gamma,\gamma^{\prime})=\zeta^{\alpha_{1}(\gamma,\gamma^{\prime})}\mu ^{\alpha_{2}(\gamma,\gamma^{\prime})}\nu^{\alpha_{3}(\gamma,\gamma^{\prime})},\qquad\forall\gamma,\gamma^{\prime}\in\Gamma.\]
If there exist two independent commuting monomials \(X^{\gamma},X^{\gamma^{\prime}}\in\widehat{\mathcal{O}}_{\mathbf{q}}\) then clearly \(\alpha_{i}(\gamma,\gamma^{\prime})=0\) for all \(i\), which is a contradiction as we have seen that there does not exist a common isotropic submodule \(B\leq\mathbb{Z}^{4}\) with \(\operatorname{rk}(B)\geq 2\) for the three forms \(\alpha_{i}\) (\(i=1,\cdots,3\)). The assertion now follows in view of Section 2.3.
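The determinant computation underlying this argument can be verified symbolically. The following sketch is our illustration only (assuming sympy is available): it writes down the Gram matrices of \(\alpha_{1},\alpha_{2},\alpha_{3}\) explicitly and checks that the determinant of a generic rational combination is the square of the quaternion norm, hence non-zero for any non-trivial combination.

```python
# Illustration only: the Gram matrices of alpha_1, alpha_2, alpha_3 are the
# regular-representation images of the quaternions i, j, k; a generic
# combination a*G1 + b*G2 + c*G3 has determinant (a^2 + b^2 + c^2)^2.
import sympy as sp

G1 = sp.Matrix([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
G2 = sp.Matrix([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])
G3 = sp.Matrix([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])

a, b, c = sp.symbols('a b c')
print(sp.factor((a * G1 + b * G2 + c * G3).det()))   # (a**2 + b**2 + c**2)**2
```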
**Remark 5.1**.: _Similarly using the (non-associative) octonion algebra over \(\mathbb{Q}\) we may define a quantum torus \(\widehat{\mathcal{O}}_{\mathbf{q}}\) of rank \(8\) and dimension \(1\) whose \(\lambda\)-group has rank \(7\)._
Finally we comment on the general case of quantum tori of rank \(n\). It is shown in [8, 7] that it is always possible to find \(n\) alternating forms on the space \(\mathbb{Q}^{n}\) for which there is no common isotropic subspace of dimension greater than one. We may follow an approach similar to that of the preceding example to come up with examples of dimension one quantum tori of rank \(n\) whose \(\lambda\)-group has rank equal to \(n\). This is much smaller than the maximal possible rank \(\frac{1}{2}n(n-1)\).
## Acknowledgements
The first author thanks the National Board of Higher Mathematics (NBHM) for financial support (DAE/MATH/2015/059). The second author gratefully acknowledges support from an NBHM research award.
|
2306.00069 | Unravelling the Dust Attenuation Scaling Relations and their Evolution | We explore the dependence of dust attenuation, as traced by the $\rm
H_{\alpha}/\rm H_{\beta}$ Balmer decrement, on galactic properties by using a
large sample of SDSS spectra. We use both Partial Correlation Coefficients
(PCC) and Random Forest (RF) analysis to distinguish those galactic parameters
that directly and primarily drive dust attenuation in galaxies, from parameters
that are only indirectly correlated through secondary dependencies. We find
that, once galactic inclination is controlled for, dust attenuation depends
primarily on stellar mass, followed by metallicity and velocity dispersion.
Once the dependence on these quantities is taken into account, there is no
dependence on star formation rate. While the dependence on stellar mass and
metallicity was expected based on simple analytical equations for the
interstellar medium, the dependence on velocity dispersion was not predicted
and we discuss possible scenarios to explain it. We identify a projection of
this multi-dimensional parameters space which minimises the dispersion in terms
of the Balmer decrement and which encapsulates the primary and secondary
dependences of the Balmer decrement into a single parameter defined as the
reduced mass $\mu = \log {\rm M}_{\star} +3.67 [{\rm O/H}] + 2.96 \log
(\sigma_v/100~km~s^{-1})$. We show that the dependence of the Balmer decrement
on this single parameter also holds at high redshift, suggesting that the
processes regulating dust production and distribution do not change
significantly through cosmic epochs at least out to z$\sim$2. | Gabriel Maheson, Roberto Maiolino, Mirko Curti, Ryan Sanders, Sandro Tacchella, Lester Sandles | 2023-05-31T18:00:06Z | http://arxiv.org/abs/2306.00069v2 | # Unravelling the Dust Attenuation Scaling Relations and their Evolution
###### Abstract
We explore the dependence of dust attenuation, as traced by the H\({}_{\alpha}\)/H\({}_{\beta}\) Balmer decrement, on galactic properties by using a large sample of SDSS spectra. We use both Partial Correlation Coefficients (PCC) and Random Forest (RF) analysis to distinguish those galactic parameters that directly and primarily drive dust attenuation in galaxies, from parameters that are only indirectly correlated through secondary dependencies. We find that, once galactic inclination is controlled for, dust attenuation depends primarily on stellar mass, followed by metallicity and velocity dispersion. Once the dependence on these quantities is taken into account, there is no dependence on star formation rate. While the dependence on stellar mass and metallicity was expected based on simple analytical equations for the interstellar medium, the dependence on velocity dispersion was not predicted and we discuss possible scenarios to explain it. We identify a projection of this multi-dimensional parameters space which minimises the dispersion in terms of the Balmer decrement and which encapsulates the primary and secondary dependences of the Balmer decrement into a single parameter defined as the reduced mass \(\mu=\log{\rm M_{\star}}+3.67\,[{\rm O/H}]+2.96\log(\sigma_{\nu}/100\ km\ s^{-1})\). We show that the dependence of the Balmer decrement on this single parameter also holds at high redshift, suggesting that the processes regulating dust production and distribution do not change significantly through cosmic epochs at least out to z\(\sim\)2.
keywords: radiative transfer - HII regions - ISM: structure
## 1 Introduction
Although it only constitutes a small fraction of the mass in galaxies, interstellar dust plays an important role in the thermodynamics and chemistry of the interstellar medium, which impacts how galaxies form stars, as well as the reprocessing of light, shaping the galaxies' spectral energy distributions (SEDs) (e.g. Conroy, 2013). Dust can form in the atmospheres around AGB stars, or even in the ejecta of supernovae, and is then released into the interstellar medium (ISM) (Draine, 2011). The formed dust grains have a range of sizes, typically from 5 to 250 nm (Weingartner and Draine, 2001), and the dust grain size and mass evolve through various different mechanisms (Asano et al., 2013) such as growth by accretion from the ISM, coagulation, and destruction by shocks. Dust is formed from various chemical elements, such as Si, C, Fe and H\({}_{2}\)O (Draine, 2003), and so a fraction of the metals in galaxies is stored in the dust. Additionally, dust acts as a catalyst in the formation of H\({}_{2}\), where the Hydrogen atoms adsorb onto the surface of the dust and chemically bond to form molecular Hydrogen (Draine, 2003). An important cooling mechanism in galaxies is via dust (Montier and Giard, 2004; Vogelsberger et al., 2019) through thermal infrared emission of heated dust grains. Cooling by dust facilitates gravitational collapse and fragmentation, hence the formation of (low mass) stars (Schneider et al., 2006).
The most important aspect of dust concerned in this work is its ability to scatter and absorb UV and optical light emitted by stars and by the ISM, and re-emit them at longer IR wavelengths, effectively reshaping the whole SED of the galaxy. These processes cause the effects known as dust extinction, dust attenuation and dust reddening (Draine, 2003). Being able to accurately reverse the effects of dust on galaxy SEDs is essential to accurately determining galaxy parameters and investigating the physics within galaxies that drives their evolution.
Additionally, understanding how dust content scales with other galactic parameters can tell us a lot about how dust forms, and what role it plays in the evolution of galaxies. There are several scaling laws in the literature which relate different galactic parameters, several of which relate (or may relate) to the dust mass, which in turn is related to the dust attenuation in the galaxies.
The Star Forming Main Sequence (SFMS, Brinchmann et al., 2004; Sandles et al., 2022) relates the stellar mass to the star formation rate (SFR), and recent studies suggest that this is actually an indirect relation, a by-product of other more fundamental relations (Lin et al., 2019; Baker et al., 2022b on resolved scales, and Baker et al., 2023 on integrated scales). The relation between the SFR and the stellar mass is also present on resolved kpc to sub-kpc scales, forming the so-called resolved star forming main sequence (rSFMS, Sanchez et al., 2013; Cano-Diaz et al., 2016; Hsieh et al., 2017).
The resolved Molecular Gas Main Sequence (MGMS) relates the
stellar mass surface density to the molecular gas mass surface density, as found in Lin et al. (2019); Morselli et al. (2020); Barrera-Ballesteros et al. (2020); Ellison et al. (2021a,b); Pessa et al. (2021); Baker et al. (2022b). The molecular gas mass surface density is also related to the star formation rate surface density, and this is known as the Schmidt-Kennicutt law (SK, Schmidt 1959; Kennicutt 1998), which is understood as the molecular gas acting as fuel for star formation (Kennicutt 1998).
The Fundamental Metallicity Relation (FMR) empirically relates the stellar mass, SFR and metallicity of the ISM, and has been reported by several authors (Mannucci et al. 2010; Salim et al. 2014; Nakajima & Ouchi 2014; Hunt et al. 2016; Gebhardt et al. 2016; Hirschauer et al. 2018; Curti et al. 2020a; Baker et al. 2022a), with the metallicity depending primarily on the stellar mass and showing a secondary, inverse dependence on the SFR. This is a generalisation of the mass-metallicity relation (MZR, Lequeux et al. (1979)) which is the correlation between the stellar mass and the metallicity. The FMR indicates a non-linear relationship between the stellar mass, metallicity and star formation rate, with the metallicity decreasing with SFR for low masses and becoming almost independent of SFR at higher masses. This relation is believed to hold true up to a redshift of \(z\sim 3\)(Cresci et al. 2019; Sanders et al. 2021).
Various works have investigated the dependence of the dust attenuation, traced by the Balmer decrement (H\({}_{\alpha}\)/H\({}_{\beta}\)), on galactic properties. In particular, Garn & Best (2010) studied a local sample of galaxies at \(z\sim 0.07\), investigating how the dust attenuation determined using the Balmer decrement method depends on the stellar mass, SFR and gas-phase metallicity. They determine a positive, non-linear correlation between the dust attenuation and each of these quantities. However, to understand which parameter is most important in driving the dust attenuation, and which of the other parameters contribute only a secondary dependence through their dependence on the dominant parameter, if there is one, they employ principal component analysis (PCA). This method identifies which parameter causes the most variation in the dust attenuation. They determine the most important parameter to be the stellar mass, and they claim that the dependences of the dust attenuation on the other parameters are all secondary due to their dependence on stellar mass. They argue that the galaxies with a larger stellar mass will have built up a larger reservoir of dust, since dust is produced in stars (Draine, 2003). However, the PCA method is only accurate when there is a simple linear relationship between the quantities, which is not necessarily present here.
Several other works have identified correlations between the dust attenuation and different galactic parameters, such as SFR (Garn et al. 2010), stellar mass (Pannella et al. 2009) and metallicity (Asari et al. 2007), however few have investigated if the correlation identified is a direct correlation or if it is an indirect correlation introduced by secondary correlations between these and other galactic parameters.
To investigate how the dependencies between the dust attenuation and the galactic parameters evolve with redshift, giving insight into both the dust production mechanisms and how these evolve, works such as Shapley et al. (2022) have compared samples of local galaxies and higher redshift galaxies around cosmic noon. Cosmic noon, at \(z\sim 2-3\), is the period where the cosmic average star formation rate was largest, and about half of the stellar mass content of today's galaxies was formed (Madau & Dickinson 2014), making this epoch interesting for examining star formation mechanisms. This epoch is also when most of today's massive disk and elliptical galaxies formed (e.g. Forster Schreiber & Wuyts 2020).
Several studies have identified the dependence of the dust attenuation on the stellar mass as the most important, and some have even observed this relation not evolving up to a redshift of about \(z\sim 2\)(Whitaker et al. 2017; McLure et al. 2018; Shapley et al. 2022), and more recently up to \(z\sim 6.5\)(Shapley et al. 2023). In particular, Shapley et al. (2022) identified no significant evolution in the relationship between the dust attenuation (using both the Balmer decrement and the UV continuum) and the stellar mass between SDSS (\(z\sim 0\)) and MOSDEF (\(z\sim 2.3\)) galaxies, and argue that this lack of evolution can be explained by considering the evolution of the other parameters, such as metallicity, dust mass and gas mass.
There is some evidence that at \(z>3\) the dust attenuation evolves to lower values for a given stellar mass (Fudamoto et al. 2020). Following this, Bogdanoska & Burgarella (2020) compared samples of galaxies from the literature in the redshift range \(0<z<10\) to identify how the UV dust attenuation vs. stellar mass relation may evolve. They assumed a linear fit to the relation for all the samples and then tracked the evolution of this gradient with redshift. From this they conclude that the relation between the dust attenuation and stellar mass evolves across the entire redshift range investigated, with the gradient of the linear fit peaking at cosmic noon and decreasing at higher and lower redshifts. This is in contrast to the results from Shapley et al. (2022), although the two works considered different samples of galaxies.
In this work we wanted to first investigate how the Balmer decrement depends on the different galactic parameters, and in contrast to previous works, disentangle the various dependencies from primary and secondary relations using advanced statistical techniques, such as Random Forest regression and Partial Correlation Coefficient analysis. This part of the work was applied to large sample of local galaxies observed using the Sloan Digital Sky Survey (SDSS, York et al. 2000).
We also wanted to see if these relations evolve with redshift by comparing the local galaxies with samples of galaxies at higher redshift. In this work we used two higher redshift samples, one observed by the K-band Multi Object Spectrograph (KMOS, Sharples et al. 2013) on the VLT, and the other observed by the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE, McLean et al. 2012) on the Keck I telescope.
The layout of this paper is as follows. In Section 2, we discuss our data sources. In Section 3, we present the physics behind the determination of the dust attenuation from the emission line fluxes. In Section 4 we make theoretical predictions on the most important galactic properties in determining the dust attenuation through known scaling laws. In Section 5 we present the statistical analysis tools used in this work. In Section 6 we present results of our statistical analysis and comparison between the samples of galaxies. In Section 7 we conclude the main findings of this work.
## 2 Data and sample selection
In this work we consider data from three spectroscopic surveys: SDSS, i.e. local galaxies at \(z\sim 0\), and KLEVER and MOSDEF, i.e. galaxies at \(z\sim 1-3\).
### 2.1 Local Sample
To understand the dependency of the Balmer decrement on galaxy properties we first explored local galaxies, using the Sloan Digital Sky Survey (SDSS, York et al. 2000) Data release 12 (DR12, Alam et al. 2015). SDSS uses a 2.5-m wide-field telescope at the Apache Point Observatory in New Mexico, US, utilising the \(u,g,r,i\) and \(z\) bands (Fukugita et al. 1996). The spectra of the objects are obtained
by a pair of multi-object double spectrographs with 3-arcsecond diameter fibres, producing a spectral coverage of 3800-9200 A (Abazajian et al., 2009).
The spectroscopic redshifts, emission line fluxes, stellar masses and star formation rates of the SDSS galaxies are calculated by the MPA-JHU group1 providing measurements for 927,552 galaxies at redshifts \(z<0.7\).
Footnote 1: [https://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx?name=galSpecInfo](https://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx?name=galSpecInfo)
#### 2.1.1 Emission Line Fluxes and Nebular Velocity Dispersion
To determine the emission line fluxes of the galaxies in the SDSS survey, the MPA-JHU group subtracted the best fitting stellar population model of the stellar continuum from the spectra for each galaxy, then fit the nebular emission lines (Tremonti et al., 2004). To better measure the weak nebular lines, they fit Gaussians to the spectra simultaneously whilst requiring all the Balmer lines have the same width and velocity offset, and similarly for the forbidden lines, whilst also taking into account the spectral resolution. The nebular velocity dispersion was calculated from the spectra using the width of the emission lines.
#### 2.1.2 Stellar Mass
To calculate the stellar mass of the DR12 galaxies, the observed SEDs of the galaxies are compared to a large number of model SEDs, which tells us of the stellar population in the galaxies, allowing us to calculate the mass to light ratio, and hence the stellar mass of the galaxy (Kauffmann et al., 2003; Salim et al., 2007). These stellar masses were then converted to the Chabrier (2003) initial mass function (IMF).
#### 2.1.3 Star Formation Rate
The star formation rate of galaxies can be calculated in several ways, one of which uses the H\({}_{\alpha}\) luminosity. Young massive stars, such as O/B, emit largely in the UV ionising the gas around them, producing HII regions where recombination produces Balmer lines, with the H\({}_{\alpha}\) line being the brightest. Assuming all H\({}_{\alpha}\) emission is produced in HII regions around O/B stars and all the photons emitted by the star ionise the surrounding Hydrogen, the SFR can be shown to be proportional to the dust corrected H\({}_{\alpha}\) luminosity (Kennicutt Jr & Evans II, 2012). The SFR data taken from the MPA-JHU catalogue was calculated using this method, as is described in Brinchmann et al. (2004), with the H\({}_{\alpha}\) flux being dust corrected using the Balmer decrement method. These SFRs were then converted to the Chabrier (2003) IMF.
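For concreteness, a minimal sketch of this conversion is given below. It is not the MPA-JHU code: the Cardelli et al. (1989) curve values \(k_{\rm H_{\alpha}}=2.53\) and \(k_{\rm H_{\beta}}=3.61\), the Kennicutt & Evans (2012) zero-point, and all input numbers are our own assumptions for illustration.

```python
# Illustrative sketch: Balmer-decrement dust correction of Halpha followed
# by the Kennicutt & Evans (2012) calibration log SFR = log L(Halpha) - 41.27.
import numpy as np

def sfr_halpha(f_ha, f_hb, d_lum_cm, k_ha=2.53, k_hb=3.61):
    """f_ha, f_hb: observed fluxes [erg/s/cm^2]; d_lum_cm: luminosity distance [cm]."""
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / 2.86)  # reddening E(B-V)
    f_corr = f_ha * 10 ** (0.4 * k_ha * ebv)                    # dust-corrected flux
    l_ha = 4 * np.pi * d_lum_cm**2 * f_corr                     # luminosity [erg/s]
    return l_ha / 10 ** 41.27                                   # SFR [Msun/yr]

print(sfr_halpha(4.0e-15, 1.0e-15, 6.0e26))   # hypothetical galaxy, ~0.2 Msun/yr
```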
One issue of this method within the context of this paper is that the SFR derived from H\({}_{\alpha}\) will be strongly correlated with the Balmer decrement (H\({}_{\alpha}\)/H\({}_{\beta}\)), due to the H\({}_{\alpha}\) flux being dust corrected using the Balmer decrement, as well as the fact that H\({}_{\alpha}\) appears in both the SFR and the Balmer decrement. This cross-correlation between the SFR derived from H\({}_{\alpha}\) and the Balmer decrement is discussed in Appendix A. To avoid this spurious correlation affecting our results, we calculated another tracer for the SFR using the D4000 break following the methodology in Bluck et al. (2020). The produced calibration is shown in Appendix B.
#### 2.1.4 Metallicity
The gas-phase metallicity of the galaxies in the SDSS survey was calculated for this work through the strong line calibration method presented in Curti et al. (2020). These calibrations between metallicity and ratios of strong emission lines were determined by using the "direct" electron temperature (T\({}_{\rm e}\)) method (Maiolino & Mannucci, 2019), where electron temperatures, T\({}_{\rm e}\), are measured by stacking thousands of local galaxies to detect auroral lines and then used to infer the metallicity. The strong line ratios used in this work are shown in Table 1, following the same definitions as are used in Curti et al. (2020).
There are various caveats to the different diagnostics, such as some being double valued (Curti et al., 2020). To mitigate these issues, we combine different combinations of the diagnostics for each galaxy as is done in Curti et al. (2020). In this work we chose to use the gas phase metallicity relative to the solar metallicity, [O/H]=12+log(O/H) - 8.69 (Asplund et al., 2009).
#### 2.1.5 Inclination
In this work we explore the dependence of dust attenuation on galaxy inclination. The inclination measurements for the galaxies in this sample were extracted from the Simard et al. (2011) morphological catalogue. To determine the morphological parameters such as the galaxy inclinations, they used a galaxy model with the sum of a pure exponential disk and a de Vaucouleurs bulge (Sersic index n\({}_{\rm B}\)=4).
#### 2.1.6 Selection Criteria
The DR7 sample consists of 927,552 galaxies. To reduce the effect of noise contributed by the sky background, the detector and the fluctuations in the source itself, we set signal-to-noise cuts on the line fluxes. Following Mannucci et al. (2010) and Hayden-Pawson et al. (2022), we adopt a high signal to noise ratio on the H\({}_{\alpha}\) line (\(>20\sigma\)); this is to reduce potential biases in determining the metallicity which may arise by imposing cuts on weaker optical emission lines. We also set a signal-to-noise cut on H\({}_{\beta}\) of \(2\sigma\) to ensure the measured Balmer decrement is reliable. In addition to this, when determining the metallicities of the galaxies, any diagnostic with lines detected
| Notation | Line Ratio |
| --- | --- |
| R\({}_{2}\) | [OII]\(\lambda\)3727,29/H\({}_{\beta}\) |
| R\({}_{3}\) | [OIII]\(\lambda\)5007/H\({}_{\beta}\) |
| N\({}_{2}\) | [NII]\(\lambda\)6584/H\({}_{\alpha}\) |
| S\({}_{2}\) | [SII]\(\lambda\)6717,31/H\({}_{\alpha}\) |
| R\({}_{23}\) | ([OII]\(\lambda\)3727,29 + [OIII]\(\lambda\)4959,5007)/H\({}_{\beta}\) |
| O\({}_{3}\)O\({}_{2}\) | [OIII]\(\lambda\)5007/[OII]\(\lambda\)3727,29 |
| RS\({}_{32}\) | [OIII]\(\lambda\)5007/H\({}_{\beta}\) + [SII]\(\lambda\)6717,31/H\({}_{\alpha}\) |
| O\({}_{3}\)S\({}_{2}\) | ([OIII]\(\lambda\)5007/H\({}_{\beta}\))/([SII]\(\lambda\)6717,31/H\({}_{\alpha}\)) |
| O\({}_{3}\)N\({}_{2}\) | ([OIII]\(\lambda\)5007/H\({}_{\beta}\))/([NII]\(\lambda\)6584/H\({}_{\alpha}\)) |

Table 1: Definitions of the emission line ratios used to determine the metallicity of the galaxies through strong-line calibrations, following Curti et al. (2020). [OII]\(\lambda\)3727,29 is shorthand for [OII]\(\lambda\)3727 + [OII]\(\lambda\)3729.
above the \(3\sigma\) level were combined to calculate the metallicity following Curti et al. (2020).
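Schematically, the combination of diagnostics amounts to minimising a joint \(\chi^{2}\) over the available calibrations. The sketch below illustrates the idea only (it is not the Curti et al. 2020 code; the polynomial coefficients and the measurements are hypothetical placeholders).

```python
# Illustrative sketch: each calibration is a polynomial log R = f(x) with
# x = 12 + log(O/H) - 8.69; the adopted metallicity minimises a joint chi^2.
import numpy as np
from scipy.optimize import minimize_scalar

calibrations = {                          # hypothetical polynomial coefficients
    "R3": [-0.28, -3.55, -3.59, -0.98],   # constant term first
    "N2": [-0.49, 1.28, 0.31, -0.14],
}
measured = {"R3": (0.45, 0.05), "N2": (-0.60, 0.08)}   # (log ratio, error)

def chi2(x):
    return sum(
        ((logr - np.polyval(list(reversed(calibrations[d])), x)) / err) ** 2
        for d, (logr, err) in measured.items()
    )

best = minimize_scalar(chi2, bounds=(-1.5, 0.5), method="bounded")
print("12 + log(O/H) =", 8.69 + best.x)
```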
The metallicity diagnostics used in this work relied on only star forming galaxies (SFGs) in the sample, which were selected using BPT emission line diagnostic diagrams (Baldwin et al., 1981), which compare the [OIII]\(\lambda\)5007/H\({}_{\beta}\) line ratio against the [NII]\(\lambda\)6583/H\({}_{\alpha}\) line ratio. We used the Kauffmann et al. (2003) demarcation line to define SFGs in this work. Due to the spectral resolution of SDSS being 2000, we also selected only galaxies with log nebular velocity dispersion above \(\log_{10}(\sigma_{\rm H_{\alpha}}[{\rm km/s}])>1.75\).
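For illustration, the Kauffmann et al. (2003) demarcation can be applied as in the sketch below (our own minimal implementation; the input ratios are hypothetical).

```python
# Illustrative sketch: classify galaxies as star forming on the [NII]-BPT
# plane using the Kauffmann et al. (2003) demarcation line.
import numpy as np

def is_star_forming(log_n2ha, log_o3hb):
    """True when the point lies below the Kauffmann+03 line."""
    log_n2ha = np.asarray(log_n2ha, dtype=float)
    log_o3hb = np.asarray(log_o3hb, dtype=float)
    below = log_o3hb < 0.61 / (log_n2ha - 0.05) + 1.3
    return (log_n2ha < 0.05) & below

print(is_star_forming(-0.5, 0.0))   # a typical star-forming galaxy -> True
```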
To ensure not just the central region of the galaxies was being sampled, we enforced the projected fibre aperture to be at least 2kpc, which set a lower limit on the redshift of \(z=0.043\), since the aperture diameter was 3 arcseconds. As is explained in Section 6.1.1, the galaxies were also selected such that their inclination was less than \(45\degr\), to minimise the effect of increased dust attenuation with increased inclination. The result of these selection criteria reduced the local sample to 21,488 galaxies.
### 2.2 High Redshift Samples
To see how the relations between dust attenuation and other galaxy properties evolve, we studied samples of galaxies at higher redshifts observed spectroscopically in the near-IR. The first sample we investigated at higher redshift was the near-infrared KMOS Lensed Velocity and Emission Line Review (KLEVER, Curti et al., 2020). KLEVER is an ESO Large Programme which has observed 192 galaxies in the redshift range \(1.2<z<2.5\) using the K-band Multi Object Spectrograph (KMOS) on the VLT (Sharples et al., 2013). KMOS is a near-IR multi-object spectrograph using integral-field units (IFUs) observing in the YJ, H and K bands.
We additionally used galaxies taken from the MOSFIRE Deep Evolution Field (MOSDEF, Kriek et al., 2015) survey to compare with our local galaxies (SDSS). This survey is observed with the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE, McLean et al., 2012) on the Keck I telescope observing in the Y, J, H and K bands. The MOSDEF survey measured the rest frame optical spectra of 1,824 galaxies in the three redshift intervals, \(1.37<z<1.7\), \(2.09<z<2.61\) and \(2.95<z<3.80\), which were selected such that the brightest rest-optical emission lines fall within the atmospheric transmission windows. Because of this, the galaxies in the higher redshift interval (\(2.95<z<3.80\)) were ignored in this analysis, as the H\({}_{\alpha}\) line was redshifted out of all the bands used on MOSFIRE. In this work we only used the galaxies in the \(2.09<z<2.61\) redshift bin.
The integral field unit (IFU) observations from KMOS allowed us to perform very accurate measurements of the Balmer decrement at high redshift. MOSFIRE instead observes emission lines through single slits which require slit loss corrections, which are prone to vary with operation time. However, the effect of these slit loss corrections is shown to be insignificant in determining the Balmer decrement due to the relative flux calibrations between different bands agreeing to within 13% by comparing the MOSFIRE spectra with photometric SED models (Kriek et al., 2015). The line flux measurements were additionally compared with 3D-HST grism line fluxes in Kriek et al. (2015) finding good agreement. The grism spectra do not have any slit aperture, much like IFU observations, which suggests the slit loss corrections are robust and the aperture effects are not significant.
Additionally, the galaxies in the KLEVER survey were selected using H\({}_{\alpha}\) emission in the rest-frame optical, from the 3D-HST survey (Curti et al., 2020). The galaxies in the MOSDEF survey are instead selected using the flux in the H band (dominated by the optical, rest-frame stellar continuum), aiming at obtaining a flat distribution in stellar mass (Kriek et al., 2015). The two selection criteria may potentially have different selection (bias) effects in terms of dust attenuation. The MOSDEF selection criteria do lead to a mass complete sample at \(\log(M_{\star}/M_{\odot})<10.5\), however at higher stellar masses the survey had its lowest mass completeness for red dusty star forming galaxies (Kriek et al., 2015; Runco et al., 2022). Hence the dust attenuation of the galaxies in the MOSDEF survey will be lower than expected at stellar masses above \(10^{10.5}\)M\({}_{\odot}\).
More recently, results have been reported for Balmer decrements at even higher redshift by using NIRSpec-JWST data (e.g. Shapley et al., 2023); however, not all the information required for our analysis (SFR, metallicity and velocity dispersion) is provided for those galaxies, so they are not considered in this study. Moreover, the strongly wavelength-dependent slit losses of the small NIRSpec shutters, convolved with the galaxy sizes, make line ratios spanning a large wavelength range uncertain. For these reasons, in this work we mostly focus on the KLEVER and MOSDEF samples at z\(\sim\)1-2.
In the following section we provide additional information on how the measurements of the galaxy parameters used in this work were extracted from these two surveys.
#### 2.2.1 KLEVER
The emission line fluxes and widths were measured for the KLEVER survey as described in Hayden-Pawson et al. (2022). To determine the emission line fluxes and widths, first a linear continuum was subtracted from the spectrum whose slope and normalisation were free parameters. They did not fit a proper stellar continuum since the observed continuum was so faint. All emission lines within the same observing band were fit simultaneously with Gaussian curves of equal width, but across bands the widths were allowed to vary to account for different resolving powers within each band in KMOS. The nebular velocity dispersion was calculated in a consistent fashion with SDSS.
To correct for the stellar absorption of the Balmer emission lines (namely H\({}_{\alpha}\) and H\({}_{\beta}\)), we calculated the stellar continuum from photometry for each galaxy. We used the Bayesian SED fitting code beagle (Chevallard & Charlot, 2016) to perform SED modeling of publicly available photometry (Merlin et al., 2016; Criscienzo et al., 2017; Bradac et al., 2019) for all of the objects in our sample with the aim of producing continuum-only spectra (without emission lines). We use a Chabrier (2003) IMF and assume a delayed exponential star-formation history. Redshifts were fixed to their spectroscopic values. These continuum spectra were then normalised to the continuum around the Balmer lines in each band and subtracted from the integrated spectra. The H\({}_{\alpha}\), H\({}_{\beta}\) and [NII] doublet lines were simultaneously fit with Gaussians which had their redshifts and widths tied. Three Gaussians were fit to deblend the H\({}_{\alpha}\) and [NII] doublet, and the amplitudes of the two [NII] Gaussians were fixed to have a ratio of 3:1.
The galaxy stellar mass is taken from the KMOS\({}^{3D}\) data release (Wisnioski et al., 2019) for KMOS, calculated through SED fitting following the methodology in Wuyts et al. (2011), similar to that in the SDSS sample. For the lensed galaxies in the sample, the stellar masses were calculated in Concas et al. (2022) by using SED fitting following the methodology in Curti et al. (2020).
We calculated the metallicity measurements for KLEVER following the same choice of metallicity diagnostics as for SDSS (Curti et al., 2020).
The KLEVER sample provided by Hayden-Pawson et al. (2022)
consisted of 192 galaxies. The only selection criteria we enforce are that both the H\({}_{\alpha}\) and H\({}_{\beta}\) lines are clearly detected. As a criterion, we conservatively use both the uncertainty from the Monte Carlo fitting (requiring a signal-to-noise ratio S/N\(>\)2) and the nominal uncertainties on the spectrum, summed in quadrature over the FWHM of the line and centred at the nominal line location (requiring S/N\(>\)3). The uncertainty from the Monte Carlo fitting of the fluxes is determined by perturbing the spectra with Gaussian noise randomly extracted from the noise spectra, repeating the fit one hundred times, and taking the 16th and 84th percentiles of the 2.5-sigma clipped resulting distribution of fluxes. In addition to this, when determining the metallicities of the galaxies, any diagnostic with lines detected above the 3\(\sigma\) level were combined to calculate the metallicity following Curti et al. (2020). We then required the galaxies to have at least two diagnostics available, else the resulting metallicity may have strong biases.
The BPT selection that was applied to the galaxies in the SDSS survey is not valid at this redshift, since the demarcation derived by Kauffmann et al. (2003) is for local galaxies. To remove AGNs and ensure only star forming galaxies were considered in our analysis, we combined a visual inspection of the spectra with the information from their X-ray luminosities. If a galaxy has X-ray luminosity greater than \(2\times 10^{42}\) erg s\({}^{-1}\), it was considered an AGN and removed from our sample. After these cuts, 51 galaxies were left in our sample.
#### 2.2.2 MOSDEF
The emission-line fluxes were measured for the MOSDEF survey as described in Kriek et al. (2015). The systemic redshift was measured from the highest signal-to-noise emission line, usually the H\({}_{\alpha}\) or the [OIII]\(\lambda\)5008 line. Line fluxes were measured by fitting Gaussian profiles over a linear continuum, where the centroids and widths were allowed to vary. Uncertainties on the line fluxes were estimated using a Monte Carlo method where the spectrum was perturbed according to the error spectrum and the line fluxes were remeasured. This process was repeated 1000 times, and the uncertainty on the line flux was taken to be the 84-16th percentile range of the resulting distribution. This method also produced the emission lines respective FWHMs, which were converted to the velocity dispersion. This data is publicly available2.
Footnote 2: [https://mosdef.astro.berkeley.edu/for-scientists/data-releases/](https://mosdef.astro.berkeley.edu/for-scientists/data-releases/)
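The Monte Carlo estimate described above can be sketched generically as follows (a toy spectrum and Gaussian fit for illustration; this is not the MOSDEF pipeline).

```python
# Illustrative sketch: perturb a spectrum by its error spectrum, refit the
# line, and take the 84th-16th percentile spread as the flux uncertainty.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
gauss = lambda x, amp, mu, sig: amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

x = np.linspace(-10.0, 10.0, 200)
err = 0.05 * np.ones_like(x)                         # toy error spectrum
spec = gauss(x, 1.0, 0.0, 1.5) + rng.normal(0, err)  # toy emission line

fluxes = []
for _ in range(1000):
    pert = spec + rng.normal(0, err)                 # perturb by the error spectrum
    p, _ = curve_fit(gauss, x, pert, p0=[1.0, 0.0, 1.0])
    fluxes.append(p[0] * abs(p[2]) * np.sqrt(2 * np.pi))   # Gaussian line flux
lo, hi = np.percentile(fluxes, [16, 84])
print("flux uncertainty ~", 0.5 * (hi - lo))
```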
Since a stellar continuum was not able to be measured, the Balmer lines were corrected for underlying stellar atmospheric absorption (typically from type A stars) by modelling the galaxy stellar populations, as described in Reddy et al. (2015).
The stellar mass measurements for the MOSDEF galaxies were calculated by Sanders et al. (2021) using SED fitting to the photometry for the galaxies in the \(z\sim 2.3\) redshift interval. This data was made available upon direct request to the authors.
The metallicity measurements for MOSDEF were calculated in this work following the same metallicity diagnostics as for SDSS and KLEVER from Curti et al. (2020).
AGNs were identified and removed from the galaxies used in this work provided by Sanders et al. (2021) using their X-ray and infrared properties, as well as their value of \(\log(\mathrm{[NII]}/\mathrm{H}_{\alpha})\)(Coil et al., 2015; Azadi et al., 2017).
The only flat signal-to-noise cuts were made on the H\({}_{\alpha}\) and H\({}_{\beta}\) lines, setting them to be greater than 3 for each line. In addition to this, when determining the metallicities of the galaxies following the same metallicity diagnostics as for SDSS and KLEVER from Curti et al. (2020), any diagnostic with lines detected above the 3\(\sigma\) level were combined to calculate the metallicity, requiring the galaxies to have at least two diagnostics available. These cuts reduced the sample to 188 galaxies.
## 3 Balmer decrement, reddening and dust attenuation
In this work, to measure the dust attenuation, \(A_{\lambda}\), from observational data we use the Balmer decrement method. The Balmer decrement is defined as the ratio of the flux from the H\({}_{\alpha}\) to the H\({}_{\beta}\) emission lines. If we assume a Case B recombination, temperature of T = \(10^{4}\)K and an electron density of \(n_{e}=10^{2}\)cm\({}^{-3}\), as is done in many similar studies (Garn & Best, 2010; Piotrowska et al., 2020; Reddy et al., 2020), the Balmer decrement has an intrinsic value of 2.86. Recent work (Tacchella et al., 2022) suggests that the intrinsic Balmer decrement may be slightly higher when the contribution to the Balmer decrement from collisional ionisation is taken into account.
To determine the dust attenuation \(A_{\lambda}\) from these Balmer line fluxes, we first define the attenuation curve, \(k_{\lambda}\), related to the dust attenuation and the reddening, \(E(B-V)\), through the definition
\[k_{\lambda}=A_{\lambda}/E(B-V) \tag{1}\]
Some attenuation laws (Calzetti et al., 2000; Reddy et al., 2015) are derived using empirical methods that compare the observed SEDs of galaxies with the SEDs of galaxies assumed to be unattenuated. Another method to determine the attenuation law is SED fitting to a model of galaxy spectra built theoretically (Buat et al., 2012; Kriek & Conroy, 2013). Both these methods are explained in depth in the review by Salim & Narayanan (2020).
It can be shown that the dust attenuation \(A_{\lambda}\) is related to the Balmer decrement through
\[A_{\lambda}=-2.5\frac{k_{\lambda}}{k_{\mathrm{H}_{\alpha}}-k_{\mathrm{H}_{ \beta}}}\log_{10}\left(\frac{F_{\mathrm{H}_{\alpha}}/F_{\mathrm{H}_{\beta}}}{2. 86}\right) \tag{2}\]
where \(k_{\mathrm{H}_{\alpha}}\) and \(k_{\mathrm{H}_{\beta}}\) are the values of the attenuation curve at each Balmer wavelength, and \(F_{\mathrm{H}_{\alpha}}\) and \(F_{\mathrm{H}_{\beta}}\) are the observed fluxes of each Balmer line. Due to this relation, the Balmer decrement is a measure of the reddening of the spectra; to then measure the dust attenuation one must assume an attenuation curve, which itself could depend on attenuation (e.g. Chevallard et al., 2013; Tacchella et al., 2022).
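For reference, a minimal sketch of how Equations 1 and 2 can be evaluated in practice is given below, assuming the Calzetti et al. (2000) attenuation curve; the curve values at the Balmer wavelengths (\(k_{\mathrm{H}_{\alpha}}\approx 2.53\), \(k_{\mathrm{H}_{\beta}}\approx 3.61\)) are taken from that work, while the function names are our own.

```python
import numpy as np

# Calzetti et al. (2000) attenuation-curve values at the Balmer wavelengths
K_HALPHA, K_HBETA = 2.53, 3.61
BD_INTRINSIC = 2.86  # Case B, T = 1e4 K, n_e = 1e2 cm^-3

def reddening(f_halpha, f_hbeta):
    """E(B-V) from the observed Balmer decrement (Equations 1 and 2)."""
    return 2.5 / (K_HBETA - K_HALPHA) * np.log10(f_halpha / f_hbeta / BD_INTRINSIC)

def attenuation(f_halpha, f_hbeta, k_lambda):
    """A_lambda in magnitudes at a wavelength with curve value k_lambda."""
    return k_lambda * reddening(f_halpha, f_hbeta)

# An observed Balmer decrement of 4.0 gives E(B-V) ~ 0.34 and A(Halpha) ~ 0.85
print(attenuation(4.0, 1.0, K_HALPHA))
```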
## 4 Expected attenuation dependencies on Galactic properties
Considering the galactic scaling laws from the literature, it is possible to make theoretical expectations on how the dust content, hence the dust attenuation and its observational proxy, the Balmer decrement, scales with the galactic parameters discussed so far.
We expect that the dust attenuation, \(A_{\lambda}\), scales with dust mass, \(M_{d}\), as well as geometric factors, \(\gamma_{g}\). Hence \(A_{\lambda}\propto M_{d}\gamma_{g}\). Geometrical factors, such as the configuration of the stars, gas and dust within the galaxies, are difficult to constrain and are not considered in depth in this work. However, we can relate the dust mass to the other galactic properties. First, the dust mass scales with the mass of the gas in the galaxy, \(M_{g}\), through the dust to gas ratio, DGR, allowing us to write \(M_{d}=\mathrm{DGR}\times M_{g}\). Additionally, the dust mass scales with the mass of the metals, \(M_{Z}\), through the dust to metal ratio, DZR, allowing us
to write \(M_{d}=\mathrm{DZR}\times M_{Z}\). We therefore can relate the DGR to the gas metallicity (\(Z_{\mathrm{g}}=\frac{M_{Z}}{M_{g}}\)) and DZR, giving
\[\mathrm{DGR}=\frac{M_{d}}{M_{g}}=\frac{M_{d}}{M_{Z}}\times\frac{M_{Z}}{M_{g}}= \mathrm{DZR}\times Z_{g} \tag{3}\]
We can relate the gas mass to the stellar mass through the MGMS (Molecular Gas Main Sequence, Lin et al., 2019), \(M_{g}=k_{MGMS}\times M_{\star}\), where \(k_{MGMS}\) is a proportionality constant. Hence we can write the following equation to determine how the dust mass, and hence dust attenuation, should scale with the galactic parameters
\[M_{d}=\mathrm{DZR}\times Z_{g}\times k_{MGMS}\times M_{\star} \tag{4}\]
where the metallicity \(Z_{g}\) depends strongly on the stellar mass, and has a secondary inverse correlation with SFR (Curti et al., 2020). Since the DZR is approximately constant, these relations suggest the stellar mass will be the most important parameter in determining the dust mass, and so the dust attenuation. This follows from the assertion that all dust is produced in stars, and so if a galaxy has a larger stellar mass, it will likely have more dust. Equation 4 also suggests the dust attenuation will depend indirectly on the SFR and the metallicity.
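As a purely illustrative check of these scalings, the snippet below evaluates Equation 4; the numerical values of DZR and \(k_{MGMS}\) are hypothetical placeholders, not values derived in this work.

```python
DZR = 0.4     # assumed constant dust-to-metal ratio (placeholder value)
K_MGMS = 0.1  # assumed M_gas / M_star normalisation of the MGMS (placeholder)

def dust_mass(m_star, z_gas):
    """Dust mass from Eq. 4: M_d = DZR * Z_g * k_MGMS * M_star."""
    return DZR * z_gas * K_MGMS * m_star

# Doubling the stellar mass doubles the dust mass at fixed metallicity,
# while the metallicity enters linearly through the dust-to-gas ratio.
print(dust_mass(1e10, 0.02) / dust_mass(5e9, 0.02))  # -> 2.0
```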
## 5 Data analysis methods and statistical analysis
Based on the simple modelling and assumptions described in Section 4 we have identified some of the galaxy properties which should be observationally more strongly related to dust content - this includes the stellar mass, star formation rate and metallicity, as is also suggested in Garn and Best (2010). To determine which are most important for our local sample of galaxies (SDSS), in this work we combine Partial Correlation Coefficient (PCC) analysis and Random Forest (RF) analysis. These two methods are described in the following sections.
### Partial Correlation Coefficient Analysis
Partial correlation coefficient (PCC) analysis (Lawrance, 1976) is a useful tool to describe the correlation between two quantities whilst controlling for the others. This allows us to disentangle primary correlations from indirect, secondary, correlations.
The PCC for variable A with variable B, fixing for variable C, \(\rho_{AB|C}\), is related to the Spearman rank correlation coefficient between A and B, \(\rho_{AB}\), and other combinations of the correlations between these variables. Specifically,
\[\rho_{AB|C}=\frac{\rho_{AB}-\rho_{AC}\rho_{BC}}{\sqrt{1-\rho_{AC}^{2}}\sqrt{1 -\rho_{BC}^{2}}} \tag{5}\]
as in Lawrance (1976).
We recall that the use of the Spearman rank correlation is advantageous over the Pearson correlation since the Spearman rank correlation first rank-orders the parameters, which relaxes the assumption of linearity between the parameters in favour of monotonicity; this is useful in this work due to the non-linearity of many of the predicted relations (Bluck et al., 2020; Baker et al., 2022). See Baba et al. (2004) for further details.
The PCCs can be expanded to include more than three variables by using the methods provided in the _pingouin_ (Vallat, 2018) package. Nevertheless, controlling only for the two most important variables is often adopted, as this maximises performance and accuracy.
These coefficients can also be used to identify the direction of maximum variance of variable C in the parameter space defined by parameters A and B. On a plot of variable A on the y-axis against variable B on the x-axis, with variable C on the z-axis (e.g. colour-coded), an arrow can be drawn in the x-y plane with angle \(\theta\) clockwise from the positive y-axis, denoting the direction of maximum variation, or largest gradient, in variable C. Such arrow angles can be quantified by using the PCCs through the following equation,
\[\tan\theta=\frac{\rho_{AC|B}}{\rho_{BC|A}}, \tag{6}\]
adapted from Piotrowska et al. (2020); Bluck et al. (2020).
To determine the error on the PCCs and \(\theta\) in this method when applied to the galaxies in the samples, bootstrap random sampling was used, taking 100 random samples of the data with replacement, with each sample the same size as the original dataset, and computing the standard deviation on these results, as is done in Baker et al. (2022).
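A minimal sketch of this procedure, combining Equations 5 and 6 with the bootstrap error estimate, could look as follows; the function names are ours, and the input arrays hold one value per galaxy (or per bin).

```python
import numpy as np
from scipy.stats import spearmanr

def pcc(a, b, c):
    """Partial Spearman correlation of a with b, controlling for c (Eq. 5)."""
    r_ab = spearmanr(a, b).correlation
    r_ac = spearmanr(a, c).correlation
    r_bc = spearmanr(b, c).correlation
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

def arrow_angle(x, y, z):
    """Arrow angle theta, clockwise from the positive y-axis (Eq. 6),
    with A the y-axis variable, B the x-axis variable and C colour-coded."""
    return np.degrees(np.arctan2(pcc(y, z, x), pcc(x, z, y)))

def arrow_angle_error(x, y, z, n_boot=100, seed=0):
    """Bootstrap standard deviation of the arrow angle."""
    rng = np.random.default_rng(seed)
    angles = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
        angles.append(arrow_angle(x[idx], y[idx], z[idx]))
    return np.std(angles)
```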
We note that PCCs can provide a useful indication of the direct correlations as long as these are monotonic.
### Random Forest Analysis
In this work we also use Random Forest (RF) analysis. This is a widely used machine learning method, which uses decision trees to determine which parameters are most important in predicting the target variable.
We used Random Forest regression to determine the parameter importances in predicting the target variable (the Balmer decrement in this work). The data is split into a train and test sample, with a 50:50 split, where the train sample is used to train the regressor, and the test sample is used to evaluate the accuracy of the regressor. We used the Random Forest regressor from the python _Scikit-learn_ package (Pedregosa et al., 2011).
Compared to PCC analysis, RF analysis does not require the variables to have a monotonic relationship and can simultaneously explore the dependence on multiple inter-correlated quantities (Bluck et al., 2020a,b). PCC analysis additionally tells us the direction of the dependence, whereas RF does not.
To maximise the efficiency and accuracy of the regressor, we fine-tune the minimum number of samples on the final leaf, which dictates how many splits the trees need to make in the training of the regressor. If this hyper-parameter is set too low, the regressor has a tendency to overfit to the training sample, however if the value is too large the accuracy of the regressor is low. The results of the fine-tuning for the local galaxies are presented in Appendix C.
The error on the determined importances were calculated by repeating the whole process 100 times, re-splitting the data and re-training the regressor each time. The error on the importances was then taken as the standard deviation of the calculated importances for each parameter.
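A minimal sketch of this procedure, assuming _Scikit-learn_, is given below; the feature matrix `X`, target `y` and the placeholder value of `min_samples_leaf` are ours, the latter standing in for the tuned value of Appendix C.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def rf_importances(X, y, n_repeats=100, min_samples_leaf=15):
    """Mean feature importances and errors from repeated 50:50 re-splits."""
    importances = []
    for seed in range(n_repeats):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.5, random_state=seed)
        rf = RandomForestRegressor(
            n_estimators=100, min_samples_leaf=min_samples_leaf,
            random_state=seed).fit(X_train, y_train)
        # Comparing the train and test scores checks for overfitting
        train_r2, test_r2 = rf.score(X_train, y_train), rf.score(X_test, y_test)
        importances.append(rf.feature_importances_)
    importances = np.array(importances)
    return importances.mean(axis=0), importances.std(axis=0)
```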
For further detail on Random Forest analysis, see Bluck et al. (2022).
## 6 Results
In this section we present the results of our statistical analysis using both PCC and RF. We also explore the dependence of the Balmer decrement on the various parameters and identify projections of these multi-dimensional parameter spaces that minimise the scatter of the individual relations, hence finding analytical relations that
simultaneously describe the dependence of the Balmer decrement on multiple galactic quantities.
We then use these projections to compare the derived analytical relations between local galaxies (SDSS) and galaxies at \(z\sim 1-3\) (KLEVER and MOSDEF).
### SDSS
#### 6.1.1 Random Forest
We investigate the importance of various galactic parameters in predicting the Balmer decrement using Random Forest regressors in order to explore the theoretical framework described above. Specifically, we investigate the importance of the following galactic parameters: stellar mass (M\({}_{\star}\)), velocity dispersion inferred from H\({}_{\alpha}\) (\(\sigma_{\rm H\alpha}\)), gas-phase metallicity ([O/H]), star formation rate inferred from the D4000 break (SFR\({}_{\rm D4}\)), inclination of the galaxy (i) and a control random variable (R) to test how valid the calculated importances are. The details of the tuning of the hyperparameters used in the Random Forest, in order to increase the accuracy of the regressor, can be found in Appendix C.
The importance of the velocity dispersion of the stars is additionally investigated in Appendix D, showing it has a similar importance compared to the nebular velocity dispersion, however stellar velocity dispersion data was not available for the higher redshift galaxies, and so was not used further in this work.
We do not include the SFR inferred from the H\({}_{\alpha}\) emission line, as the same quantity enters into the Balmer decrement. Moreover, the H\({}_{\alpha}\) flux is itself corrected for extinction using the Balmer decrement; together, these aspects result in a spurious correlation with the Balmer decrement. This is discussed further in Appendix A.
The Random Forest regressor was first run on our sample of local galaxies with no pre-selection on their inclination. The importance of each parameter, along with its error, in determining the Balmer decrement is shown in Figure 1 as the blue bars with circles. This tells us that the stellar mass is the most important parameter in determining the Balmer decrement by far (consistent with previous studies e.g. Garn & Best 2010), followed by the inclination, \(i\).
From Figure 1 we can see that the inclination is very important. This makes sense physically, since if we see a galaxy edge-on, the flux from the galaxy will encounter more dust on average as it travels to us compared with if the galaxy was face-on, implying the dust attenuation will be larger. Hence, this importance is not intrinsic to how the dust attenuation or dust content are related to the galaxy properties, but simply a consequence of the viewing angle. To reduce this effect and focus on more fundamental parameters, we pre-selected galaxies in terms of their inclination such that its importance was as small as possible, whilst maintaining a large enough sample of galaxies to which we can confidently apply our statistical tools. We investigated this by cutting the inclination to be less than 60\({}^{\circ}\), 45\({}^{\circ}\) and 30\({}^{\circ}\), with 90\({}^{\circ}\) being edge-on and 0\({}^{\circ}\) being face-on. The inclination importance dropped significantly, and we found that a sample with inclination less than 45\({}^{\circ}\) had negligible inclination importance (as quantified further below) whilst maintaining a large sample. This cut the sample from 65,613 galaxies to 21,488 galaxies, whilst maintaining all other signal-to-noise selection criteria discussed in Section 2.1.6.
Using the sample of local galaxies with inclination \(i<45^{\circ}\) we re-calculated the importance of each galactic parameter in determining the Balmer decrement. These importances and their errors are shown as the green bars with stars in Figure 1. The stellar mass is still the most important parameter, now followed by the velocity dispersion and then by the metallicity. The SFR has very little importance, barely above the random variable. The inclination is now as important as the random variable, R. Therefore, this selection on the inclination was adopted for this work, and henceforth all local galaxies analysed will have inclinations \(i<45^{\circ}\) unless specified otherwise, to control for its effect on the Balmer decrement.
#### 6.1.2 2D Histogram Visualisation and PCC Arrows
To better visualise the relative importance of the parameters, in Figure 2 we plot the local galaxies used in this work (with inclination \(i<45^{\circ}\)) in a 2D binning scheme. Since the stellar mass is found by the Random Forest to be the most important parameter, we keep the x-axis as stellar mass, and vary the y-axis between the SFR\({}_{\rm D4000}\), metallicity and the nebular velocity dispersion. The dependent variable (i.e. the one for which we want to find the dependence on the other quantities), on the z-axis (colour-coded), is always the Balmer decrement. The galaxies were binned in hexagonal bins, and the median Balmer decrement of the galaxies in each bin was calculated. Bins with less than 25 galaxies were ignored. The contours show the density of the galaxies in this space, with the outermost contour containing 95% of the galaxies in the sample. The contours in Figure 2 (c) do not connect due to the sharp cut in the velocity dispersion, producing a discontinuity in the density distribution.
The PCC arrows were calculated using the binned galaxy parameters rather than the individual galaxies, to avoid the analysis being dominated by the inner, most populated regions. Here the PCC-derived arrows indicate the direction in which the Balmer decrement has the largest average gradient on the 3D surface of each diagram, with its angle defined clockwise from the positive y-axis. The error on the angles was calculated through bootstrap random sampling.
The colour-shading and gradient arrows visually illustrate how the Balmer decrement depends on all of these parameters with varying strength. Considering the angles of the arrows on each of the plots,
Figure 1: Relative importance of the different galactic parameters in determining the Balmer decrement for local galaxies (SDSS) calculated using Random Forests. Green bars with stars show importances for galaxies with inclination \(i<45^{\circ}\), while blue bars with circles show the importance for galaxies with all inclinations. Stellar mass is the most important parameter in both samples. Selecting for the inclination \(i<45^{\circ}\) reduces the importance of the inclination to be negligible. SFR\({}_{\rm D4}\) is the SFR derived from D4000. The parameter R is a control, random variable.
the Balmer decrement has a strong correlation with the stellar mass. The colour shading and PCC arrow in panel (a) visually confirm that, at fixed stellar mass, there is essentially no dependence of the Balmer decrement on SFR. Panels (b) and (c) visually show that, at a fixed stellar mass, the Balmer decrement also depends significantly on both the metallicity and the velocity dispersion. The inclination of the PCC arrows in (b) and (c) being close to \(45^{\circ}\) would naively indicate that the dependence on metallicity and velocity dispersion is stronger than inferred from the Random Forest, and nearly as strong as the dependence on the stellar mass. However, one has to take into account that these 2D histograms only consider the Balmer decrement dependence on two quantities at a time, so any residual dependence not associated with the quantities on the plot must be picked up by one of them. Therefore, it is likely that in panel (b) the metallicity is also picking up the Balmer decrement dependence on the velocity dispersion, and vice-versa in panel (c) the velocity dispersion is picking up the Balmer decrement dependence on the metallicity, if metallicity and velocity dispersion are correlated with each other.
#### 6.1.3 Partial Correlation Coefficients
In this section we further investigate the importance of the various parameters identified in the previous sections by using the Partial Correlation Coefficient (PCC) analysis on all parameters whilst keeping the two most important parameters constant. With respect to the Random Forest analysis, the (full) PCC additionally tells us the direction
Figure 2: Star formation rate (estimated from D4000), metallicity and velocity dispersion (normalised by 100 km/s) as a function of stellar mass, colour-coded by the Balmer decrement (i.e. 2D histograms in which the Balmer decrement is the dependent variable), for local galaxies (SDSS). The grey arrows denote the direction in which the Balmer decrement has the largest gradient, determined using the PCCs, with its angle defined clockwise from the positive y-axis. The colour gradients and arrows clearly indicate a strong dependence on stellar mass and, at a given stellar mass, also a strong dependence on both metallicity and velocity dispersion, but little or no dependence on SFR. The black contours indicate the density of the galaxies in each diagram, with the outermost contour containing 95% of the galaxies. The contours in (c) do not join due to the sharp cut in \(\sigma_{\mathrm{H}_{\alpha}}\).
(sign) of the dependence. Again we apply this analysis to both samples of local galaxies with and without a selection on their inclination in order to explore its effect.
To determine the PCCs for this sample of local galaxies with no cut on their inclination, we kept the two most important parameters constant. The PCCs between the Balmer decrement and the galaxy parameters deemed important in this analysis (stellar mass, metallicity, velocity dispersion, inclination and the D4000-derived SFR) are shown in Figure 3. A similar analysis including the SFR derived from H\({}_{\alpha}\) is shown in Appendix A, supporting the conclusion that the relation between the Balmer decrement and SFR\({}_{\rm H_{\alpha}}\) is driven mostly by the Balmer decrement being used to dust-correct the H\({}_{\alpha}\) flux entering SFR\({}_{\rm H_{\alpha}}\), as well as by H\({}_{\alpha}\) also appearing in the Balmer decrement. The green bars with stars show the PCCs using galaxies with inclination \(i<45^{\circ}\), and the blue bars with circles represent the PCCs using the galaxies with no selection on their inclination.
For the sample with no selection on its inclination, the strongest correlation with the Balmer decrement is with the stellar mass and then the inclination, which is consistent with the RF results shown in Figure 1. The PCCs for the galaxies with inclination \(i<45^{\circ}\) show that the stellar mass is still the most strongly correlated parameter with the Balmer decrement, now followed by the metallicity and the velocity dispersion, with the inclination being much less correlated compared with the sample with no selection on its inclination. Hence, these results also support the selection on the inclination of the local galaxies, allowing the effects of the inclination on the Balmer decrement to be controlled for.
We additionally see that the PCCs between the Balmer decrement and all parameters except the SFR derived from D4000 are positive for both samples, implying a positive correlation between the Balmer decrement and those parameters. The PCC between the Balmer decrement and the SFR derived from D4000, however, is negative for the sample with no selection on its inclination, and almost zero for the sample with inclination \(i<45^{\circ}\), implying any correlation is driven by the inclination, or some other parameter which cross-correlates them, and this effect is reduced when the inclination is controlled for.
#### 6.1.4 Establishing the Analytical Dependence of the Balmer Decrement on Galaxy Properties
Combining the results from the RF and PCC analysis, we see that the stellar mass is by far the most important parameter in determining the Balmer decrement. Both the metallicity and the velocity dispersion have significant importance, however the two analysis methods do not agree on the order of their importance, with RF analysis ranking the velocity dispersion slightly higher than the metallicity, and the PCC analysis ranking the metallicity and the velocity dispersion at similar levels. In this section we investigate the analytical dependence of the Balmer decrement on these important galaxy properties.
To quantitatively investigate how the Balmer decrement depends on these galactic parameters, we created track plots of the galaxies, with the stellar mass on the x-axis, the Balmer decrement on the y-axis, and the tracks binned in SFR\({}_{\rm D4000}\), metallicity and nebular velocity dispersion. The tracks represent the stellar mass vs. Balmer decrement relation in constant bins of the third variable. We chose the stellar mass to be on the x-axis since it showed the strongest correlation with the Balmer decrement from the analysis in the previous sections.
These track plots are shown in Figure 4, which illustrates that, while there is always a strong Balmer decrement dependence on the stellar mass, at a fixed stellar mass the Balmer decrement depends most on metallicity and least on SFR. The strengths of the Balmer decrement vs. metallicity, vs. SFR and vs. velocity dispersion dependencies are stellar mass-dependent themselves. The dependence on velocity dispersion is strongest at higher mass, and this trend is inverted for metallicity, with the dependence largest at low masses. There is negligible dependence of the Balmer decrement on the SFR at all masses investigated here, and so it is not considered further in the analysis below.
The interdependencies of these galactic parameters are clearly shown here. Following the methodology proposed in Mannucci et al. (2010), we attempted to reduce the dimensionality of the problem by rotating the stellar mass, metallicity and velocity dispersion parameters space such that this projection minimises the dispersion in the Balmer decrement. This projection reduces the dependence of the Balmer decrement to only one parameter, which we defined as the reduced mass \(\mu\),
\[\mu=\log{\rm M_{\star}}+\alpha\left[{\rm O/H}\right]+\delta\log\sigma_{100} \tag{7}\]
where M\({}_{\star}\) is in units of solar mass, and \(\sigma_{100}\), is the velocity dispersion (measured from H\({}_{\alpha}\)) normalised by 100 km/s. This definition and normalisation is maintained throughout the rest of this paper. Additionally, \(\alpha\) and \(\delta\) are parameters that should be determined to minimise the dispersion in the Balmer decrement. The minimisation method used is discussed further in Appendix E, where the determined values of \(\alpha\) and \(\delta\) which minimise the dispersion in the Balmer decrement are 3.67 and 2.96 respectively.
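The exact minimisation procedure is given in Appendix E; a minimal sketch of one plausible implementation, which minimises the scatter of the Balmer decrement about a polynomial fit in \(\mu\), is shown below. The mock input arrays and the choice of a 3rd-order polynomial are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)  # mock inputs, for illustration only
log_mstar = rng.uniform(9.0, 11.5, 2000)
oh = 0.2 * (log_mstar - 10.0) + rng.normal(0.0, 0.05, 2000)
log_sigma100 = 0.3 * (log_mstar - 10.0) + rng.normal(0.0, 0.05, 2000)
balmer = 3.0 + 0.5 * (log_mstar - 10.0) + rng.normal(0.0, 0.1, 2000)

def dispersion(params, logm, oh, logsig, bd, deg=3):
    """Scatter of the Balmer decrement about a polynomial fit in mu (Eq. 7)."""
    alpha, delta = params
    mu = logm + alpha * oh + delta * logsig
    residuals = bd - np.polyval(np.polyfit(mu, bd, deg), mu)
    return residuals.std()

# On the real SDSS sample the optimum approaches (alpha, delta) ~ (3.67, 2.96)
result = minimize(dispersion, x0=[1.0, 1.0],
                  args=(log_mstar, oh, log_sigma100, balmer),
                  method="Nelder-Mead")
alpha_best, delta_best = result.x
```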
Using these minimisation parameters, we recreated Figure 4 but replacing the stellar mass with the reduced mass \(\mu\) at minimum dispersion, as shown in Figure 5. The tracks in both Figure 5 (a) and (b) are much less spread in Balmer decrement, at a given value of the reduced mass, compared to the tracks in Figure 4. This shows the dependence of the Balmer decrement on the metallicity and velocity
Figure 3: Partial correlation coefficients (PCC) of the Balmer decrement with the different galaxy parameters for local galaxies (SDSS). Green bars with stars show the PCC values for galaxies with inclination \(i<45^{\circ}\), and blue bars with circles show the PCC values for galaxies with all inclinations. The stellar mass is most strongly and intrinsically correlated with the Balmer decrement, in agreement with the RF results, followed by the inclination for the sample of galaxies with no selection on their inclination. Selecting for the inclination \(i<45^{\circ}\) greatly reduces the PCC value of the Balmer decrement with inclination, and in this case the second strongest Balmer decrement correlation is with metallicity and velocity dispersion, while the correlation with SFRD4 (SFR derived from D4000), becomes insignificant.
dispersion is greatly reduced, indicating that our minimisation analysis reduced the dimensionality of our problem so that the Balmer decrement depends on one parameter, \(\mu\).
To further illustrate the effect of using the reduced mass over the stellar mass, we ran the RF regression on the galaxies with inclination \(i<45^{\circ}\), whilst including the reduced mass \(\mu\) in the analysis as an extra parameter, to test whether \(\mu\) is now the most important parameter compared to the other global galactic parameters. We also included the un-minimised parameter \(\mu_{0}=\log\mathrm{M_{\star}}+[\mathrm{O/H}]+\log\sigma_{100}\) to test whether the importance of \(\mu\) is simply due to the RF picking up on the linear combination of the other parameters, or whether the minimisation has had an effect. The importances of each of the parameters are shown in Figure 6, showing that the importance of \(\mu\) in determining the Balmer decrement is dominant, with all other parameters, including \(\mu_{0}\), having negligible relative importance. Hence, the reduced mass encapsulates the majority of the importance of all the other galactic parameters considered in this work.
To show how well this minimisation worked on the galaxies themselves, we plot the galaxy contours with the Balmer decrement on the y-axis against (a) the stellar mass and (b) the reduced mass, shown in Figure 7. In both plots, the mean and error on the mean of the Balmer decrement are shown in blue, and the median and the 84-16th percentile range are shown in green, each calculated in bins 0.15 dex wide of the x-axis parameter. It can be seen that the dispersion or 84-16th percentile range of the Balmer decrement of the galaxies is reduced when moving from having the stellar mass on the x-axis in (a) to the reduced mass on the x-axis in (b). The unweighted average 84-16th percentile range across all the bins in the x parameter was calculated for each plot, and is shown as the red error bar on the plots, giving 0.906 for (a) and 0.849 for (b). This shows the effect of the minimisation, reducing the percentile range by 6.3%. This result indicates that part of the dispersion in the Balmer decrement vs. stellar mass diagram is not intrinsic, but a consequence of the secondary dependences on metallicity and velocity dispersion. Once these dependences are taken into account by introducing \(\mu\), the scatter is reduced. The residual scatter is likely due to diverse evolutionary processes within the galaxies, although it may also partly be due to observational errors. The reduction in the percentile range is small, although this is not surprising since the stellar mass accounted for the majority of the variation in the Balmer decrement, so accounting for the less important (but still relevant) parameters would have a small, non-negligible effect.
In order to provide a functional form of the Balmer decrement vs. stellar mass and vs. reduced mass dependence, we fit a 3rd order polynomial to the mean of the Balmer decrement in each of those parameter spaces, providing the following fits:
Figure 4: Balmer decrement as a function of stellar mass in bins of (a) SFR (calculated via D4000), (b) metallicity, and (c) velocity dispersion. These tracks confirm that, at a fixed stellar mass, the Balmer decrement depends on metallicity and velocity dispersion, but has negligible dependence on SFR. The contours in black display the density of galaxies in each diagram, with the outermost contour containing 95% of the galaxies.
\[{\rm H}_{\alpha}/{\rm H}_{\beta}= (-0.193\pm 0.067)\log({\rm M}_{\star}[{\rm M}_{\odot}])^{3}\] \[+(6.097\pm 2.059)\log({\rm M}_{\star}[{\rm M}_{\odot}])^{2}\] \[+(-63.163\pm 21.236)\log({\rm M}_{\star}[{\rm M}_{\odot}])\] \[+(218.705\pm 72.946) \tag{8}\]
and
\[{\rm H}_{\alpha}/{\rm H}_{\beta}= (-0.027\pm 0.005)\mu^{3}+(0.848\pm 0.160)\mu^{2}\] \[+(-8.385\pm 1.608)\mu+(29.856\pm 5.368) \tag{9}\]
and the resulting fits are shown in Figure 7 as the orange lines.
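For convenience, the two fits can be evaluated directly from the best-fit coefficients above; a minimal sketch (the function names are ours) is:

```python
import numpy as np

# Best-fit coefficients of Equations 8 and 9, highest order first
COEFF_MSTAR = [-0.193, 6.097, -63.163, 218.705]
COEFF_MU = [-0.027, 0.848, -8.385, 29.856]

def balmer_from_mass(log_mstar):
    """Mean Balmer decrement as a function of log(M_star/M_sun) (Eq. 8)."""
    return np.polyval(COEFF_MSTAR, log_mstar)

def balmer_from_mu(mu):
    """Mean Balmer decrement as a function of the reduced mass mu (Eq. 9)."""
    return np.polyval(COEFF_MU, mu)

print(balmer_from_mass(10.5))  # ~4.3 at log(M_star) = 10.5
```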
In order to further demonstrate that the reduced mass \(\mu\) encapsulates all the variation of the Balmer decrement due to indirect correlations with the other galactic parameters (metallicity and velocity dispersion), we recreate Figure 2 by plotting both the metallicity and velocity dispersion now as functions of the reduced mass \(\mu\), colour-coded by the Balmer decrement, as is shown in Figure 8. When replacing the stellar mass with the reduced mass \(\mu\), the dependence of the Balmer decrement on both the metallicity and the velocity dispersion (whilst controlling for the reduced mass) is greatly reduced, as can be seen by eye or by considering the arrow angles, which each rotate to within about \(20^{\circ}\) of horizontal (\(90^{\circ}\)). Hence this projected parameter space encapsulates the dependence of the Balmer decrement in a single parameter.
#### 6.1.5 Summary
We have identified the most important parameters in determining the Balmer decrement through Random Forest and Partial Correlation Coefficient analysis, finding the stellar mass to be the most important, followed by metallicity and nebular velocity dispersion (once the dependence on the inclination, \(i\), is removed by selecting galaxies with \(i<45^{\circ}\)).
The strong dependence on stellar mass is in line with the expectation from Equation 4, where this dependence primarily comes from the MGMS. The dependence on metallicity is also in line with Equation 4, where this dependence comes from the relationship between the dust-to-gas ratio and the metallicity, since DGR = DZR\(\times Z_{g}\).
The additional dependence on the nebular velocity dispersion was not expected. This may be due to the nebular velocity dispersion tracing the gravitational potential of the galaxy, which sets the capability of the galaxy to retain dust against the strong radiation pressure on the dust, and to retain metals against metal loss via winds and gas outflows (Chisholm et al., 2015).
Figure 5: Balmer decrement versus reduced mass \(\mu=\log{\rm M}_{\star}+\alpha[{\rm O}/{\rm H}]+\delta\log\sigma_{100}\), where \(\alpha=3.67\) and \(\delta=2.96\), in bins of metallicity (left) and velocity dispersion (right). The fact that, at a fixed \(\mu\), there is little/no dependence of the Balmer decrement on either metallicity or velocity dispersion indicates that the reduced mass has captured well these secondary dependences. In particular, compared to the track plots in Figure 4, the dependence of the Balmer decrement on the colour-coded parameters is considerably reduced. The contours in black display the density of galaxies in these diagrams, with the outermost contour containing 95% of the galaxies.
Figure 6: Importance of the various galactic parameters and the reduced mass \(\mu\) in determining the Balmer decrement, for local galaxies (SDSS) with inclination \(i<45^{\circ}\), as inferred with the RF analysis. The reduced mass \(\mu\) is now by far the most important parameter, reducing the importance of the stellar mass, metallicity and velocity dispersion to be negligible, meaning all importance of these three galactic parameters are well incorporated in the reduced mass \(\mu\) for what concerns their role in determining the Balmer decrement. The un-minimised parameter \(\mu_{0}=\log{\rm M}_{\star}+[{\rm O}/{\rm H}]+\log\sigma_{100}\) is also included to show the importance of the minimised \(\mu\) is not simply due to the RF regressor picking up the linear combination of the other parameters. The difference between the average MAE and MSE of the train and test samples being so small implies no overfitting. Parameter R is the random variable and SFR\({}_{\rm D4}\) is the SFR derived from D4000.
Additionally, our analysis has determined that the SFR derived from H\({}_{\alpha}\) only appears important because the H\({}_{\alpha}\) flux used in calculating this SFR is itself dust-corrected, and that the SFR derived from D4000 is a more valid tracer of the SFR in this work. This tracer of the SFR is shown to be unimportant in determining the Balmer decrement when compared to the stellar mass, metallicity and velocity dispersion.
By combining these important parameters into the reduced mass \(\mu\), we have been able to collapse the majority of the dependence of the Balmer decrement to this parameter. This will allow for much easier comparison with other samples of galaxies in the next section.
Quantitatively, the dependence on stellar mass and metallicity is in the right direction, but does not match exactly the expectations from the simple predictions in Equation 4. To better predict
Figure 8: Metallicity and velocity dispersion as a function of reduced stellar mass \(\mu\), colour-coded by Balmer decrement (i.e. 2D histograms in which the Balmer decrement is the dependent variable), for local galaxies (SDSS). The grey arrows denote the direction in which the Balmer decrement has the largest gradient, determined using the PCCs. The colour gradients and arrows clearly indicate an even stronger dependence of the Balmer decrement on reduced stellar mass \(\mu\), relative to the dependence on the stellar mass seen in Fig. 2, while the dependence of the Balmer decrement on metallicity and velocity dispersion is now greatly reduced with respect to Fig. 2 (the residual dependence is due to the fact that these diagrams explore the dependence on only two quantities at a time, hence they pick up the residual dependence on all other quantities). The black contours indicate the density of the galaxies in each diagram, with the outermost contour containing 95% of the galaxies. The contours in (b) do not join due to the sharp cut in \(\sigma_{\rm H_{\alpha}}\).
Figure 7: Contours showing the distribution of the Balmer decrement of local galaxies (SDSS) as a function of (a) stellar mass and (b) reduced mass. The red error bars represent the average 16-84 percentile range of the Balmer decrement. The blue line represents the mean Balmer decrement in bins of 0.15 dex in the x-axis quantity; the shaded blue region represents the error on the mean in each bin. The green lines represent the median Balmer decrement, and the green shaded region the 84-16th percentile range in each bin. The orange line represents the 3rd order polynomial fit to the mean. The outermost contour contains 95% of the galaxies.
these observations, more advanced modelling would be required, including a comparison with numerical simulations and potentially considering a geometrical factor dependent on mass, which is currently assumed constant. Zuckerman et al. (2021) argue that the dust attenuation is related to the thickness of the galaxy, and since a galaxy with higher stellar mass will have a greater thickness, it is likely that the disc thickness contributes some of the correlation between stellar mass and dust attenuation we observe.
### Comparison of Samples at High Redshifts
Although the focus of this paper is primarily to investigate the scaling relations between dust attenuation (traced by the Balmer decrement) and galaxy properties in the local universe, it is interesting to explore also whether such relations hold at high redshift. Samples at high redshift have much lower statistics and higher uncertainties, hence the level of analysis performed in this paper on the local sample is certainly not possible on high redshift samples, at least not yet. However, we can explore whether their properties are consistent with the local findings.
Specifically, we investigate whether the scaling relations given by Equations 8 and 9 that we have found in the local Universe between dust attenuation and other global galactic properties hold at high redshift. We do this by comparing the observed values of the Balmer decrement from the galaxies in the KLEVER and MOSDEF surveys (\(z\sim 1-3\)) with the galaxies in the SDSS survey in stellar mass space and in reduced mass space, which should encapsulate all of the parameters important in determining the Balmer decrement. As mentioned previously, new samples at even higher redshift from NIRSpec-JWST surveys do not yet have the information required to perform these tests.
Plots comparing the local and higher redshift galaxies are shown in Figure 9, where the Balmer decrement is plotted against stellar mass and also against reduced mass for the galaxies in the KLEVER survey (a) and (b), and the galaxies in the MOSDEF survey (c) and (d). Here the mean and error on the mean were calculated in order to focus on the primary dependences. For both the galaxies in KLEVER and MOSDEF this shows the Balmer decrements to overlap with those of the local galaxies in both the stellar mass and reduced mass space. These findings are consistent with no redshift evolution of these relations. For the simple dependence on mass, this finding agrees with the results from Shapley et al. (2022) for the galaxies in MOSDEF, and with the results from Shapley et al. (2023) at even higher redshifts.
The Balmer decrement vs. stellar mass relationship for the galaxies in the MOSDEF survey seems to flatten slightly compared to galaxies in the SDSS survey, which can be explained by the fact that the MOSDEF survey is complete for stellar masses less than \(10^{10.5}\mathrm{M}_{\odot}\), but incomplete for dusty red star-forming galaxies with stellar mass above \(10^{10.5}\mathrm{M}_{\odot}\). Hence the dustiest massive galaxies might be missed, causing the Balmer decrement vs. stellar mass relationship to flatten at high stellar masses.
This lack of evolution in the stellar mass space could be due to a truly redshift-invariant Balmer decrement vs. stellar mass relationship. As suggested by Shapley et al. (2022), a non-evolving relation can arise due to offsetting effects from the simultaneous evolution of gas mass surface density, DGR, metallicity, dust geometry, and/or dust mass absorption coefficients. Our finding of no evolution even in the Balmer decrement relation with _reduced_ mass \(\mu\) makes the explanation of a combination of different evolutionary effects canceling each other unlikely. Our findings are more supportive of a scenario in which the dust production mechanism and associated distribution in galaxies do not change with cosmic time up to \(z\sim 2-3\), i.e. the multi-dimensional relationship between dust attenuation and the galactic quantities does not change with cosmic epoch; galaxies simply populate different regions of this multidimensional surface at different cosmic epochs.
We conclude, however, by warning that the comparison with high redshift galaxies is still plagued by the large dispersion and poor statistics, which makes even the errors on the means still relatively large (as highlighted in Figure 9), and a more thorough exploration requires much larger samples, which may become available with the next generation near-IR MOS spectrographs (Maiolino et al., 2020).
## 7 Conclusion
In this work we have investigated which galactic parameters are most important in determining the dust attenuation in galaxies, as traced by the Balmer decrement, and explored how this varies at different cosmic epochs by comparing local galaxies (SDSS) with samples at \(z\sim 1-3\) (KLEVER and MOSDEF).
We summarise our results as follows:
* Partial Correlation Coefficient (PCC) and Random Forest (RF) analysis on local (SDSS) galaxies show that the stellar mass is the most important parameter in determining the dust attenuation traced by the Balmer decrement. Metallicity and nebular velocity dispersion are also important but less so than the stellar mass.
* Galaxy inclination obviously has an important effect on the observed attenuation. However, its effect on these results was controlled for by selecting galaxies with inclination \(i<45^{\circ}\); with this selection the Balmer decrement had negligible dependence on the inclination in both the PCC and RF analysis.
* The dependence of the Balmer decrement on SFR traced by H\({}_{\alpha}\) is driven by the fact that H\({}_{\alpha}\) is also included in the Balmer decrement, and by the fact that the Balmer decrement is used to dust-correct the H\({}_{\alpha}\) flux; hence the correlation between the Balmer decrement and the SFR inferred from H\({}_{\alpha}\) is spurious. No dependence of the Balmer decrement on SFR is found if the latter is inferred from the D4000.
* The dispersion of the Balmer decrement in the rotated parameter space defined by the reduced mass, \(\mu=\log\mathrm{M}_{\star}+3.67\mathrm{[O/H]}+2.96\log\sigma_{100}\), is reduced compared to the dispersion in stellar mass space. This indicates that the variation in the Balmer decrement due to the metallicity and velocity dispersion are captured by this reduced mass.
* The dependence of the Balmer decrement on the stellar mass is expected from the molecular gas main sequence relation (M\({}_{H2}\) vs M\({}_{\star}\)). The dependence on metallicity is also expected from the dust-to-gas ratio. The dependence on velocity dispersion was not expected and may trace the capability of more massive systems (traced by the higher velocity dispersion) to better retain dusty clouds against radiation-driven pressure outflows.
* We observe no significant evolution of the relationship between the Balmer decrement and the stellar mass up to \(z\sim 1-3\). Hence the dust attenuation vs stellar mass relationship does not evolve up to this redshift. We additionally see no significant evolution of the relationship between the Balmer decrement and the reduced mass, \(\mu\), indicating that the scaling relations found locally capture the dust attenuation properties also of distant galaxies.
This work can be greatly expanded at high redshift with the next generation, large multiplexing near-IR spectrographs (e.g. MOONS, Cirasuolo et al., 2020; Maiolino et al., 2020), which will provide
spectra for several hundred thousand galaxies, i.e. with statistics similar to the SDSS, around cosmic noon (z\(\sim\)1-3).
This work can also be extended to higher redshifts using data from JWST's NIRSpec surveys and NIRCam slitless mode. This exploration has already started for what concerns the dependence of the Balmer decrement as a function of stellar mass (Shapley et al., 2023), but can be expanded further to also investigate the relation with the reduced mass. JWST spectroscopic surveys are expected to detect thousands of galaxies out to z\(\sim\)7, for which both H\({}_{\alpha}\) and H\({}_{\beta}\) will be available. Hence it will be possible to investigate the Balmer decrement across a very large range of redshifts and track the evolution of the dust attenuation versus reduced mass relation.
## 8 Acknowledgements
GM and RM acknowledge support from the ERC Advanced Grant 695671, 'QUENCH' and from the Science and Technology Facilities
Figure 9: Balmer decrement as a function of stellar mass (left) and reduced mass (right) comparing local galaxies (SDSS) with galaxies at high redshift (z\(\sim\)1–3) from the KLEVER survey (top) and the MOSDEF survey (bottom). The distribution of local galaxies in SDSS is shown by the black contours, where the outermost contour contains 95% of the galaxies, and the blue line is the mean Balmer decrement in each 0.15 dex wide bin in stellar mass or reduced mass with at least 25 galaxies present. Shaded blue regions represent the error on the mean in each bin. High-z galaxies in the KLEVER and MOSDEF surveys are shown in orange. The purple segments show the means and the purple shaded regions the errors on the mean for each 0.5 dex wide bin in stellar mass and reduced mass. The green error bars represent the average 16-84 percentile ranges and the red error bars represent the median error in the Balmer decrement measurement for the high-z galaxies. Panels (a) and (b) show that there is no significant evolution between the Balmer decrements of the KLEVER galaxies and the local SDSS galaxies in both stellar mass and reduced mass space. Similarly, panels (c) and (d) show that there is no significant evolution between the Balmer decrements of the galaxies from MOSDEF and the local SDSS galaxies in both stellar mass and reduced mass space. These results indicate that there is no evolution of the relation between the Balmer decrement versus stellar mass and of the Balmer decrement versus reduced mass up to a redshift of z\(\sim 1-3\). The slight flattening of the Balmer decrement stellar mass relation at high masses (M\({}_{\star}\)\(>\)10\({}^{10.5}\)M\({}_{\odot}\)) might be due to the MOSDEF survey possibly missing a portion of the massive dusty red star-forming galaxies.
Council (STFC). RM also acknowledges funding from a research professorship from the Royal Society.
## 9 Data Availability
The MPA-JHU catalogue is publicly available at [https://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx?name=galSpecInfo](https://skyserver.sdss.org/dr12/en/help/docs/tabledesc.aspx?name=galSpecInfo). The SDSS morphological catalogue is publicly available at [https://cdsarc.cds.unistra.fr/viz-bin/cat/J/ApJS/196/11](https://cdsarc.cds.unistra.fr/viz-bin/cat/J/ApJS/196/11). The MOSDEF catalogue is publicly available at [https://mosdef.astro.berkeley.edu/for-scientists/data-releases/](https://mosdef.astro.berkeley.edu/for-scientists/data-releases/). The KLEVER data is publicly available and attached to its survey paper Curti et al. (2020).
|
2306.17515 | Mass and Shape Determination of Optically Levitated Nanoparticles | When introducing a nanoparticle into an optical trap, its mass and shape are
not immediately apparent. We combine a charge-based mass measurement with a
shape determination method based on light scattering and an analysis of the
damping rate anisotropy, all on the same set of silica nanoparticles, trapped
using optical tweezers in vacuum. These methods have previously only been used
separately, and the mass determination method has not been applied to
asymmetric particles before. We demonstrate that the combination of these
classification techniques is required to distinguish particles with similar
mass but different shape, and vice versa. The ability to identify these
parameters is a key step for a range of experiments on precision measurements
and sensing using optically levitated nanoparticles. | Bart Schellenberg, Mina Morshed Behbahani, Nithesh Balasubramanian, Ties H. Fikkers, Steven Hoekstra | 2023-06-30T10:02:55Z | http://arxiv.org/abs/2306.17515v2 | # Mass and shape determination of optically levitated nanoparticles
###### Abstract
When introducing a nanoparticle into an optical trap, its mass and shape are not immediately apparent. We combine a number of methods to determine the mass and shape of trapped nanoparticles, which have previously only been used separately. We demonstrate that the use of multiple classification techniques is in certain cases required to avoid incorrect or ambiguous results. The ability to identify these parameters is a key step for a range of experiments on precision measurements and sensing using optically levitated nanoparticles.
With a rapidly increasing number of developments over the recent years, levitated nanospheres have evolved into an exciting platform for innovative measurement opportunities and applications. Demonstrated applications span from the manipulation of microscopic biological systems [1; 2; 3; 4] to ultra-sensitive accelerometers and force-sensors [5; 6; 7; 8; 9], torque detectors [10; 11; 12; 13], hyper-fast mechanical rotors [14; 15; 16], and measurements on thermal diffusion [17; 18; 19; 20; 21]. Numerous proposals in recent years have explored the potential of using isolated nanometer-sized particles for probing gravitational waves [22; 23], to observe quantum gravity [24; 25; 26], to employ in dark matter scattering experiments [27; 28], or to trial quantum collapse models [29]. For numerous of these [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] and other [30; 31; 32; 33; 34] applications, knowing the precise mass and morphology of the levitated particles is essential. When introducing a particle into an optical tweezer however, its shape and mass are not immediately apparent. Particles from a monodisperse solution of spheres have been observed to regularly carry some non-negligible ellipticity [35], or they may aggregate to form composite structures [10].
While a number of methods have been demonstrated on an individual basis, typically capable of classifying a single property of the particles, a comprehensive correlative study of these techniques applied to a set of differently shaped and sized particles remains unexplored. In this work, we present the shape and mass determination for optically levitated silica nanoparticles of various sizes and shapes. We demonstrate that, in selected cases, the combination of multiple techniques is in fact required to discriminate between different particles. We manage to capture both single particles as well as aggregated structures in an optical tweezer trap by adjusting the concentration of a monodisperse solution of nanospheres [10]. As an illustration of some of the typical shapes that we encounter, Fig. 1 shows a scanning electron microscope picture of our solution. To unambiguously determine the mass and shape of the optically levitated nanoparticles we combine a number of in situ classification methods.
Specifically, we extend the mass determination that was previously demonstrated for nanospheres [36; 37] to smaller sizes and to composite nanoparticles. We combine this method with the shape determination from a non-isotropic damping rate due to residual background gas [10; 38]. In addition, by employing a secondary probe laser, we find the particle's morphology through its angle-resolved Rayleigh scattering profile [35]. We discuss the reliability of each method individually as well as their combined results. Our approach does not rely on precise calculations of the moment of inertia or the polarisability of the particle.
The structure of this paper is as follows. After an introduction of the experimental setup, we present the methods and results from each individual classification technique used to determine the properties of a fixed set of particles. After that we correlate and discuss the combined results.
We combine a number of standard techniques [39] to optically trap nanoparticles in a controlled pressure environment. A schematic of our experimental setup is shown in Fig. 2. We use the light of a Coherent Nd:YAG Mephisto MOPA trapping laser (\(\lambda_{\text{trap}}=1064\,\text{nm}\)) and a spatial filter (SF) to obtain a Gaussian trapping beam. Using a
Figure 1: Two pictures revealing possible shapes of the (composite) nanoparticles used for this paper, taken using a scanning electron microscope. The black patches represent holes in the holey carbon substrate. The left picture shows some of the nanoparticles used for trapping, where each sphere has a diameter of \(142\pm 4\,\text{nm}\), according to the manufacturer. The right picture shows (for \(216\pm 6\,\text{nm}\) spheres) a close-up of a nanosphere, a dumbbell, and a triangle trimer.
half-waveplate (HWP) we control its polarisation. The light then enters the vacuum chamber through a window and is focused using a microscope objective (MO; NA = 0.8), establishing the dipole trap at its focus point. The strong and inhomogeneous electric field at the focus creates a gradient force on the dielectric nanoparticles, allowing them to be trapped at the focus. We evaporate silica (SiO\({}_{2}\)) nanospheres (diameters \(103\pm 6\), \(142\pm 4\) nm; microParticles GmbH) from an ethanol solution using a medical nebuliser [40] at ambient pressure near the trapping region. When a droplet from the cloud of ethanol, carrying a nanoparticle, passes through the optical trap, the particle can be caught. Once the particle is trapped, we reduce the pressure towards the operational domain between \(\sim 20\) and \(\sim 0.1\) mbar.
Surrounding the trapping region, we have placed two copper electrodes (CE), with which we create a controlled electric field at the location of the trapped particle. We placed a discharge electrode (QE) to be able to charge/discharge the trapped nanoparticle [36; 37]. The electrode setup is used in the mass measurement of the nanoparticle. To visualise the trapped nanoparticle, and to measure its angular Rayleigh scattering profile, we use the light of a probe diode laser (\(\lambda_{\mathrm{probe}}=660\) nm), and overlap this beam with the trapping beam using a dichroic mirror (DM). A fraction of the scattered light at \(\lambda_{\mathrm{probe}}\) is collected using a CMOS camera. To track the dynamics of the particle inside the trap, we collect the transmitted trapping light using a collection lens (CL), and guide it towards a series of beamsplitters. We first use 10% of the remaining light for the angular detection, by employing a polarising beamsplitter (PBS) and a differential photodiode (PD\(\uptheta\)), which is balanced by another HWP. The final 90% is used for the X and Y detection, which include a D-shaped mirror and a differential photodiode (PDX and PDY), to measure the spatial intensity differences within the beam profile.
For each particle whose data is represented in this paper, the same experimental procedure was conducted. A detailed description of this procedure is given in Appendix A of the supplementary text. The particle is essentially cleaned from possible residual water in the porous internal silica structure [37; 41; 42], and charged to about 6 to 10 charges in preparation for the mass measurement. Once ready, we perform the mass measurements and record the angular Rayleigh scattering profile in the harmonic regime at around 15 mbar. We then periodically record the particle's signal during a slow pump-down, to determine its damping rate as a function of pressure.
When the translational damping of the particle is sufficiently high (\(\gtrsim 3\) mbar), its motion is primarily described by the (single-sided) linear power spectral density (PSD) [43; 44]
\[S_{F}(\Omega)=\frac{k_{B}T}{m}\frac{\Gamma_{0}}{(\Omega_{0}^{2}-\Omega^{2})^{2 }+\Omega^{2}\Gamma_{0}^{2}}, \tag{1}\]
which can be directly obtained from the signals recorded by PDX and PDY. Here \(\Omega_{0}\) represents the natural oscillation frequency and \(\Gamma_{0}\) the damping rate of the nanoparticle due to the background gas, both in rad/s. The mass of the particle is denoted with \(m\), and \(k_{B}\) and \(T\) are the Boltzmann constant and the heat-bath temperature respectively. The subscript \(F\) is used to denote that Eq. 1 is driven by the thermal fluctuation force. Fig. 3a shows a typical translational PSD from a spherical nanoparticle. The cyclic frequencies of the X and Y channels are not degenerate due to some ellipticity in the optical trap [45]. We obtain the natural frequency \(\Omega_{0}\) and the damping rates \(\Gamma_{0}\) by fitting Eq. 1 to Fig. 3a, as sketched further below. To obtain
Figure 3: Typical power spectral densities (PSD) of the trapped nanoparticles. **(a)** The transverse (X,Y) motion of a spherical nanoparticle (particle 15 from Fig. 7). The dashed/dotted lines indicate the best-fit of Eq. 1; **(b)** PSDs of the signal recorded by PD\(\theta\) for three particles (15, 18, and 6 from Fig. 7); The additional peaks in both subfigures correspond to the Z, X, and Y motions of the particle. (See Appendix C of the supplementary text for further details.)
Figure 2: A schematic representation of the experimental setup. Details are presented in the main text.
an impression of the ellipticity of the particle's morphology, we also use the torsional PSD, recorded by PD\(\theta\). Typical torsional PSDs for three different particles are shown in Fig. 3b. The three particles were classified as a nanosphere, a dumbbell, and a triangle trimer, using the techniques described in this paper.
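A minimal sketch of the fit of Eq. 1, assuming the PSD has already been computed from the photodiode time trace (e.g. with Welch's method), could read as follows; the overall amplitude is left as a free parameter since the detector signal is uncalibrated.

```python
import numpy as np
from scipy.optimize import curve_fit

def psd_model(omega, amp, omega0, gamma0):
    """Thermally driven oscillator PSD of Eq. 1; amp absorbs k_B*T/m and
    the detector gain, so omega0 and gamma0 carry the physics of interest."""
    return amp * gamma0 / ((omega0**2 - omega**2)**2 + omega**2 * gamma0**2)

def fit_psd(omega, psd, omega0_guess, gamma0_guess):
    """Extract (amp, omega0, gamma0) and their 1-sigma errors from a PSD."""
    p0 = [psd.max() * omega0_guess**2 * gamma0_guess,
          omega0_guess, gamma0_guess]
    popt, pcov = curve_fit(psd_model, omega, psd, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```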
In linearly polarised light, the asymmetric susceptibility tensor of a non-spherical particle will introduce a torque to the system that causes the particle to align its major axis of polarisability with the polarisation of the trapping light. One of the consequences of this laser-induced alignment is that the measured damping rates in the X and Y channels become unequal. This anisotropy in the damping rates can be used as a measure of the asymmetry of the particle's shape [10; 38]. Fig. 4a shows the damping rates of a nanoparticle, which was classified as a dumbbell, as a function of the pressure. Fig. 4b shows the corresponding ratio \(\Gamma_{0}^{(y)}/\Gamma_{0}^{(x)}\), as well as that of a sphere and a triangle trimer. The nanodumbbell shows a high degree of asymmetry, with a ratio \(\Gamma_{0}^{(y)}/\Gamma_{0}^{(x)}\approx 1.27\), which is in agreement with that of a dumbbell with a length-to-diameter ratio close to 1.7 [10]. Meanwhile, the sphere (\(\Gamma_{0}^{(y)}/\Gamma_{0}^{(x)}\approx 1.03\)) and trimer (\(\Gamma_{0}^{(y)}/\Gamma_{0}^{(x)}\approx 1.11\)) appear to be much more symmetric in the XY plane of the setup, as is in agreement with simulated results [38], and they require additional classification techniques to be unambiguously distinguished from one another.
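Purely as an illustration, a nearest-reference lookup against the ratios quoted above could serve as a first, tentative shape label; the reference values come from the measurements in this work, but the classification-by-proximity rule itself is our own simplification.

```python
REFERENCE_RATIOS = {"sphere": 1.03, "triangle trimer": 1.11, "dumbbell": 1.27}

def tentative_shape(gamma_x, gamma_y):
    """Nearest reference value of gamma_y / gamma_x; ambiguous cases
    (e.g. sphere vs. trimer) still need the other classification methods."""
    ratio = gamma_y / gamma_x
    return min(REFERENCE_RATIOS, key=lambda k: abs(REFERENCE_RATIOS[k] - ratio))
```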
In addition to the asymmetric damping rates, we also record the angle-resolved Rayleigh scattering profile of the particle, which was recently demonstrated to detect asymmetries of down to a few nanometers in the particle's morphology in the XY plane of our setup [35]. To realise this procedure, we use the probe laser to illuminate the nanoparticle inside the optical trap at a different wavelength (\(\lambda_{\text{probe}}=660\,\text{nm}\)) and a lower power (\(\sim 2\,\text{mW}\)) than the trapping laser (\(\lambda_{\text{trap}}=1064\,\text{nm}\); \(\sim 330\,\text{mW}\)). A HWP is used to rotate the linear polarisation of the trapping laser, whilst keeping that of the probe laser fixed. The laser-induced optical alignment of an asymmetric particle then allows us to effectively rotate it about the longitudinal Z axis of our setup. The scattered light is detected using a 12-bit CMOS camera, from which we track the averaged intensity over a closed pixel region around the particle. To improve the signal-to-noise ratio, we image the nanoparticle slightly out of focus, such that we can adjust the gain of the camera without immediately saturating the pixels. A dichroic mirror is used to prevent the scattered light at the trapping wavelength from reaching the camera.
Theoretically, the expected scattering intensity can be obtained from the particle's susceptibility tensor following [35]
\[I(\theta)\propto\big{(}\chi^{(yy)}\big{)}^{2};\ \ \text{with}\ \ \mathbf{\chi}(\theta)=\mathbf{R}\mathbf{\chi}_{0}\mathbf{R}^{ \intercal}, \tag{2}\]
where \(\mathbf{\chi}_{0}\) represents the susceptibility tensor in the particle's eigenframe, and \(\mathbf{R}\) is the rotation matrix that maps to the laboratory frame. For the case of a dumbbell specifically, values for \(\chi_{0}^{(yy)}/\chi_{0}^{(xx)}\) have been computed [10], and suggest \(I_{\text{min}}/I_{\text{max}}\approx 0.75\) when the length-to-diameter ratio is close to 1.7. Fig. 5 shows the resulting data for three particles, which were classified as a nanosphere, a dumbbell, and a triangle trimer. The susceptibility tensor of the particle in Fig. 5a appears almost fully symmetric in the XY plane; however, the data reveal a small ellipticity of the particle, with \(I_{\text{min}}/I_{\text{max}}\approx 0.99\). In contrast, the particle in Fig. 5b shows a high degree of asymmetry between \(\chi_{0}^{(yy)}\) and \(\chi_{0}^{(xx)}\), with \(I_{\text{min}}/I_{\text{max}}\approx 0.72\), and most likely corresponds to a dumbbell. The particle in Fig. 5c, however, shows far less contrast, and could not be distinguished from a single elliptical nanosphere based on the scattering data alone. By including its mass and damping rate measurements, this particle was classified as a triangle trimer.
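For concreteness, Eq. 2 can be evaluated numerically as follows. This is a minimal sketch; the susceptibility values are illustrative placeholders rather than fitted results:

```python
import numpy as np

def scattered_intensity(theta, chi_xx, chi_yy):
    # Rotate the diagonal susceptibility tensor about the Z (beam) axis
    # and read off the yy component probed by the fixed probe polarisation.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    chi0 = np.diag([chi_xx, chi_yy, 1.0])  # zz plays no role in this geometry
    chi = R @ chi0 @ R.T
    return chi[1, 1] ** 2                   # Eq. 2, up to a global prefactor

theta = np.linspace(0.0, 2.0 * np.pi, 361)
I = np.array([scattered_intensity(t, 1.0, 0.86) for t in theta])
print(I.min() / I.max())  # ~0.74, comparable to the dumbbell value above
```

The contrast \(I_{\text{min}}/I_{\text{max}}\) depends only on the ratio \(\chi_{0}^{(yy)}/\chi_{0}^{(xx)}\), which is why it serves as a direct shape observable.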
The instantaneous inertial response of the nanoparticle to a well-defined externally applied force may be used to directly determine the mass of the particle. A protocol for the active control of the nanoparticle's electric charge has already been developed [37], and is also elaborated on in Appendix A of the supplementary text. We use the CE to harmonically drive the charged particle with a quasi-static electric field near its natural oscillation frequency. Because the electrical driving force is uncorrelated with the random Brownian fluctuation force responsible for Eq. 1, the PSD of this driven system can be written by simply appending
Figure 4: **(a)** The damping rate of a nanoparticle as a function of pressure, corresponding to particle 3 from Fig. 7; **(b)** The ratio \(\Gamma_{0}^{(y)}/\Gamma_{0}^{(x)}\) for a set of different particles, which were classified as a sphere, a dumbbell, and a triangle trimer. The particles correspond to 15, 3, and 6 from Fig. 7 respectively.
the additional term [36]
\[S_{M}(\Omega)=\frac{F_{0}^{2}\tau}{8m^{2}}\frac{\operatorname{sinc}\left(\frac{1}{ 2}(\Omega-\Omega_{M})\tau\right)}{(\Omega_{0}^{2}-\Omega^{2})^{2}+\Omega^{2} \Gamma_{0}^{2}}, \tag{3}\]
to Eq. 1. In this new term, \(F_{0}\) represents the amplitude of the applied sinusoidal force, \(\Omega_{M}\) the frequency of this force, and \(\tau\) the duration of the finite recorded signal used to compute the Fourier transform. The subscript \(M\) is used to denote that Eq. 3 results from the modulation force applied to the particle. Whilst Eq. 1 scales with the inverse of the particle's mass \(m\), Eq. 3 scales with the inverse of \(m^{2}\). Therefore, the ratio
\[\frac{S_{F}(\Omega_{M})}{S_{M}(\Omega_{M})}=\frac{8\Gamma_{0}k_{B}T}{F_{0}^{2} \tau}m, \tag{4}\]
may be used to extract the particle's mass.
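Assuming the fitted peak values of Eq. 1 and Eq. 3 are at hand, inverting Eq. 4 for the mass is a one-line computation. A minimal sketch follows; the numerical inputs are placeholders, not values from this experiment:

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant [J/K]

def particle_mass(S_F_peak, S_M_peak, F0, tau, gamma0, T=293.0):
    """Invert Eq. 4: m = (S_F / S_M) * F0^2 * tau / (8 * Gamma_0 * k_B * T).

    S_F_peak -- thermal PSD (Eq. 1) evaluated at the drive frequency
    S_M_peak -- height of the driven sinc peak (Eq. 3)
    F0       -- amplitude of the sinusoidal electric force [N]
    tau      -- duration of the recorded trace [s]
    gamma0   -- fitted damping rate [rad/s]
    """
    return S_F_peak / S_M_peak * F0**2 * tau / (8 * gamma0 * k_B * T)

# Placeholder numbers, only to show the call signature:
m = particle_mass(S_F_peak=2e-15, S_M_peak=1e-13, F0=5e-17,
                  tau=1.0, gamma0=2 * np.pi * 3e3, T=293.0)
print(f"mass = {m * 1e18:.2f} fg")  # 1 fg = 1e-18 kg
```

Since \(F_{0}=qE\), the uncertainty in the electric field discussed later propagates directly into the extracted mass.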
Fig. 6a and Fig. 6b show two examples of the PSDs of harmonically driven nanoparticles, which were classified as a sphere and a dumbbell. The inset in both figures shows a close-up of the PSD around the driving frequency \(\Omega_{M}\), revealing the sinc shape described by Eq. 3. The dashed lines show the corresponding fits to Eq. 1 for the full spectrum, and to Eq. 3 for the inset. Both particles were driven with exactly the same force, and the presented data were recorded at the same pressure (\(\sim 15\,\)mbar). Nonetheless, it can be seen that Fig. 6b has a lower peak and a smaller overall width than Fig. 6a, the latter of which corresponds to a lower damping rate \(\Gamma_{0}\). Both properties indicate that Fig. 6b was taken from a heavier particle than Fig. 6a. Below, in Fig. 6c, the resulting mass is shown for both particles in femtograms. To improve our statistics, we measure each particle for a series of driving frequencies \(\Omega_{M}\) around the particle's natural frequency \(\Omega_{0}\). From the results it can be seen that the mass of the dumbbell is about twice that of a single sphere.
We will now consider the combined results of the damping rate, scattering, and mass measurements. Fig. 7 shows an overview of a set of 18 different nanoparticles. For each particle we have performed the three classification methods, which leads to the categorisation indicated at the bottom of the figure. The horizontal dashed lines represent the average result and the standard deviation of the spread for each classification category.
As a single classification technique, the mass measurements appear to show the best resolution. We classify particles whose masses are approximately twice that of the corresponding single spheres as dumbbells, and those with approximately three times that mass as trimers. We find that the mass of a nanoparticle is typically somewhat higher than the manufacturer's specifications. One likely reason for this is the presence of residual water in the silica structure [41; 42; 37]. Uncertainties in the mass density of the nanoparticles also play a role [42; 37]. (See also Appendix A in the supplementary text for more details.) Our results on the mass determination are in line with those obtained in other publications [36; 37].
The measurements of the scattering profile and the damping rates do not show a significant difference between nanoparticles composed of \(103\,\)nm or \(142\,\)nm
Figure 5: Measured Rayleigh scattering profiles for several different particles. Each particle shown in this figure consisted of a number of nanospheres with an individual size of \(d=103\pm 6\,\)nm. The data were taken from particles 1, 3, and 6 from Fig. 7, respectively.
Figure 6: Mass data for two particles. **(a)** and **(b)** show the PSDs of two harmonically driven particles, alongside an enlarged view of the driving peak in the inset, following Eq. 3. The dashed lines in both figures represent the best fits to Eq. 1 and Eq. 3. In **(b)** one may observe that part of the torsional signal has leaked into the data starting around \(180\,\)kHz. **(a)** and **(b)** correspond to particles 15 and 17 in Fig. 7. **(c)** shows the resulting mass measurements for both particles, measured for a series of driving frequencies \(\Omega_{M}\) around the natural frequency \(\Omega_{0}\).
spheres, as these methods focus on the morphology of the particle. Nor does either method single-handedly provide a resolution that allows for an unambiguous distinction between single spheres, triangle trimers, and particles composed of four or more nanospheres (denoted as 'other' in Fig. 7). In this case, the combination of multiple classification techniques is required to classify the particle.
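As a hedged illustration of how the three observables could be combined, consider the following decision sketch. The threshold values are ours, chosen for illustration only; the paper does not prescribe explicit cut values:

```python
def classify(mass_ratio, damping_ratio, scatter_contrast):
    """Tentative shape label from the three observables.

    mass_ratio       -- particle mass / mean single-sphere mass
    damping_ratio    -- Gamma_0^(y) / Gamma_0^(x) from the PSD fits
    scatter_contrast -- I_min / I_max from the Rayleigh profile
    """
    if mass_ratio < 1.5:
        return "sphere"
    if mass_ratio < 2.5:
        # Dumbbells show strong anisotropy in both shape observables.
        if damping_ratio > 1.2 or scatter_contrast < 0.85:
            return "dumbbell"
        return "ambiguous: needs further measurement"
    if mass_ratio < 3.5:
        return "trimer"
    return "other"  # compositions of four or more spheres
```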
On a single-particle level, the measurements of the mass and shape using the three methods (PSD analysis, scattering anisotropy, and mass determination) are reproducible and show consistent results. When comparing multiple particles of the same category with each other, however, we see significant scatter of the data beyond the statistical error bars, which led us to perform an analysis of the main systematic effects that influence these measurements. Here we present the main conclusions, while the full details are given in the supplementary material.
Concerning the mass measurement, the accuracy with which we can determine the distance between the copper electrodes limits the accuracy with which we can quantify the mass of the particle. In our results this enters as a symmetric uncertainty of \(5.6\%\) in the value of the electric field. The fitting of the PSD, which is used to assess the morphology of the particle, is influenced by cross-talk between the different motional modes of the particle. At pressures below a few mbar, where damping by the background gas is reduced, the anharmonic part of the potential is explored, which reduces the quality of the fits. In the pressure range we use, we find that the damping rate of a given motional degree of freedom is overestimated by at most \(5\%\). Regarding the scattering anisotropy, we have evaluated detector linearity and alignment as potential systematic effects; both play a minor role.
We have combined a set of three classification techniques to determine the composition of a series of optically trapped silica nanoparticles. We observe a large variation in the shapes and sizes within a sample of nanoparticles. This is caused not only by the variation in the shape and mass of the individual spheres, but also by the way in which they combine to form composite particles. We have demonstrated that, in some cases, the combination of multiple classification techniques is required to reach an unambiguous conclusion on a particle's shape, size, and mass.
A detailed description of the measurement protocol, the data analysis (specifically for obtaining the damping rate \(\Gamma_{0}\)), and the consideration of systematic effects in all three classification techniques can be found in the supplementary text.
###### Acknowledgements.
We acknowledge support from Gert ten Brink, Leo Huisman and George Palasantzas. This project has received funding from NWO through NWA Startimpuls (400.17.608/4303).
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**Bart Schellenberg:** Conceptualisation (equal); Data curation (equal); Formal analysis (lead); Investigation (equal); Methodology (equal); Software (equal); Validation (equal); Visualisation (equal); Writing - original draft (equal); Writing - review & editing (equal). **Mina Morshed Behbahani:** Conceptualisation (equal); Data curation (equal); Investigation (equal); Methodology (equal); Software (equal); Validation (equal); Visualisation (equal); Writing - original draft (equal); Writing - review & editing (equal). **Nithesh Balasubramanian:** Conceptualisation (equal); Formal analysis (supporting); Investigation (equal); Methodology (equal); Software (equal); Writing - original draft (supporting); Writing - review & editing (equal). **Ties H. Fikkers:** Conceptualisation (equal); Investigation (equal); Methodology (equal); Writing - original draft (supporting); Writing - review & editing (equal). **Steven Hoekstra:** Conceptualisation (equal); Funding acquisition (lead); Investigation (equal); Methodology (equal); Validation (equal); Visualisation (equal); Writing - original draft (equal); Writing - review & editing (equal).

Figure 7: The combined results from the mass, damping rate, and scattering measurements, sorted by their classified categories. The data in each column correspond to the same particle. The horizontal dashed lines represent the average result per category (excluding "other", which represents compositions of four or more spheres), and the coloured band shows the standard deviation of the data points.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.17390 | Forward Flow for Novel View Synthesis of Dynamic Scenes | This paper proposes a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping. Existing methods often adopt a static NeRF to represent the canonical space, and render dynamic images at other time steps by mapping the sampled 3D points back to the canonical space with the learned backward flow field. However, this backward flow field is non-smooth and discontinuous, which is difficult to fit with commonly used smooth motion models. To address this problem, we propose to estimate the forward flow field and directly warp the canonical radiance field to other time steps. Such a forward flow field is smooth and continuous within the object region, which benefits the motion model learning. To achieve this goal, we represent the canonical radiance field with voxel grids to enable efficient forward warping, and propose a differentiable warping process, including an average splatting operation and an inpaint network, to resolve the many-to-one and one-to-many mapping issues. Thorough experiments show that our method outperforms existing methods in both novel view rendering and motion modeling, demonstrating the effectiveness of our forward flow motion modeling. Project page: https://npucvr.github.io/ForwardFlowDNeRF | Xiang Guo, Jiadai Sun, Yuchao Dai, Guanying Chen, Xiaoqing Ye, Xiao Tan, Errui Ding, Yumeng Zhang, Jingdong Wang | 2023-09-29T16:51:06Z | http://arxiv.org/abs/2309.17390v1 | # Forward Flow for Novel View Synthesis of Dynamic Scenes
###### Abstract
This paper proposes a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping. Existing methods often adopt a static NeRF to represent the canonical space, and render dynamic images at other time steps by mapping the sampled 3D points back to the canonical space with the learned backward flow field. However, this backward flow field is non-smooth and discontinuous, which is difficult to fit with commonly used smooth motion models. To address this problem, we propose to estimate the _forward flow_ field and directly warp the canonical radiance field to other time steps. Such a forward flow field is smooth and continuous within the object region, which benefits the motion model learning. To achieve this goal, we represent the canonical radiance field with voxel grids to enable efficient forward warping, and propose a differentiable warping process, including an average splatting operation and an inpaint network, to resolve the many-to-one and one-to-many mapping issues. Thorough experiments show that our method outperforms existing methods in both novel view rendering and motion modeling, demonstrating the effectiveness of our forward flow motion modeling. Project page: [https://npucvr.github.io/ForwardFlowDNeRF](https://npucvr.github.io/ForwardFlowDNeRF).
## 1 Introduction
Novel view synthesis (NVS) is a challenging and long-standing problem in computer vision and graphics, with many applications in virtual reality, augmented reality, data augmentation, and image editing. Recently, differentiable neural rendering [26, 30, 59] has been introduced into this area. In particular, the neural radiance field (NeRF) [26] has advanced this area significantly and attracted broad interest within a short time. NeRF [26] produces realistic images by representing the 3D world with a multi-layer perceptron (MLP), which maps the input 3D coordinates and 2D view direction to the target density and color.
While the original NeRF [26] can only model static scenes, a series of works extend the NeRF-based framework from static to dynamic scenes [9, 13, 19, 36, 51, 55, 32]. One of the promising directions is the use of a canonical space representation [15, 49, 36]. This representation designates one of the time steps as the canonical time and models the static scene with a canonical radiance field. To render images at other time steps, a deformation field is used to estimate the
Figure 1: **Comparison of backward flow and forward flow.** This figure shows an example of backward and forward flow changes. **(a)** An example of a dynamic scene. **(b)** As the bucket lifts up, different object points pass through the green point \(\mathbf{p}\), which therefore requires very different backward flows to map it back to the canonical space. **(d)** shows the norm changes of the backward flow, which is not smooth. **(c)** On the other hand, the forward flow of position \(\mathbf{q}\), which maps a fixed object point from the canonical space to other times, is smooth and continuous. **(e)** shows the norm changes of the forward flow.
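To make the contrast with forward flow concrete, the backward-warping scheme used by prior canonical-space methods can be sketched as follows. This is a minimal illustration with placeholder modules (`canonical_field` and `backward_flow` stand in for learned networks), not the implementation of any specific method:

```python
import torch

def query_dynamic_field(canonical_field, backward_flow, x, t):
    # Backward warping (prior work): for a sample point x at time t,
    # predict the offset that maps it back to the canonical frame,
    # then query the static canonical radiance field there.
    dx = backward_flow(torch.cat([x, t], dim=-1))  # backward flow at (x, t)
    sigma, rgb = canonical_field(x + dx)           # query canonical space
    return sigma, rgb
```

The forward-flow approach of this paper instead warps the canonical voxel grid to time \(t\) via average splatting, so the motion model only has to fit the smooth forward flow at points such as \(\mathbf{q}\).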